Sep 13 00:41:35.172625 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:41:35.172651 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:41:35.172659 kernel: BIOS-provided physical RAM map:
Sep 13 00:41:35.172665 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:41:35.172670 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:41:35.172676 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:41:35.172682 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 00:41:35.172688 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 00:41:35.172695 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:41:35.172701 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:41:35.172706 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:41:35.172712 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:41:35.172717 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:41:35.172723 kernel: NX (Execute Disable) protection: active
Sep 13 00:41:35.172731 kernel: SMBIOS 2.8 present.
Sep 13 00:41:35.172738 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:41:35.172744 kernel: Hypervisor detected: KVM
Sep 13 00:41:35.172750 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:41:35.172758 kernel: kvm-clock: cpu 0, msr 4119f001, primary cpu clock
Sep 13 00:41:35.172764 kernel: kvm-clock: using sched offset of 3131048526 cycles
Sep 13 00:41:35.172771 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:41:35.172777 kernel: tsc: Detected 2794.750 MHz processor
Sep 13 00:41:35.172783 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:41:35.172791 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:41:35.172797 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:41:35.172804 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:41:35.172810 kernel: Using GB pages for direct mapping
Sep 13 00:41:35.172816 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:41:35.172822 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:41:35.172829 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172835 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172841 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172848 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:41:35.172854 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172861 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172867 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172873 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:41:35.172879 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:41:35.172885 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:41:35.172891 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:41:35.172901 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:41:35.172908 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:41:35.172914 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:41:35.172921 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:41:35.172928 kernel: No NUMA configuration found
Sep 13 00:41:35.172934 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:41:35.172942 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 13 00:41:35.172948 kernel: Zone ranges:
Sep 13 00:41:35.172955 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:41:35.172962 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:41:35.172968 kernel: Normal empty
Sep 13 00:41:35.172975 kernel: Movable zone start for each node
Sep 13 00:41:35.172981 kernel: Early memory node ranges
Sep 13 00:41:35.172988 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:41:35.172994 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:41:35.173002 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:41:35.173011 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:41:35.173018 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:41:35.173024 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:41:35.173031 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:41:35.173037 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:41:35.173044 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:41:35.173051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:41:35.173057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:41:35.173064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:41:35.173083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:41:35.173090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:41:35.173096 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:41:35.173103 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:41:35.173109 kernel: TSC deadline timer available
Sep 13 00:41:35.173116 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:41:35.173122 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:41:35.173129 kernel: kvm-guest: setup PV sched yield
Sep 13 00:41:35.173136 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:41:35.173143 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:41:35.173150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:41:35.173157 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:41:35.173164 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 13 00:41:35.173170 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 13 00:41:35.173177 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:41:35.173183 kernel: kvm-guest: setup async PF for cpu 0
Sep 13 00:41:35.173190 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Sep 13 00:41:35.173196 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:41:35.173204 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:41:35.173211 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 13 00:41:35.173217 kernel: Policy zone: DMA32
Sep 13 00:41:35.173225 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:41:35.173232 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:41:35.173239 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:41:35.173246 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:41:35.173252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:41:35.173261 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 13 00:41:35.173267 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:41:35.173274 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:41:35.173280 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:41:35.173287 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:41:35.173294 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:41:35.173301 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:41:35.173307 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:41:35.173314 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:41:35.173322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:41:35.173329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:41:35.173338 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:41:35.173347 kernel: random: crng init done
Sep 13 00:41:35.173355 kernel: Console: colour VGA+ 80x25
Sep 13 00:41:35.173364 kernel: printk: console [ttyS0] enabled
Sep 13 00:41:35.173372 kernel: ACPI: Core revision 20210730
Sep 13 00:41:35.173381 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:41:35.173397 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:41:35.173406 kernel: x2apic enabled
Sep 13 00:41:35.173412 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:41:35.173422 kernel: kvm-guest: setup PV IPIs
Sep 13 00:41:35.173429 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:41:35.173435 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:41:35.173445 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 13 00:41:35.173451 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:41:35.173458 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:41:35.173465 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:41:35.173478 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:41:35.173485 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:41:35.173492 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:41:35.173513 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:41:35.173520 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:41:35.173527 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:41:35.173534 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:41:35.173541 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:41:35.173548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:41:35.173558 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:41:35.173565 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:41:35.173572 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:41:35.173579 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:41:35.173586 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:41:35.173593 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:41:35.173599 kernel: LSM: Security Framework initializing
Sep 13 00:41:35.173608 kernel: SELinux: Initializing.
Sep 13 00:41:35.173615 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:41:35.173622 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:41:35.173629 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:41:35.173636 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:41:35.173643 kernel: ... version: 0
Sep 13 00:41:35.173650 kernel: ... bit width: 48
Sep 13 00:41:35.173657 kernel: ... generic registers: 6
Sep 13 00:41:35.173663 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:41:35.173672 kernel: ... max period: 00007fffffffffff
Sep 13 00:41:35.173679 kernel: ... fixed-purpose events: 0
Sep 13 00:41:35.173685 kernel: ... event mask: 000000000000003f
Sep 13 00:41:35.173692 kernel: signal: max sigframe size: 1776
Sep 13 00:41:35.173699 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:41:35.173708 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:41:35.173716 kernel: x86: Booting SMP configuration:
Sep 13 00:41:35.173723 kernel: .... node #0, CPUs: #1
Sep 13 00:41:35.173732 kernel: kvm-clock: cpu 1, msr 4119f041, secondary cpu clock
Sep 13 00:41:35.173739 kernel: kvm-guest: setup async PF for cpu 1
Sep 13 00:41:35.173747 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Sep 13 00:41:35.173754 kernel: #2
Sep 13 00:41:35.173761 kernel: kvm-clock: cpu 2, msr 4119f081, secondary cpu clock
Sep 13 00:41:35.173768 kernel: kvm-guest: setup async PF for cpu 2
Sep 13 00:41:35.173774 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Sep 13 00:41:35.173784 kernel: #3
Sep 13 00:41:35.173791 kernel: kvm-clock: cpu 3, msr 4119f0c1, secondary cpu clock
Sep 13 00:41:35.173797 kernel: kvm-guest: setup async PF for cpu 3
Sep 13 00:41:35.173804 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Sep 13 00:41:35.173812 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:41:35.173819 kernel: smpboot: Max logical packages: 1
Sep 13 00:41:35.173826 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 13 00:41:35.173833 kernel: devtmpfs: initialized
Sep 13 00:41:35.173840 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:41:35.173847 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:41:35.173854 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:41:35.173861 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:41:35.173868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:41:35.173876 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:41:35.173883 kernel: audit: type=2000 audit(1757724094.596:1): state=initialized audit_enabled=0 res=1
Sep 13 00:41:35.173890 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:41:35.173897 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:41:35.173903 kernel: cpuidle: using governor menu
Sep 13 00:41:35.173910 kernel: ACPI: bus type PCI registered
Sep 13 00:41:35.173917 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:41:35.173924 kernel: dca service started, version 1.12.1
Sep 13 00:41:35.173934 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:41:35.173942 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 13 00:41:35.173949 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:41:35.173956 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:41:35.173963 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:41:35.173970 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:41:35.173977 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:41:35.173984 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:41:35.173990 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:41:35.173997 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:41:35.174005 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:41:35.174012 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:41:35.174019 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:41:35.174026 kernel: ACPI: Interpreter enabled
Sep 13 00:41:35.174033 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:41:35.174040 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:41:35.174047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:41:35.174054 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:41:35.174060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:41:35.174235 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:41:35.174315 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:41:35.174411 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:41:35.174421 kernel: PCI host bridge to bus 0000:00
Sep 13 00:41:35.174519 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:41:35.174589 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:41:35.174660 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:41:35.174725 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:41:35.174792 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:41:35.174856 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:41:35.174926 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:41:35.175023 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:41:35.175130 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:41:35.175212 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:41:35.175286 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:41:35.175371 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:41:35.175458 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:41:35.175549 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:41:35.175624 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 00:41:35.175731 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:41:35.175815 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:41:35.175900 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:41:35.175974 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:41:35.176049 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:41:35.176138 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:41:35.176222 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:41:35.176300 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 13 00:41:35.176395 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:41:35.176473 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:41:35.176547 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:41:35.176634 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:41:35.176708 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:41:35.176795 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:41:35.176872 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 13 00:41:35.176946 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:41:35.177044 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:41:35.177131 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:41:35.177141 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:41:35.177149 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:41:35.177156 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:41:35.177166 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:41:35.177176 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:41:35.177183 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:41:35.177190 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:41:35.177197 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:41:35.177203 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:41:35.177210 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:41:35.177217 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:41:35.177224 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:41:35.177231 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:41:35.177239 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:41:35.177246 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:41:35.177254 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:41:35.177261 kernel: iommu: Default domain type: Translated
Sep 13 00:41:35.177267 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:41:35.177347 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:41:35.177441 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:41:35.177514 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:41:35.177527 kernel: vgaarb: loaded
Sep 13 00:41:35.177534 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:41:35.177541 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:41:35.177548 kernel: PTP clock support registered
Sep 13 00:41:35.177555 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:41:35.177562 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:41:35.177568 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:41:35.177575 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:41:35.177582 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:41:35.177588 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:41:35.177597 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:41:35.177603 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:41:35.177610 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:41:35.177617 kernel: pnp: PnP ACPI init
Sep 13 00:41:35.177718 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:41:35.177729 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:41:35.177736 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:41:35.177743 kernel: NET: Registered PF_INET protocol family
Sep 13 00:41:35.177752 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:41:35.177759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:41:35.177766 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:41:35.177773 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:41:35.177780 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:41:35.177787 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:41:35.177794 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:41:35.177801 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:41:35.177807 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:41:35.177816 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:41:35.177891 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:41:35.177968 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:41:35.178052 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:41:35.178151 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:41:35.178220 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:41:35.178284 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:41:35.178294 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:41:35.178305 kernel: Initialise system trusted keyrings
Sep 13 00:41:35.178312 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:41:35.178319 kernel: Key type asymmetric registered
Sep 13 00:41:35.178326 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:41:35.178335 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:41:35.178344 kernel: io scheduler mq-deadline registered
Sep 13 00:41:35.178353 kernel: io scheduler kyber registered
Sep 13 00:41:35.178362 kernel: io scheduler bfq registered
Sep 13 00:41:35.178370 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:41:35.178379 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:41:35.178395 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:41:35.178402 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:41:35.178409 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:41:35.178416 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:41:35.178423 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:41:35.178430 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:41:35.178437 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:41:35.178529 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:41:35.178542 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:41:35.178611 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:41:35.178679 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:41:34 UTC (1757724094)
Sep 13 00:41:35.178746 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:41:35.178755 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:41:35.178762 kernel: Segment Routing with IPv6
Sep 13 00:41:35.178769 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:41:35.178776 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:41:35.178785 kernel: Key type dns_resolver registered
Sep 13 00:41:35.178792 kernel: IPI shorthand broadcast: enabled
Sep 13 00:41:35.178799 kernel: sched_clock: Marking stable (441304521, 209635587)->(821555496, -170615388)
Sep 13 00:41:35.178806 kernel: registered taskstats version 1
Sep 13 00:41:35.178813 kernel: Loading compiled-in X.509 certificates
Sep 13 00:41:35.178820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:41:35.178827 kernel: Key type .fscrypt registered
Sep 13 00:41:35.178834 kernel: Key type fscrypt-provisioning registered
Sep 13 00:41:35.178841 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:41:35.178849 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:41:35.178856 kernel: ima: No architecture policies found
Sep 13 00:41:35.178863 kernel: clk: Disabling unused clocks
Sep 13 00:41:35.178870 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:41:35.178877 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:41:35.178884 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:41:35.178891 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:41:35.178898 kernel: Run /init as init process
Sep 13 00:41:35.178904 kernel: with arguments:
Sep 13 00:41:35.178912 kernel: /init
Sep 13 00:41:35.178919 kernel: with environment:
Sep 13 00:41:35.178926 kernel: HOME=/
Sep 13 00:41:35.178933 kernel: TERM=linux
Sep 13 00:41:35.178939 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:41:35.178948 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:41:35.178958 systemd[1]: Detected virtualization kvm.
Sep 13 00:41:35.178965 systemd[1]: Detected architecture x86-64.
Sep 13 00:41:35.178974 systemd[1]: Running in initrd.
Sep 13 00:41:35.178981 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:41:35.178989 systemd[1]: Hostname set to .
Sep 13 00:41:35.178996 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:41:35.179003 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:41:35.179011 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:41:35.179018 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:41:35.179025 systemd[1]: Reached target paths.target.
Sep 13 00:41:35.179034 systemd[1]: Reached target slices.target.
Sep 13 00:41:35.179049 systemd[1]: Reached target swap.target.
Sep 13 00:41:35.179058 systemd[1]: Reached target timers.target.
Sep 13 00:41:35.179066 systemd[1]: Listening on iscsid.socket.
Sep 13 00:41:35.179085 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:41:35.179094 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:41:35.179102 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:41:35.179110 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:41:35.179118 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:41:35.179125 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:41:35.179133 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:41:35.179141 systemd[1]: Reached target sockets.target.
Sep 13 00:41:35.179149 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:41:35.179157 systemd[1]: Finished network-cleanup.service.
Sep 13 00:41:35.179166 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:41:35.179173 systemd[1]: Starting systemd-journald.service...
Sep 13 00:41:35.179181 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:41:35.179189 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:41:35.179197 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:41:35.179204 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:41:35.179214 systemd-journald[197]: Journal started
Sep 13 00:41:35.179288 systemd-journald[197]: Runtime Journal (/run/log/journal/b160b9113ee14e8cbd0e4a18b6177325) is 6.0M, max 48.5M, 42.5M free.
Sep 13 00:41:35.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.184867 systemd-modules-load[198]: Inserted module 'overlay'
Sep 13 00:41:35.223789 kernel: audit: type=1130 audit(1757724095.184:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.223810 systemd[1]: Started systemd-journald.service.
Sep 13 00:41:35.199800 systemd-resolved[199]: Positive Trust Anchors:
Sep 13 00:41:35.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.199811 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:41:35.233531 kernel: audit: type=1130 audit(1757724095.224:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.233547 kernel: audit: type=1130 audit(1757724095.228:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.199842 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:41:35.237258 kernel: audit: type=1130 audit(1757724095.233:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.202316 systemd-resolved[199]: Defaulting to hostname 'linux'.
Sep 13 00:41:35.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.225340 systemd[1]: Started systemd-resolved.service.
Sep 13 00:41:35.247893 kernel: audit: type=1130 audit(1757724095.237:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:35.229016 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:41:35.234286 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:41:35.237589 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:41:35.241758 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:41:35.249120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:41:35.257117 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:41:35.258472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:41:35.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.262085 kernel: audit: type=1130 audit(1757724095.258:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.262403 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:41:35.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.267843 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:41:35.268305 kernel: audit: type=1130 audit(1757724095.263:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:35.271339 systemd-modules-load[198]: Inserted module 'br_netfilter' Sep 13 00:41:35.273009 kernel: Bridge firewalling registered Sep 13 00:41:35.279178 dracut-cmdline[215]: dracut-dracut-053 Sep 13 00:41:35.281927 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:41:35.290103 kernel: SCSI subsystem initialized Sep 13 00:41:35.301464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:41:35.301522 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:41:35.301550 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:41:35.305474 systemd-modules-load[198]: Inserted module 'dm_multipath' Sep 13 00:41:35.306305 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:41:35.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.308712 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:41:35.313654 kernel: audit: type=1130 audit(1757724095.307:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.318545 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:41:35.323508 kernel: audit: type=1130 audit(1757724095.318:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.350112 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:41:35.366111 kernel: iscsi: registered transport (tcp) Sep 13 00:41:35.393462 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:41:35.393518 kernel: QLogic iSCSI HBA Driver Sep 13 00:41:35.425278 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:41:35.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.427905 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:41:35.475125 kernel: raid6: avx2x4 gen() 29929 MB/s Sep 13 00:41:35.492102 kernel: raid6: avx2x4 xor() 7965 MB/s Sep 13 00:41:35.509129 kernel: raid6: avx2x2 gen() 28318 MB/s Sep 13 00:41:35.526131 kernel: raid6: avx2x2 xor() 18982 MB/s Sep 13 00:41:35.543108 kernel: raid6: avx2x1 gen() 24522 MB/s Sep 13 00:41:35.561100 kernel: raid6: avx2x1 xor() 12735 MB/s Sep 13 00:41:35.578102 kernel: raid6: sse2x4 gen() 10330 MB/s Sep 13 00:41:35.595114 kernel: raid6: sse2x4 xor() 5961 MB/s Sep 13 00:41:35.612110 kernel: raid6: sse2x2 gen() 10078 MB/s Sep 13 00:41:35.629098 kernel: raid6: sse2x2 xor() 6915 MB/s Sep 13 00:41:35.646102 kernel: raid6: sse2x1 gen() 10838 MB/s Sep 13 00:41:35.663750 kernel: raid6: sse2x1 xor() 7037 MB/s Sep 13 00:41:35.663771 kernel: raid6: using algorithm avx2x4 gen() 29929 MB/s Sep 13 00:41:35.663780 kernel: raid6: .... 
xor() 7965 MB/s, rmw enabled Sep 13 00:41:35.664446 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:41:35.677094 kernel: xor: automatically using best checksumming function avx Sep 13 00:41:35.771125 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:41:35.781096 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:41:35.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.782000 audit: BPF prog-id=7 op=LOAD Sep 13 00:41:35.782000 audit: BPF prog-id=8 op=LOAD Sep 13 00:41:35.783357 systemd[1]: Starting systemd-udevd.service... Sep 13 00:41:35.797696 systemd-udevd[398]: Using default interface naming scheme 'v252'. Sep 13 00:41:35.803741 systemd[1]: Started systemd-udevd.service. Sep 13 00:41:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.805478 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:41:35.819349 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Sep 13 00:41:35.850676 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:41:35.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:35.852531 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:41:35.897382 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:41:35.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:35.942256 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:41:35.952220 kernel: libata version 3.00 loaded. Sep 13 00:41:35.955466 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:41:35.955497 kernel: AES CTR mode by8 optimization enabled Sep 13 00:41:35.962614 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:41:36.005660 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:41:36.005677 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:41:36.005789 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:41:36.005875 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:41:36.005953 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:41:36.005963 kernel: GPT:9289727 != 19775487 Sep 13 00:41:36.005972 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:41:36.005980 kernel: GPT:9289727 != 19775487 Sep 13 00:41:36.005988 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 13 00:41:36.006004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:41:36.006012 kernel: scsi host0: ahci Sep 13 00:41:36.006144 kernel: scsi host1: ahci Sep 13 00:41:36.006236 kernel: scsi host2: ahci Sep 13 00:41:36.006336 kernel: scsi host3: ahci Sep 13 00:41:36.006452 kernel: scsi host4: ahci Sep 13 00:41:36.006572 kernel: scsi host5: ahci Sep 13 00:41:36.006738 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 Sep 13 00:41:36.006750 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 Sep 13 00:41:36.006758 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 Sep 13 00:41:36.006767 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 Sep 13 00:41:36.006776 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 Sep 13 00:41:36.006785 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 Sep 13 00:41:36.014090 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (450) Sep 13 00:41:36.018372 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:41:36.046490 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:41:36.053573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:41:36.057590 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:41:36.061476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:41:36.063489 systemd[1]: Starting disk-uuid.service... Sep 13 00:41:36.075534 disk-uuid[535]: Primary Header is updated. Sep 13 00:41:36.075534 disk-uuid[535]: Secondary Entries is updated. Sep 13 00:41:36.075534 disk-uuid[535]: Secondary Header is updated. 
Sep 13 00:41:36.080110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:41:36.084101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:41:36.318876 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:41:36.319031 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:41:36.319046 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:41:36.320961 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:41:36.321096 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:41:36.322112 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:41:36.323394 kernel: ata3.00: applying bridge limits Sep 13 00:41:36.324101 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:41:36.325101 kernel: ata3.00: configured for UDMA/100 Sep 13 00:41:36.327114 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:41:36.357466 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:41:36.376506 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:41:36.376538 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:41:37.162103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:41:37.162194 disk-uuid[536]: The operation has completed successfully. Sep 13 00:41:37.185493 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:41:37.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.185611 systemd[1]: Finished disk-uuid.service. Sep 13 00:41:37.200550 systemd[1]: Starting verity-setup.service... 
Sep 13 00:41:37.215106 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:41:37.239299 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:41:37.242222 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:41:37.246468 systemd[1]: Finished verity-setup.service. Sep 13 00:41:37.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.320948 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:41:37.322632 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:41:37.321995 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:41:37.322858 systemd[1]: Starting ignition-setup.service... Sep 13 00:41:37.325650 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:41:37.332616 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:41:37.332649 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:41:37.332660 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:41:37.342569 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:41:37.352860 systemd[1]: Finished ignition-setup.service. Sep 13 00:41:37.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.354250 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:41:37.432627 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:41:37.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:37.435000 audit: BPF prog-id=9 op=LOAD Sep 13 00:41:37.435805 systemd[1]: Starting systemd-networkd.service... Sep 13 00:41:37.444467 ignition[639]: Ignition 2.14.0 Sep 13 00:41:37.444478 ignition[639]: Stage: fetch-offline Sep 13 00:41:37.444568 ignition[639]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:41:37.444585 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:41:37.444691 ignition[639]: parsed url from cmdline: "" Sep 13 00:41:37.444695 ignition[639]: no config URL provided Sep 13 00:41:37.444700 ignition[639]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:41:37.444709 ignition[639]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:41:37.444728 ignition[639]: op(1): [started] loading QEMU firmware config module Sep 13 00:41:37.444733 ignition[639]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:41:37.450110 ignition[639]: op(1): [finished] loading QEMU firmware config module Sep 13 00:41:37.473365 systemd-networkd[715]: lo: Link UP Sep 13 00:41:37.473374 systemd-networkd[715]: lo: Gained carrier Sep 13 00:41:37.476293 systemd-networkd[715]: Enumeration completed Sep 13 00:41:37.476812 systemd[1]: Started systemd-networkd.service. Sep 13 00:41:37.476953 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:41:37.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.478729 systemd-networkd[715]: eth0: Link UP Sep 13 00:41:37.478734 systemd-networkd[715]: eth0: Gained carrier Sep 13 00:41:37.479188 systemd[1]: Reached target network.target. Sep 13 00:41:37.482415 systemd[1]: Starting iscsiuio.service... 
Sep 13 00:41:37.506464 ignition[639]: parsing config with SHA512: 921216b5c8d24737cc091324f3f9e6189499453135ec24163ed678146c2cbb4e57a3e6143ba3bb801cde137a15c01dde876799f823ca3b05b513613fd9f37678 Sep 13 00:41:37.517225 unknown[639]: fetched base config from "system" Sep 13 00:41:37.517239 unknown[639]: fetched user config from "qemu" Sep 13 00:41:37.520010 ignition[639]: fetch-offline: fetch-offline passed Sep 13 00:41:37.520117 ignition[639]: Ignition finished successfully Sep 13 00:41:37.521938 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:41:37.523059 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:41:37.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.524299 systemd[1]: Starting ignition-kargs.service... Sep 13 00:41:37.538039 systemd[1]: Started iscsiuio.service. Sep 13 00:41:37.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.541441 systemd[1]: Starting iscsid.service... Sep 13 00:41:37.543729 ignition[721]: Ignition 2.14.0 Sep 13 00:41:37.543745 ignition[721]: Stage: kargs Sep 13 00:41:37.543891 ignition[721]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:41:37.543907 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:41:37.547255 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:41:37.547255 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:41:37.547255 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:41:37.547255 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:41:37.547255 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:41:37.547255 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:41:37.554220 ignition[721]: kargs: kargs passed Sep 13 00:41:37.557375 systemd[1]: Started iscsid.service. Sep 13 00:41:37.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.554281 ignition[721]: Ignition finished successfully Sep 13 00:41:37.560053 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:41:37.561092 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:41:37.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.562857 systemd[1]: Finished ignition-kargs.service. Sep 13 00:41:37.565734 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:41:37.576230 ignition[730]: Ignition 2.14.0 Sep 13 00:41:37.576689 ignition[730]: Stage: disks Sep 13 00:41:37.576805 ignition[730]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:41:37.576817 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:41:37.580257 ignition[730]: disks: disks passed Sep 13 00:41:37.580566 ignition[730]: Ignition finished successfully Sep 13 00:41:37.581704 systemd[1]: Finished ignition-disks.service. Sep 13 00:41:37.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.583340 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:41:37.583601 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:41:37.585366 systemd[1]: Reached target local-fs.target. Sep 13 00:41:37.585666 systemd[1]: Reached target sysinit.target. Sep 13 00:41:37.589126 systemd[1]: Reached target basic.target. Sep 13 00:41:37.591213 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:41:37.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.592216 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:41:37.593799 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:41:37.594655 systemd[1]: Reached target remote-fs.target. Sep 13 00:41:37.596238 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:41:37.605401 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:41:37.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.607287 systemd[1]: Starting systemd-fsck-root.service... 
Sep 13 00:41:37.619921 systemd-fsck[750]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:41:37.628943 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:41:37.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.631516 systemd[1]: Mounting sysroot.mount... Sep 13 00:41:37.643101 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:41:37.643289 systemd[1]: Mounted sysroot.mount. Sep 13 00:41:37.644246 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:41:37.647136 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:41:37.648230 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:41:37.648274 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:41:37.648300 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:41:37.650516 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:41:37.652909 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:41:37.657767 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:41:37.661567 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:41:37.665622 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:41:37.668799 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:41:37.704674 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:41:37.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:37.706478 systemd[1]: Starting ignition-mount.service... Sep 13 00:41:37.707823 systemd[1]: Starting sysroot-boot.service... Sep 13 00:41:37.712767 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:41:37.734500 ignition[802]: INFO : Ignition 2.14.0 Sep 13 00:41:37.734500 ignition[802]: INFO : Stage: mount Sep 13 00:41:37.737006 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:41:37.737006 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:41:37.737006 ignition[802]: INFO : mount: mount passed Sep 13 00:41:37.737006 ignition[802]: INFO : Ignition finished successfully Sep 13 00:41:37.742231 systemd[1]: Finished ignition-mount.service. Sep 13 00:41:37.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:37.837681 systemd[1]: Finished sysroot-boot.service. Sep 13 00:41:37.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:38.252655 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:41:38.260103 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Sep 13 00:41:38.260140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:41:38.262679 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:41:38.262703 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:41:38.266811 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:41:38.269666 systemd[1]: Starting ignition-files.service... 
Sep 13 00:41:38.290988 ignition[831]: INFO : Ignition 2.14.0 Sep 13 00:41:38.290988 ignition[831]: INFO : Stage: files Sep 13 00:41:38.293135 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:41:38.293135 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:41:38.293135 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:41:38.296988 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:41:38.296988 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:41:38.300170 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:41:38.300170 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:41:38.300170 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:41:38.300170 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:41:38.300170 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:41:38.300170 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:41:38.300170 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:41:38.298475 unknown[831]: wrote ssh authorized keys file for user: core Sep 13 00:41:38.374530 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:41:38.645014 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 
00:41:38.647237 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:41:38.647237 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:41:38.896658 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 13 00:41:39.045715 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 
00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:41:39.048094 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:41:39.126257 systemd-networkd[715]: eth0: Gained IPv6LL Sep 13 00:41:39.437887 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 13 00:41:40.172342 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:41:40.172342 ignition[831]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:41:40.176892 ignition[831]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:41:40.210165 ignition[831]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:41:40.211797 ignition[831]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:41:40.213186 ignition[831]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" 
Sep 13 00:41:40.214959 ignition[831]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:41:40.216655 ignition[831]: INFO : files: files passed Sep 13 00:41:40.216655 ignition[831]: INFO : Ignition finished successfully Sep 13 00:41:40.219616 systemd[1]: Finished ignition-files.service. Sep 13 00:41:40.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.222015 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:41:40.226622 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:41:40.226643 kernel: audit: type=1130 audit(1757724100.219:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.226618 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:41:40.228616 initrd-setup-root-after-ignition[854]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:41:40.230092 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:41:40.231979 systemd[1]: Starting ignition-quench.service... Sep 13 00:41:40.233675 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:41:40.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.235845 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 13 00:41:40.239596 kernel: audit: type=1130 audit(1757724100.235:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.235915 systemd[1]: Finished ignition-quench.service. Sep 13 00:41:40.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.241177 systemd[1]: Reached target ignition-complete.target. Sep 13 00:41:40.248270 kernel: audit: type=1130 audit(1757724100.240:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.248295 kernel: audit: type=1131 audit(1757724100.240:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.248847 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:41:40.263379 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:41:40.263457 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:41:40.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.265975 systemd[1]: Reached target initrd-fs.target. 
Sep 13 00:41:40.272875 kernel: audit: type=1130 audit(1757724100.265:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.272897 kernel: audit: type=1131 audit(1757724100.265:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.272868 systemd[1]: Reached target initrd.target. Sep 13 00:41:40.274351 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:41:40.276322 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:41:40.288877 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:41:40.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.291150 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:41:40.294665 kernel: audit: type=1130 audit(1757724100.290:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.300215 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:41:40.300541 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:41:40.301993 systemd[1]: Stopped target timers.target. Sep 13 00:41:40.303674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 13 00:41:40.309675 kernel: audit: type=1131 audit(1757724100.304:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.303766 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:41:40.304916 systemd[1]: Stopped target initrd.target. Sep 13 00:41:40.309705 systemd[1]: Stopped target basic.target. Sep 13 00:41:40.310524 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:41:40.312226 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:41:40.313893 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:41:40.315590 systemd[1]: Stopped target remote-fs.target. Sep 13 00:41:40.317278 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:41:40.319176 systemd[1]: Stopped target sysinit.target. Sep 13 00:41:40.320799 systemd[1]: Stopped target local-fs.target. Sep 13 00:41:40.322384 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:41:40.331656 kernel: audit: type=1131 audit(1757724100.326:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.323994 systemd[1]: Stopped target swap.target. Sep 13 00:41:40.325474 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Sep 13 00:41:40.338549 kernel: audit: type=1131 audit(1757724100.333:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.325569 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:41:40.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.327062 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:41:40.331706 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:41:40.331866 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:41:40.333646 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:41:40.333812 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:41:40.338715 systemd[1]: Stopped target paths.target. Sep 13 00:41:40.340324 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:41:40.344167 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:41:40.346036 systemd[1]: Stopped target slices.target. Sep 13 00:41:40.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.347691 systemd[1]: Stopped target sockets.target. Sep 13 00:41:40.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:40.349577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:41:40.349756 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:41:40.357587 iscsid[728]: iscsid shutting down. Sep 13 00:41:40.351596 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:41:40.351763 systemd[1]: Stopped ignition-files.service. Sep 13 00:41:40.354486 systemd[1]: Stopping ignition-mount.service... Sep 13 00:41:40.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.356200 systemd[1]: Stopping iscsid.service... Sep 13 00:41:40.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.359185 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:41:40.360406 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:41:40.360730 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:41:40.362629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:41:40.362833 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:41:40.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.368274 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:41:40.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:40.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.368399 systemd[1]: Stopped iscsid.service. Sep 13 00:41:40.369577 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:41:40.374362 ignition[871]: INFO : Ignition 2.14.0 Sep 13 00:41:40.374362 ignition[871]: INFO : Stage: umount Sep 13 00:41:40.374362 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:41:40.374362 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:41:40.374362 ignition[871]: INFO : umount: umount passed Sep 13 00:41:40.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.369865 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:41:40.385268 ignition[871]: INFO : Ignition finished successfully Sep 13 00:41:40.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.371761 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:41:40.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:41:40.371801 systemd[1]: Closed iscsid.socket. Sep 13 00:41:40.375007 systemd[1]: Stopping iscsiuio.service... Sep 13 00:41:40.376109 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:41:40.376183 systemd[1]: Stopped ignition-mount.service. Sep 13 00:41:40.378019 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:41:40.378138 systemd[1]: Stopped iscsiuio.service. Sep 13 00:41:40.379720 systemd[1]: Stopped target network.target. Sep 13 00:41:40.381823 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:41:40.381863 systemd[1]: Closed iscsiuio.socket. Sep 13 00:41:40.383549 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:41:40.383595 systemd[1]: Stopped ignition-disks.service. Sep 13 00:41:40.385331 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:41:40.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.385373 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:41:40.387040 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:41:40.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.387098 systemd[1]: Stopped ignition-setup.service. Sep 13 00:41:40.388891 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:41:40.390353 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:41:40.392833 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 13 00:41:40.408000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:41:40.397176 systemd-networkd[715]: eth0: DHCPv6 lease lost Sep 13 00:41:40.409000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:41:40.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.399242 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:41:40.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.399393 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:41:40.401862 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:41:40.401951 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:41:40.405549 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:41:40.405594 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:41:40.408005 systemd[1]: Stopping network-cleanup.service... Sep 13 00:41:40.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.409475 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:41:40.409526 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:41:40.410523 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 13 00:41:40.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.410563 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:41:40.412240 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:41:40.412295 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:41:40.414444 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:41:40.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.415925 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:41:40.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.419838 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:41:40.419949 systemd[1]: Stopped network-cleanup.service. Sep 13 00:41:40.423173 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:41:40.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.423296 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:41:40.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:41:40.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.425751 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:41:40.425786 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:41:40.427504 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:41:40.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.427539 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:41:40.429359 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:41:40.429399 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:41:40.430990 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:41:40.431024 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:41:40.432496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:41:40.432531 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:41:40.435391 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:41:40.436409 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:41:40.436463 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:41:40.439280 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:41:40.439336 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:41:40.440281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 13 00:41:40.440328 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:41:40.442649 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:41:40.443051 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:41:40.443147 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:41:40.482909 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:41:40.483017 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:41:40.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.484931 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:41:40.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:40.486509 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:41:40.486553 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:41:40.487727 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:41:40.495372 systemd[1]: Switching root. Sep 13 00:41:40.496000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:41:40.496000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:41:40.496000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:41:40.497000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:41:40.497000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:41:40.517171 systemd-journald[197]: Journal stopped Sep 13 00:41:44.554547 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Sep 13 00:41:44.554616 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:41:44.554640 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 00:41:44.554653 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:41:44.554665 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:41:44.554678 kernel: SELinux: policy capability open_perms=1 Sep 13 00:41:44.554694 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:41:44.554714 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:41:44.554726 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:41:44.554742 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:41:44.554757 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:41:44.554769 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:41:44.554782 systemd[1]: Successfully loaded SELinux policy in 43.186ms. Sep 13 00:41:44.554802 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.327ms. Sep 13 00:41:44.554821 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:41:44.554834 systemd[1]: Detected virtualization kvm. Sep 13 00:41:44.554848 systemd[1]: Detected architecture x86-64. Sep 13 00:41:44.554864 systemd[1]: Detected first boot. Sep 13 00:41:44.554878 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:41:44.554893 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:41:44.554908 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:41:44.554923 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:41:44.554938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:41:44.554954 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:41:44.554969 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:41:44.554984 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:41:44.555001 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:41:44.555014 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:41:44.555028 systemd[1]: Created slice system-getty.slice. Sep 13 00:41:44.555042 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:41:44.555055 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:41:44.555069 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:41:44.555105 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:41:44.555119 systemd[1]: Created slice user.slice. Sep 13 00:41:44.555133 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:41:44.555150 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:41:44.555179 systemd[1]: Set up automount boot.automount. Sep 13 00:41:44.555194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:41:44.555211 systemd[1]: Reached target integritysetup.target. Sep 13 00:41:44.555226 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:41:44.555240 systemd[1]: Reached target remote-fs.target. Sep 13 00:41:44.555254 systemd[1]: Reached target slices.target. Sep 13 00:41:44.555267 systemd[1]: Reached target swap.target. Sep 13 00:41:44.555283 systemd[1]: Reached target torcx.target. Sep 13 00:41:44.555296 systemd[1]: Reached target veritysetup.target. 
Sep 13 00:41:44.555310 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:41:44.555324 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:41:44.555338 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:41:44.555353 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:41:44.555366 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:41:44.555380 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:41:44.555394 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:41:44.555409 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:41:44.555423 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:41:44.555438 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:41:44.555451 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:41:44.555464 systemd[1]: Mounting media.mount... Sep 13 00:41:44.555477 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:44.555491 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:41:44.555506 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:41:44.555519 systemd[1]: Mounting tmp.mount... Sep 13 00:41:44.555533 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:41:44.555550 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:41:44.555563 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:41:44.555577 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:41:44.555592 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:41:44.555607 systemd[1]: Starting modprobe@drm.service... Sep 13 00:41:44.555621 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:41:44.555634 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:41:44.555647 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:41:44.555661 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:41:44.555677 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:41:44.555690 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:41:44.555703 systemd[1]: Starting systemd-journald.service... Sep 13 00:41:44.555716 kernel: fuse: init (API version 7.34) Sep 13 00:41:44.555729 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:41:44.555743 kernel: loop: module loaded Sep 13 00:41:44.555755 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:41:44.555768 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:41:44.555784 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:41:44.555800 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:44.555813 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:41:44.555829 systemd-journald[1009]: Journal started Sep 13 00:41:44.555871 systemd-journald[1009]: Runtime Journal (/run/log/journal/b160b9113ee14e8cbd0e4a18b6177325) is 6.0M, max 48.5M, 42.5M free. 
Sep 13 00:41:44.437000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:41:44.437000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:41:44.552000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:41:44.552000 audit[1009]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe851c03c0 a2=4000 a3=7ffe851c045c items=0 ppid=1 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:41:44.552000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:41:44.576248 systemd[1]: Started systemd-journald.service.
Sep 13 00:41:44.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.577245 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:41:44.578117 systemd[1]: Mounted media.mount.
Sep 13 00:41:44.578988 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:41:44.579890 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:41:44.580776 systemd[1]: Mounted tmp.mount.
Sep 13 00:41:44.581849 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:41:44.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.583096 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:41:44.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.584200 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:41:44.584424 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:41:44.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.585597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:41:44.585817 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:41:44.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.586861 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:41:44.587062 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:41:44.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.588053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:41:44.588302 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:41:44.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.589370 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:41:44.589571 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:41:44.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.590660 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:41:44.590877 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:41:44.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.592127 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:41:44.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.593383 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:41:44.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.594730 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:41:44.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.596128 systemd[1]: Reached target network-pre.target.
Sep 13 00:41:44.598307 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:41:44.600215 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:41:44.601228 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:41:44.603493 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:41:44.641116 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:41:44.642054 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:41:44.643111 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:41:44.644231 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:41:44.646228 systemd-journald[1009]: Time spent on flushing to /var/log/journal/b160b9113ee14e8cbd0e4a18b6177325 is 61.615ms for 1044 entries.
Sep 13 00:41:44.646228 systemd-journald[1009]: System Journal (/var/log/journal/b160b9113ee14e8cbd0e4a18b6177325) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:41:45.348000 systemd-journald[1009]: Received client request to flush runtime journal.
Sep 13 00:41:44.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:45.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:44.645256 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:41:44.648704 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:41:44.651982 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:41:45.349257 udevadm[1055]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:41:44.652996 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:41:44.653926 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:41:44.655968 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:41:44.732813 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:41:44.739126 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:41:44.741728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:41:44.885908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:41:45.078287 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:41:45.107996 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:41:45.349819 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:41:45.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:45.424384 kernel: kauditd_printk_skb: 76 callbacks suppressed
Sep 13 00:41:45.424433 kernel: audit: type=1130 audit(1757724105.423:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.038875 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:41:46.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.060137 kernel: audit: type=1130 audit(1757724106.056:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.060392 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:41:46.080645 systemd-udevd[1067]: Using default interface naming scheme 'v252'.
Sep 13 00:41:46.098782 systemd[1]: Started systemd-udevd.service.
Sep 13 00:41:46.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.102872 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:41:46.104097 kernel: audit: type=1130 audit(1757724106.100:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.111008 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:41:46.130212 systemd[1]: Found device dev-ttyS0.device.
Sep 13 00:41:46.152587 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:41:46.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.157100 kernel: audit: type=1130 audit(1757724106.153:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.174094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:41:46.189103 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:41:46.195103 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:41:46.205703 systemd-networkd[1074]: lo: Link UP
Sep 13 00:41:46.206061 systemd-networkd[1074]: lo: Gained carrier
Sep 13 00:41:46.206576 systemd-networkd[1074]: Enumeration completed
Sep 13 00:41:46.206750 systemd[1]: Started systemd-networkd.service.
Sep 13 00:41:46.209495 systemd-networkd[1074]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:41:46.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.266376 systemd-networkd[1074]: eth0: Link UP
Sep 13 00:41:46.266380 systemd-networkd[1074]: eth0: Gained carrier
Sep 13 00:41:46.270105 kernel: audit: type=1130 audit(1757724106.265:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:41:46.273000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:41:46.360162 kernel: audit: type=1400 audit(1757724106.273:118): avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:41:46.315246 systemd-networkd[1074]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:41:46.273000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5630574ad000 a1=338ec a2=7f663c400bc5 a3=5 items=110 ppid=1067 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:41:46.273000 audit: CWD cwd="/"
Sep 13 00:41:46.367336 kernel: audit: type=1300 audit(1757724106.273:118): arch=c000003e syscall=175 success=yes exit=0 a0=5630574ad000 a1=338ec a2=7f663c400bc5 a3=5 items=110 ppid=1067 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:41:46.367385 kernel: audit: type=1307 audit(1757724106.273:118): cwd="/"
Sep 13 00:41:46.367405 kernel: audit: type=1302 audit(1757724106.273:118): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=1 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.373496 kernel: audit: type=1302 audit(1757724106.273:118): item=1 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=2 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=3 name=(null) inode=12271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=4 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=5 name=(null) inode=12272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=6 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=7 name=(null) inode=12273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=8 name=(null) inode=12273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=9 name=(null) inode=12274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=10 name=(null) inode=12273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=11 name=(null) inode=12275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=12 name=(null) inode=12273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=13 name=(null) inode=12276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=14 name=(null) inode=12273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=15 name=(null) inode=12277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=16 name=(null) inode=12273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=17 name=(null) inode=12278 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=18 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=19 name=(null) inode=12279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=20 name=(null) inode=12279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=21 name=(null) inode=12280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=22 name=(null) inode=12279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=23 name=(null) inode=12281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=24 name=(null) inode=12279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=25 name=(null) inode=12282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=26 name=(null) inode=12279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=27 name=(null) inode=12283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=28 name=(null) inode=12279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=29 name=(null) inode=12284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=30 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=31 name=(null) inode=12285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=32 name=(null) inode=12285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=33 name=(null) inode=12286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=34 name=(null) inode=12285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=35 name=(null) inode=12287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=36 name=(null) inode=12285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=37 name=(null) inode=12288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=38 name=(null) inode=12285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=39 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=40 name=(null) inode=12285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=41 name=(null) inode=16386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=42 name=(null) inode=12270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=43 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=44 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=45 name=(null) inode=16388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=46 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=47 name=(null) inode=16389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=48 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=49 name=(null) inode=16390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=50 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=51 name=(null) inode=16391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=52 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=53 name=(null) inode=16392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=55 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=56 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=57 name=(null) inode=16394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=58 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=59 name=(null) inode=16395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=60 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=61 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=62 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=63 name=(null) inode=16397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=64 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=65 name=(null) inode=16398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=66 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=67 name=(null) inode=16399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=68 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=69 name=(null) inode=16400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=70 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=71 name=(null) inode=16401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=72 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=73 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=74 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=75 name=(null) inode=16403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=76 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=77 name=(null) inode=16404 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=78 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=79 name=(null) inode=16405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=80 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=81 name=(null) inode=16406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=82 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=83 name=(null) inode=16407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=84 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=85 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=86 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=87 name=(null) inode=16409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=88 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=89 name=(null) inode=16410 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=90 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=91 name=(null) inode=16411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=92 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=93 name=(null) inode=16412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=94 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=95 name=(null) inode=16413 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=96 name=(null) inode=16393 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=97 name=(null) inode=16414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=98 name=(null) inode=16414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=99 name=(null) inode=16415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:41:46.273000 audit: PATH item=100 name=(null) inode=16414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=101 name=(null) inode=16416 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=102 name=(null) inode=16414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=103 name=(null) inode=16417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=104 name=(null) inode=16414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=105 name=(null) inode=16418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=106 name=(null) inode=16414 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=107 name=(null) inode=16419 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PATH item=109 name=(null) inode=16420 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:41:46.273000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:41:46.382062 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:41:46.383983 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:41:46.384478 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:41:46.403108 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:41:46.406101 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:41:46.411124 kernel: kvm: Nested Virtualization enabled Sep 13 00:41:46.411204 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:41:46.411230 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:41:46.412712 kernel: SVM: Virtual GIF supported Sep 13 00:41:46.432092 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:41:46.461722 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:41:46.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:46.537529 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:41:46.544955 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:41:46.570984 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:41:46.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:46.572252 systemd[1]: Reached target cryptsetup.target. Sep 13 00:41:46.574482 systemd[1]: Starting lvm2-activation.service... Sep 13 00:41:46.649188 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:41:46.671243 systemd[1]: Finished lvm2-activation.service. 
Sep 13 00:41:46.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:46.728217 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:41:46.729310 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:41:46.729336 systemd[1]: Reached target local-fs.target. Sep 13 00:41:46.786909 systemd[1]: Reached target machines.target. Sep 13 00:41:46.789226 systemd[1]: Starting ldconfig.service... Sep 13 00:41:46.790244 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:41:46.790277 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:41:46.791129 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:41:46.793346 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:41:46.795851 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:41:46.798030 systemd[1]: Starting systemd-sysext.service... Sep 13 00:41:46.799409 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1108 (bootctl) Sep 13 00:41:46.800565 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:41:46.804541 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:41:46.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:46.812058 systemd[1]: Unmounting usr-share-oem.mount... 
Sep 13 00:41:46.848348 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:41:46.848557 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:41:46.868120 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:41:46.890555 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Sep 13 00:41:46.890555 systemd-fsck[1120]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:41:46.892163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:41:46.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:46.924534 systemd[1]: Mounting boot.mount... Sep 13 00:41:47.029581 systemd[1]: Mounted boot.mount. Sep 13 00:41:47.507008 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:41:47.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.536105 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:41:47.567199 ldconfig[1107]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:41:47.585122 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:41:47.674249 (sd-sysext)[1128]: Using extensions 'kubernetes'. Sep 13 00:41:47.674675 (sd-sysext)[1128]: Merged extensions into '/usr'. Sep 13 00:41:47.763671 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:47.765439 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:41:47.766610 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:41:47.768292 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:41:47.770428 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:41:47.772698 systemd[1]: Starting modprobe@loop.service... Sep 13 00:41:47.774459 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:41:47.774570 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:41:47.774668 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:47.777419 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:41:47.778739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:41:47.778894 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:41:47.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.780525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:41:47.780654 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:41:47.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:47.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.781931 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:41:47.782068 systemd[1]: Finished modprobe@loop.service. Sep 13 00:41:47.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.783538 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:41:47.783644 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:41:47.784663 systemd[1]: Finished systemd-sysext.service. Sep 13 00:41:47.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:47.834059 systemd[1]: Starting ensure-sysext.service... Sep 13 00:41:47.836052 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:41:47.842189 systemd[1]: Reloading. 
Sep 13 00:41:47.904775 /usr/lib/systemd/system-generators/torcx-generator[1162]: time="2025-09-13T00:41:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:41:47.904803 /usr/lib/systemd/system-generators/torcx-generator[1162]: time="2025-09-13T00:41:47Z" level=info msg="torcx already run" Sep 13 00:41:47.938741 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:41:47.939820 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:41:47.941528 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:41:48.102045 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:41:48.102069 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:41:48.120976 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:41:48.172060 systemd[1]: Finished ldconfig.service. Sep 13 00:41:48.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.180760 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 00:41:48.180956 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.182098 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:41:48.201747 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:41:48.203780 systemd[1]: Starting modprobe@loop.service... Sep 13 00:41:48.204654 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.204756 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:41:48.204856 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:48.205642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:41:48.205787 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:41:48.207150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:41:48.207378 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:41:48.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:48.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.208836 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:41:48.208968 systemd[1]: Finished modprobe@loop.service. Sep 13 00:41:48.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.210101 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:41:48.210188 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.211618 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:48.211808 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.212885 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:41:48.214182 systemd-networkd[1074]: eth0: Gained IPv6LL Sep 13 00:41:48.214725 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:41:48.216533 systemd[1]: Starting modprobe@loop.service... Sep 13 00:41:48.217422 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:41:48.217524 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:41:48.217619 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:48.218456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:41:48.218602 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:41:48.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.219810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:41:48.219945 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:41:48.243804 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:41:48.243955 systemd[1]: Finished modprobe@loop.service. Sep 13 00:41:48.247748 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:48.247996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:48.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.249213 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:41:48.251462 systemd[1]: Starting modprobe@drm.service... Sep 13 00:41:48.253765 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:41:48.256151 systemd[1]: Starting modprobe@loop.service... Sep 13 00:41:48.257772 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.258009 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:41:48.259773 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:41:48.260868 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:41:48.262149 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:41:48.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.263491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:41:48.263621 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:41:48.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.264805 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:41:48.264935 systemd[1]: Finished modprobe@drm.service. Sep 13 00:41:48.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.304008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:41:48.304162 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:41:48.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.306422 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:41:48.306571 systemd[1]: Finished modprobe@loop.service. 
Sep 13 00:41:48.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.307926 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:41:48.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.311092 systemd[1]: Starting audit-rules.service... Sep 13 00:41:48.313297 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:41:48.315571 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:41:48.316589 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:41:48.316657 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:41:48.318053 systemd[1]: Starting systemd-resolved.service... Sep 13 00:41:48.320058 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:41:48.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:41:48.321799 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:41:48.323293 systemd[1]: Finished ensure-sysext.service. Sep 13 00:41:48.324445 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:41:48.355634 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:41:48.379000 audit[1244]: SYSTEM_BOOT pid=1244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.381239 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:41:48.412990 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:41:48.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.436421 systemd[1]: Starting systemd-update-done.service... Sep 13 00:41:48.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:41:48.460488 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:41:48.461812 systemd[1]: Reached target time-set.target. Sep 13 00:41:49.267972 systemd-resolved[1241]: Positive Trust Anchors: Sep 13 00:41:49.268034 systemd-timesyncd[1243]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 13 00:41:49.268277 systemd-timesyncd[1243]: Initial clock synchronization to Sat 2025-09-13 00:41:49.267741 UTC. Sep 13 00:41:49.268564 systemd-resolved[1241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:41:49.268610 systemd-resolved[1241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:41:49.270476 augenrules[1263]: No rules Sep 13 00:41:49.268000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:41:49.268000 audit[1263]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef19e5400 a2=420 a3=0 items=0 ppid=1238 pid=1263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:41:49.268000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:41:49.270600 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:41:49.271258 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:41:49.273176 systemd[1]: Finished audit-rules.service. Sep 13 00:41:49.274559 systemd[1]: Finished systemd-update-done.service. Sep 13 00:41:49.281951 systemd-resolved[1241]: Defaulting to hostname 'linux'. Sep 13 00:41:49.284373 systemd[1]: Started systemd-resolved.service. Sep 13 00:41:49.285420 systemd[1]: Reached target network.target. 
Sep 13 00:41:49.286494 systemd[1]: Reached target network-online.target.
Sep 13 00:41:49.287595 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:41:49.288614 systemd[1]: Reached target sysinit.target.
Sep 13 00:41:49.289692 systemd[1]: Started motdgen.path.
Sep 13 00:41:49.290680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:41:49.292238 systemd[1]: Started logrotate.timer.
Sep 13 00:41:49.293214 systemd[1]: Started mdadm.timer.
Sep 13 00:41:49.295007 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:41:49.296829 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:41:49.296859 systemd[1]: Reached target paths.target.
Sep 13 00:41:49.297792 systemd[1]: Reached target timers.target.
Sep 13 00:41:49.299028 systemd[1]: Listening on dbus.socket.
Sep 13 00:41:49.301115 systemd[1]: Starting docker.socket...
Sep 13 00:41:49.303854 systemd[1]: Listening on sshd.socket.
Sep 13 00:41:49.304660 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:41:49.304953 systemd[1]: Listening on docker.socket.
Sep 13 00:41:49.305707 systemd[1]: Reached target sockets.target.
Sep 13 00:41:49.306517 systemd[1]: Reached target basic.target.
Sep 13 00:41:49.377465 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:41:49.377559 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:41:49.377590 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:41:49.379293 systemd[1]: Starting containerd.service...
Sep 13 00:41:49.381374 systemd[1]: Starting dbus.service...
Sep 13 00:41:49.384945 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:41:49.387753 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:41:49.389080 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:41:49.391156 systemd[1]: Starting kubelet.service...
Sep 13 00:41:49.394330 systemd[1]: Starting motdgen.service...
Sep 13 00:41:49.396840 jq[1276]: false
Sep 13 00:41:49.397353 systemd[1]: Starting prepare-helm.service...
Sep 13 00:41:49.402784 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:41:49.407163 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:41:49.410720 systemd[1]: Starting systemd-logind.service...
Sep 13 00:41:49.411601 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:41:49.411750 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:41:49.440179 systemd[1]: Starting update-engine.service...
Sep 13 00:41:49.441984 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:41:49.444796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:41:49.450868 jq[1297]: true
Sep 13 00:41:49.445330 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:41:49.447087 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:41:49.447301 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:41:49.457908 jq[1307]: true
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found loop1
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found sr0
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda1
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda2
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda3
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found usr
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda4
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda6
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda7
Sep 13 00:41:49.471975 extend-filesystems[1277]: Found vda9
Sep 13 00:41:49.471975 extend-filesystems[1277]: Checking size of /dev/vda9
Sep 13 00:41:49.469647 systemd[1]: Started dbus.service.
Sep 13 00:41:49.469340 dbus-daemon[1275]: [system] SELinux support is enabled
Sep 13 00:41:49.514620 tar[1304]: linux-amd64/helm
Sep 13 00:41:49.473129 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:41:49.473149 systemd[1]: Reached target system-config.target.
Sep 13 00:41:49.481492 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:41:49.481514 systemd[1]: Reached target user-config.target.
Sep 13 00:41:49.482919 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:41:49.483184 systemd[1]: Finished motdgen.service.
Sep 13 00:41:49.499806 systemd-logind[1291]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:41:49.499827 systemd-logind[1291]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:41:49.500474 systemd-logind[1291]: New seat seat0.
Sep 13 00:41:49.507408 systemd[1]: Started systemd-logind.service.
Sep 13 00:41:49.607283 bash[1332]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:41:49.608508 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:41:49.620789 env[1308]: time="2025-09-13T00:41:49.620623582Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:41:49.630274 extend-filesystems[1277]: Resized partition /dev/vda9
Sep 13 00:41:49.644876 extend-filesystems[1341]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:41:49.648296 update_engine[1295]: I0913 00:41:49.647571 1295 main.cc:92] Flatcar Update Engine starting
Sep 13 00:41:49.678453 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:41:49.679091 systemd[1]: Started update-engine.service.
Sep 13 00:41:49.680207 update_engine[1295]: I0913 00:41:49.679172 1295 update_check_scheduler.cc:74] Next update check in 2m6s
Sep 13 00:41:49.682555 systemd[1]: Started locksmithd.service.
Sep 13 00:41:49.717818 env[1308]: time="2025-09-13T00:41:49.717678230Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:41:49.717951 env[1308]: time="2025-09-13T00:41:49.717884847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:41:49.719713 env[1308]: time="2025-09-13T00:41:49.719683249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:41:49.719713 env[1308]: time="2025-09-13T00:41:49.719711121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:41:49.719974 env[1308]: time="2025-09-13T00:41:49.719943978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:41:49.719974 env[1308]: time="2025-09-13T00:41:49.719972542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:41:49.720039 env[1308]: time="2025-09-13T00:41:49.719984414Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:41:49.720039 env[1308]: time="2025-09-13T00:41:49.719992930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:41:49.720084 env[1308]: time="2025-09-13T00:41:49.720052692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:41:49.720310 env[1308]: time="2025-09-13T00:41:49.720289897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:41:49.720480 env[1308]: time="2025-09-13T00:41:49.720454054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:41:49.720480 env[1308]: time="2025-09-13T00:41:49.720470475Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:41:49.720574 env[1308]: time="2025-09-13T00:41:49.720514207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:41:49.720574 env[1308]: time="2025-09-13T00:41:49.720528865Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:41:49.849426 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:41:49.878655 extend-filesystems[1341]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:41:49.878655 extend-filesystems[1341]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:41:49.878655 extend-filesystems[1341]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:41:49.884971 extend-filesystems[1277]: Resized filesystem in /dev/vda9
Sep 13 00:41:49.880908 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:41:49.881137 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:41:49.887035 env[1308]: time="2025-09-13T00:41:49.887002964Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:41:49.887094 env[1308]: time="2025-09-13T00:41:49.887083424Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:41:49.887116 env[1308]: time="2025-09-13T00:41:49.887097912Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:41:49.887190 env[1308]: time="2025-09-13T00:41:49.887172722Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887234 env[1308]: time="2025-09-13T00:41:49.887193601Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887234 env[1308]: time="2025-09-13T00:41:49.887221073Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887272 env[1308]: time="2025-09-13T00:41:49.887233756Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887272 env[1308]: time="2025-09-13T00:41:49.887247232Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887272 env[1308]: time="2025-09-13T00:41:49.887260076Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887331 env[1308]: time="2025-09-13T00:41:49.887287507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887331 env[1308]: time="2025-09-13T00:41:49.887300311Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.887331 env[1308]: time="2025-09-13T00:41:49.887312153Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:41:49.887502 env[1308]: time="2025-09-13T00:41:49.887479267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:41:49.887602 env[1308]: time="2025-09-13T00:41:49.887583101Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:41:49.888029 env[1308]: time="2025-09-13T00:41:49.888008238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:41:49.888074 env[1308]: time="2025-09-13T00:41:49.888039217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888074 env[1308]: time="2025-09-13T00:41:49.888052001Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:41:49.888145 env[1308]: time="2025-09-13T00:41:49.888122873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888192 env[1308]: time="2025-09-13T00:41:49.888152208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888192 env[1308]: time="2025-09-13T00:41:49.888164411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888192 env[1308]: time="2025-09-13T00:41:49.888178167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888192 env[1308]: time="2025-09-13T00:41:49.888189699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888270 env[1308]: time="2025-09-13T00:41:49.888206009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888270 env[1308]: time="2025-09-13T00:41:49.888231227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888270 env[1308]: time="2025-09-13T00:41:49.888241917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888270 env[1308]: time="2025-09-13T00:41:49.888254410Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:41:49.888441 env[1308]: time="2025-09-13T00:41:49.888421664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888486 env[1308]: time="2025-09-13T00:41:49.888458633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888486 env[1308]: time="2025-09-13T00:41:49.888471237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888486 env[1308]: time="2025-09-13T00:41:49.888482878Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:41:49.888542 env[1308]: time="2025-09-13T00:41:49.888497867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:41:49.888542 env[1308]: time="2025-09-13T00:41:49.888507725Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:41:49.888542 env[1308]: time="2025-09-13T00:41:49.888537811Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:41:49.888609 env[1308]: time="2025-09-13T00:41:49.888578408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:41:49.888976 env[1308]: time="2025-09-13T00:41:49.888809130Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:41:49.888976 env[1308]: time="2025-09-13T00:41:49.888875825Z" level=info msg="Connect containerd service"
Sep 13 00:41:49.888976 env[1308]: time="2025-09-13T00:41:49.888924997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:41:49.889840 env[1308]: time="2025-09-13T00:41:49.889542505Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:41:49.889840 env[1308]: time="2025-09-13T00:41:49.889695392Z" level=info msg="Start subscribing containerd event"
Sep 13 00:41:49.889840 env[1308]: time="2025-09-13T00:41:49.889785341Z" level=info msg="Start recovering state"
Sep 13 00:41:49.889927 env[1308]: time="2025-09-13T00:41:49.889898242Z" level=info msg="Start event monitor"
Sep 13 00:41:49.889973 env[1308]: time="2025-09-13T00:41:49.889929331Z" level=info msg="Start snapshots syncer"
Sep 13 00:41:49.889973 env[1308]: time="2025-09-13T00:41:49.889944729Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:41:49.889973 env[1308]: time="2025-09-13T00:41:49.889954027Z" level=info msg="Start streaming server"
Sep 13 00:41:49.890080 env[1308]: time="2025-09-13T00:41:49.889908161Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:41:49.890141 env[1308]: time="2025-09-13T00:41:49.890121010Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:41:49.890258 systemd[1]: Started containerd.service.
Sep 13 00:41:49.891651 env[1308]: time="2025-09-13T00:41:49.890705115Z" level=info msg="containerd successfully booted in 0.271684s"
Sep 13 00:41:50.043622 locksmithd[1342]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:41:50.200228 tar[1304]: linux-amd64/LICENSE
Sep 13 00:41:50.200407 tar[1304]: linux-amd64/README.md
Sep 13 00:41:50.205554 systemd[1]: Finished prepare-helm.service.
Sep 13 00:41:50.792475 sshd_keygen[1299]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:41:50.811730 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:41:50.846284 systemd[1]: Starting issuegen.service...
Sep 13 00:41:50.851482 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:41:50.851692 systemd[1]: Finished issuegen.service.
Sep 13 00:41:50.853860 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:41:50.859968 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:41:50.862405 systemd[1]: Started getty@tty1.service.
Sep 13 00:41:50.864237 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:41:50.865323 systemd[1]: Reached target getty.target.
Sep 13 00:41:51.170433 systemd[1]: Started kubelet.service.
Sep 13 00:41:51.172782 systemd[1]: Reached target multi-user.target.
Sep 13 00:41:51.175978 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:41:51.183428 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:41:51.183726 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:41:51.190611 systemd[1]: Startup finished in 6.498s (kernel) + 9.823s (userspace) = 16.321s.
Sep 13 00:41:51.950709 kubelet[1376]: E0913 00:41:51.950627 1376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:41:51.952132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:41:51.952402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:41:58.477584 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:41:58.478918 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:33662.service.
Sep 13 00:41:58.512959 sshd[1386]: Accepted publickey for core from 10.0.0.1 port 33662 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:41:58.514578 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:41:58.523023 systemd[1]: Created slice user-500.slice.
Sep 13 00:41:58.524105 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:41:58.525725 systemd-logind[1291]: New session 1 of user core.
Sep 13 00:41:58.533314 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:41:58.534496 systemd[1]: Starting user@500.service...
Sep 13 00:41:58.538241 (systemd)[1391]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:41:58.612801 systemd[1391]: Queued start job for default target default.target.
Sep 13 00:41:58.613072 systemd[1391]: Reached target paths.target.
Sep 13 00:41:58.613094 systemd[1391]: Reached target sockets.target.
Sep 13 00:41:58.613110 systemd[1391]: Reached target timers.target.
Sep 13 00:41:58.613135 systemd[1391]: Reached target basic.target.
Sep 13 00:41:58.613187 systemd[1391]: Reached target default.target.
Sep 13 00:41:58.613214 systemd[1391]: Startup finished in 67ms.
Sep 13 00:41:58.613499 systemd[1]: Started user@500.service.
Sep 13 00:41:58.614989 systemd[1]: Started session-1.scope.
Sep 13 00:41:58.666026 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:33676.service.
Sep 13 00:41:58.703464 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 33676 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:41:58.705146 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:41:58.708824 systemd-logind[1291]: New session 2 of user core.
Sep 13 00:41:58.709942 systemd[1]: Started session-2.scope.
Sep 13 00:41:58.764913 sshd[1400]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:58.768119 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:33692.service.
Sep 13 00:41:58.768913 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:33676.service: Deactivated successfully.
Sep 13 00:41:58.770568 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:41:58.770616 systemd-logind[1291]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:41:58.771792 systemd-logind[1291]: Removed session 2.
Sep 13 00:41:58.799131 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 33692 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:41:58.800956 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:41:58.805301 systemd-logind[1291]: New session 3 of user core.
Sep 13 00:41:58.806340 systemd[1]: Started session-3.scope.
Sep 13 00:41:58.857930 sshd[1406]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:58.860422 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:33706.service.
Sep 13 00:41:58.860835 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:33692.service: Deactivated successfully.
Sep 13 00:41:58.861723 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:41:58.861751 systemd-logind[1291]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:41:58.862724 systemd-logind[1291]: Removed session 3.
Sep 13 00:41:58.894963 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 33706 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:41:58.896452 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:41:58.900150 systemd-logind[1291]: New session 4 of user core.
Sep 13 00:41:58.900924 systemd[1]: Started session-4.scope.
Sep 13 00:41:58.957328 sshd[1412]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:58.959800 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:33712.service.
Sep 13 00:41:58.960431 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:33706.service: Deactivated successfully.
Sep 13 00:41:58.961259 systemd-logind[1291]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:41:58.961273 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:41:58.961999 systemd-logind[1291]: Removed session 4.
Sep 13 00:41:58.990426 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 33712 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:41:58.991541 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:41:58.994843 systemd-logind[1291]: New session 5 of user core.
Sep 13 00:41:58.995544 systemd[1]: Started session-5.scope.
Sep 13 00:41:59.051432 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:41:59.051625 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:41:59.080731 systemd[1]: Starting docker.service...
Sep 13 00:41:59.177491 env[1437]: time="2025-09-13T00:41:59.177410004Z" level=info msg="Starting up"
Sep 13 00:41:59.179432 env[1437]: time="2025-09-13T00:41:59.179395938Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:41:59.179484 env[1437]: time="2025-09-13T00:41:59.179431334Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:41:59.179520 env[1437]: time="2025-09-13T00:41:59.179498941Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:41:59.179520 env[1437]: time="2025-09-13T00:41:59.179515713Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:41:59.181837 env[1437]: time="2025-09-13T00:41:59.181807199Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:41:59.181837 env[1437]: time="2025-09-13T00:41:59.181824942Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:41:59.181837 env[1437]: time="2025-09-13T00:41:59.181841313Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:41:59.181837 env[1437]: time="2025-09-13T00:41:59.181849168Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:42:00.025104 env[1437]: time="2025-09-13T00:42:00.025051121Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:42:00.025104 env[1437]: time="2025-09-13T00:42:00.025080476Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:42:00.025370 env[1437]: time="2025-09-13T00:42:00.025264351Z" level=info msg="Loading containers: start."
Sep 13 00:42:00.153395 kernel: Initializing XFRM netlink socket
Sep 13 00:42:00.187775 env[1437]: time="2025-09-13T00:42:00.187728642Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:42:00.239623 systemd-networkd[1074]: docker0: Link UP
Sep 13 00:42:00.259971 env[1437]: time="2025-09-13T00:42:00.259914370Z" level=info msg="Loading containers: done."
Sep 13 00:42:00.274412 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2616447318-merged.mount: Deactivated successfully.
Sep 13 00:42:00.275228 env[1437]: time="2025-09-13T00:42:00.275120982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:42:00.275646 env[1437]: time="2025-09-13T00:42:00.275619537Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:42:00.275752 env[1437]: time="2025-09-13T00:42:00.275729904Z" level=info msg="Daemon has completed initialization"
Sep 13 00:42:00.295416 systemd[1]: Started docker.service.
Sep 13 00:42:00.306444 env[1437]: time="2025-09-13T00:42:00.306353243Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:42:01.657216 env[1308]: time="2025-09-13T00:42:01.657148844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:42:02.125263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:42:02.125505 systemd[1]: Stopped kubelet.service.
Sep 13 00:42:02.127563 systemd[1]: Starting kubelet.service...
Sep 13 00:42:02.283709 systemd[1]: Started kubelet.service.
Sep 13 00:42:02.587304 kubelet[1575]: E0913 00:42:02.586445 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:42:02.589379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:42:02.589588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:42:03.002576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249781802.mount: Deactivated successfully.
Sep 13 00:42:04.752626 env[1308]: time="2025-09-13T00:42:04.752560610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:04.754533 env[1308]: time="2025-09-13T00:42:04.754472104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:04.756940 env[1308]: time="2025-09-13T00:42:04.756905156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:04.759208 env[1308]: time="2025-09-13T00:42:04.759159393Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:04.760669 env[1308]: time="2025-09-13T00:42:04.760624540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:42:04.761747 env[1308]: time="2025-09-13T00:42:04.761703413Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:42:06.650306 env[1308]: time="2025-09-13T00:42:06.650224229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:06.651907 env[1308]: time="2025-09-13T00:42:06.651881146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:06.653691 env[1308]: time="2025-09-13T00:42:06.653637178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:06.655460 env[1308]: time="2025-09-13T00:42:06.655424900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:06.656084 env[1308]: time="2025-09-13T00:42:06.656057046Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:42:06.656674 env[1308]: time="2025-09-13T00:42:06.656626614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:42:09.537420 env[1308]: time="2025-09-13T00:42:09.537205548Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:09.539800 env[1308]: time="2025-09-13T00:42:09.539726585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:09.541524 env[1308]: time="2025-09-13T00:42:09.541475184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:09.543403 env[1308]: time="2025-09-13T00:42:09.543354307Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:09.544180 env[1308]: time="2025-09-13T00:42:09.544128128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:42:09.545170 env[1308]: time="2025-09-13T00:42:09.545139334Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:42:10.862861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045828922.mount: Deactivated successfully.
Sep 13 00:42:12.625516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:42:12.625765 systemd[1]: Stopped kubelet.service.
Sep 13 00:42:12.627909 systemd[1]: Starting kubelet.service...
Sep 13 00:42:12.737276 systemd[1]: Started kubelet.service.
Sep 13 00:42:12.871412 kubelet[1592]: E0913 00:42:12.871335 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:42:12.873145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:42:12.873296 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:42:14.408256 env[1308]: time="2025-09-13T00:42:14.408178281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:14.482192 env[1308]: time="2025-09-13T00:42:14.482110414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:14.504656 env[1308]: time="2025-09-13T00:42:14.504590894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:14.528639 env[1308]: time="2025-09-13T00:42:14.528580485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:14.529415 env[1308]: time="2025-09-13T00:42:14.529348485Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:42:14.530266 env[1308]: time="2025-09-13T00:42:14.530225740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:42:16.175016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000180262.mount: Deactivated successfully.
Sep 13 00:42:18.370823 env[1308]: time="2025-09-13T00:42:18.370724053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.373881 env[1308]: time="2025-09-13T00:42:18.373805240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.378410 env[1308]: time="2025-09-13T00:42:18.378342858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.380831 env[1308]: time="2025-09-13T00:42:18.380744912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.382216 env[1308]: time="2025-09-13T00:42:18.382124579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:42:18.382907 env[1308]: time="2025-09-13T00:42:18.382850831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:42:18.876230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536260703.mount: Deactivated successfully.
Sep 13 00:42:18.884467 env[1308]: time="2025-09-13T00:42:18.884396088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.885689 env[1308]: time="2025-09-13T00:42:18.885654788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.887261 env[1308]: time="2025-09-13T00:42:18.887219863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.887774 env[1308]: time="2025-09-13T00:42:18.887735680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:42:18.888471 env[1308]: time="2025-09-13T00:42:18.888428198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:18.888632 env[1308]: time="2025-09-13T00:42:18.888493160Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:42:19.736552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717121993.mount: Deactivated successfully.
Sep 13 00:42:22.875310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:42:22.875550 systemd[1]: Stopped kubelet.service.
Sep 13 00:42:22.877389 systemd[1]: Starting kubelet.service...
Sep 13 00:42:23.037878 systemd[1]: Started kubelet.service.
Sep 13 00:42:23.073294 kubelet[1608]: E0913 00:42:23.073226 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:42:23.074953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:42:23.075096 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:42:25.814196 env[1308]: time="2025-09-13T00:42:25.814125267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:25.942500 env[1308]: time="2025-09-13T00:42:25.942430525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:26.028125 env[1308]: time="2025-09-13T00:42:26.028068901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:26.081554 env[1308]: time="2025-09-13T00:42:26.081395946Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:26.082402 env[1308]: time="2025-09-13T00:42:26.082347770Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:42:28.203266 systemd[1]: Stopped kubelet.service.
Sep 13 00:42:28.205911 systemd[1]: Starting kubelet.service...
Sep 13 00:42:28.228418 systemd[1]: Reloading.
Sep 13 00:42:28.299149 /usr/lib/systemd/system-generators/torcx-generator[1666]: time="2025-09-13T00:42:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:42:28.299178 /usr/lib/systemd/system-generators/torcx-generator[1666]: time="2025-09-13T00:42:28Z" level=info msg="torcx already run"
Sep 13 00:42:28.533739 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:42:28.533761 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:42:28.555861 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:42:28.626580 systemd[1]: Started kubelet.service.
Sep 13 00:42:28.628108 systemd[1]: Stopping kubelet.service...
Sep 13 00:42:28.628437 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:42:28.628688 systemd[1]: Stopped kubelet.service.
Sep 13 00:42:28.630324 systemd[1]: Starting kubelet.service...
Sep 13 00:42:28.733691 systemd[1]: Started kubelet.service.
Sep 13 00:42:28.782423 kubelet[1726]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:42:28.782423 kubelet[1726]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:42:28.782423 kubelet[1726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:42:28.782882 kubelet[1726]: I0913 00:42:28.782497 1726 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:42:29.294198 kubelet[1726]: I0913 00:42:29.294121 1726 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:42:29.294198 kubelet[1726]: I0913 00:42:29.294179 1726 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:42:29.294489 kubelet[1726]: I0913 00:42:29.294464 1726 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:42:29.407329 kubelet[1726]: E0913 00:42:29.407269 1726 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:29.408278 kubelet[1726]: I0913 00:42:29.408247 1726 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:42:29.418174 kubelet[1726]: E0913 00:42:29.418128 1726 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:42:29.418174 kubelet[1726]: I0913 00:42:29.418166 1726 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:42:29.424012 kubelet[1726]: I0913 00:42:29.423981 1726 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:42:29.425109 kubelet[1726]: I0913 00:42:29.425076 1726 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:42:29.425261 kubelet[1726]: I0913 00:42:29.425223 1726 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:42:29.425480 kubelet[1726]: I0913 00:42:29.425257 1726 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:42:29.425580 kubelet[1726]: I0913 00:42:29.425498 1726 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:42:29.425580 kubelet[1726]: I0913 00:42:29.425508 1726 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:42:29.425639 kubelet[1726]: I0913 00:42:29.425632 1726 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:42:29.435622 kubelet[1726]: I0913 00:42:29.435593 1726 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:42:29.435622 kubelet[1726]: I0913 00:42:29.435624 1726 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:42:29.435702 kubelet[1726]: I0913 00:42:29.435658 1726 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:42:29.435702 kubelet[1726]: I0913 00:42:29.435683 1726 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:42:29.559449 kubelet[1726]: W0913 00:42:29.559265 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:29.559449 kubelet[1726]: E0913 00:42:29.559344 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:29.560721 kubelet[1726]: W0913 00:42:29.560655 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:29.560786 kubelet[1726]: E0913 00:42:29.560722 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:29.562350 kubelet[1726]: I0913 00:42:29.562327 1726 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:42:29.562744 kubelet[1726]: I0913 00:42:29.562731 1726 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:42:29.563925 kubelet[1726]: W0913 00:42:29.563893 1726 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:42:29.567000 kubelet[1726]: I0913 00:42:29.566950 1726 server.go:1274] "Started kubelet"
Sep 13 00:42:29.567078 kubelet[1726]: I0913 00:42:29.567046 1726 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:42:29.568093 kubelet[1726]: I0913 00:42:29.568064 1726 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:42:29.570272 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 00:42:29.570465 kubelet[1726]: I0913 00:42:29.570435 1726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:42:29.574794 kubelet[1726]: I0913 00:42:29.574746 1726 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:42:29.575008 kubelet[1726]: I0913 00:42:29.574981 1726 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:42:29.575437 kubelet[1726]: I0913 00:42:29.575416 1726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:42:29.576352 kubelet[1726]: I0913 00:42:29.576337 1726 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:42:29.580497 kubelet[1726]: E0913 00:42:29.580477 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:29.581040 kubelet[1726]: E0913 00:42:29.581011 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms"
Sep 13 00:42:29.581125 kubelet[1726]: I0913 00:42:29.581072 1726 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:42:29.581220 kubelet[1726]: I0913 00:42:29.581201 1726 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:42:29.581299 kubelet[1726]: E0913 00:42:29.576084 1726 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b0cd1c0fc38c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:42:29.566915468 +0000 UTC m=+0.824535456,LastTimestamp:2025-09-13 00:42:29.566915468 +0000 UTC m=+0.824535456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:42:29.581600 kubelet[1726]: W0913 00:42:29.581562 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:29.581672 kubelet[1726]: E0913 00:42:29.581605 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:29.582264 kubelet[1726]: I0913 00:42:29.582244 1726 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:42:29.582351 kubelet[1726]: I0913 00:42:29.582325 1726 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:42:29.583488 kubelet[1726]: E0913 00:42:29.583464 1726 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:42:29.586663 kubelet[1726]: I0913 00:42:29.586642 1726 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:42:29.618024 kubelet[1726]: I0913 00:42:29.617964 1726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:42:29.619011 kubelet[1726]: I0913 00:42:29.618990 1726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:42:29.619086 kubelet[1726]: I0913 00:42:29.619029 1726 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:42:29.619086 kubelet[1726]: I0913 00:42:29.619058 1726 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:42:29.619159 kubelet[1726]: E0913 00:42:29.619113 1726 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:42:29.619872 kubelet[1726]: W0913 00:42:29.619831 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:29.619956 kubelet[1726]: E0913 00:42:29.619878 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:29.620980 kubelet[1726]: I0913 00:42:29.620961 1726 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:42:29.620980 kubelet[1726]: I0913 00:42:29.620979 1726 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:42:29.621073 kubelet[1726]: I0913 00:42:29.621002 1726 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:42:29.681318 kubelet[1726]: E0913 00:42:29.681241 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:29.719545 kubelet[1726]: E0913 00:42:29.719493 1726 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:42:29.781941 kubelet[1726]: E0913 00:42:29.781903 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:29.782304 kubelet[1726]: E0913 00:42:29.782257 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms"
Sep 13 00:42:29.882616 kubelet[1726]: E0913 00:42:29.882565 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:29.919997 kubelet[1726]: E0913 00:42:29.919935 1726 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:42:29.983323 kubelet[1726]: E0913 00:42:29.983269 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.084100 kubelet[1726]: E0913 00:42:30.084036 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.183135 kubelet[1726]: E0913 00:42:30.182948 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms"
Sep 13 00:42:30.184971 kubelet[1726]: E0913 00:42:30.184937 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.285396 kubelet[1726]: E0913 00:42:30.285337 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.320562 kubelet[1726]: E0913 00:42:30.320518 1726 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:42:30.386110 kubelet[1726]: E0913 00:42:30.386026 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.486351 kubelet[1726]: E0913 00:42:30.486204 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.486351 kubelet[1726]: W0913 00:42:30.486232 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:30.486351 kubelet[1726]: E0913 00:42:30.486295 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:30.586759 kubelet[1726]: E0913 00:42:30.586681 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.619282 kubelet[1726]: W0913 00:42:30.619239 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:30.619334 kubelet[1726]: E0913 00:42:30.619302 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:30.687494 kubelet[1726]: E0913 00:42:30.687442 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.790476 kubelet[1726]: E0913 00:42:30.790343 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.812604 kubelet[1726]: W0913 00:42:30.812526 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:30.812663 kubelet[1726]: E0913 00:42:30.812613 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:30.891394 kubelet[1726]: E0913 00:42:30.891340 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:30.909085 kubelet[1726]: W0913 00:42:30.909005 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 13 00:42:30.909143 kubelet[1726]: E0913 00:42:30.909083 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:30.984228 kubelet[1726]: E0913 00:42:30.984162 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s"
Sep 13 00:42:30.992671 kubelet[1726]: E0913 00:42:30.992616 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.093241 kubelet[1726]: E0913 00:42:31.093106 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.121324 kubelet[1726]: E0913 00:42:31.121275 1726 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:42:31.193932 kubelet[1726]: E0913 00:42:31.193858 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.294519 kubelet[1726]: E0913 00:42:31.294461 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.395158 kubelet[1726]: E0913 00:42:31.395107 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.495968 kubelet[1726]: E0913 00:42:31.495892 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.546031 kubelet[1726]: E0913 00:42:31.545996 1726 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:42:31.596318 kubelet[1726]: E0913 00:42:31.596272 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.697124 kubelet[1726]: E0913 00:42:31.696996 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.769947 kubelet[1726]: I0913 00:42:31.769897 1726 policy_none.go:49] "None policy: Start"
Sep 13 00:42:31.770799 kubelet[1726]: I0913 00:42:31.770766 1726 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:42:31.770799 kubelet[1726]: I0913 00:42:31.770799 1726 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:42:31.797277 kubelet[1726]: E0913 00:42:31.797234 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.898039 kubelet[1726]: E0913 00:42:31.897975 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:31.998856 kubelet[1726]: E0913 00:42:31.998774 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:32.099769 kubelet[1726]: E0913 00:42:32.099716 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:42:32.147556 kubelet[1726]: W0913 00:42:32.147517 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443:
connect: connection refused Sep 13 00:42:32.147625 kubelet[1726]: E0913 00:42:32.147568 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:32.200308 kubelet[1726]: E0913 00:42:32.200253 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:32.301083 kubelet[1726]: E0913 00:42:32.300969 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:32.401588 kubelet[1726]: E0913 00:42:32.401518 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:32.488352 kubelet[1726]: I0913 00:42:32.488293 1726 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:42:32.488646 kubelet[1726]: I0913 00:42:32.488624 1726 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:42:32.488742 kubelet[1726]: I0913 00:42:32.488664 1726 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:42:32.490103 kubelet[1726]: I0913 00:42:32.490008 1726 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:42:32.491512 kubelet[1726]: E0913 00:42:32.491481 1726 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:42:32.585295 kubelet[1726]: E0913 00:42:32.585118 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Sep 13 00:42:32.590947 kubelet[1726]: I0913 00:42:32.590894 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:42:32.591389 kubelet[1726]: E0913 00:42:32.591328 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 13 00:42:32.662700 kubelet[1726]: W0913 00:42:32.662656 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 13 00:42:32.662903 kubelet[1726]: E0913 00:42:32.662716 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:32.793939 kubelet[1726]: I0913 00:42:32.793886 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:42:32.794317 kubelet[1726]: E0913 00:42:32.794283 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 13 00:42:32.801605 kubelet[1726]: I0913 00:42:32.801555 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:32.801605 
kubelet[1726]: I0913 00:42:32.801593 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:32.801709 kubelet[1726]: I0913 00:42:32.801623 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6311d2394157ca941ccb1379b58b5132-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6311d2394157ca941ccb1379b58b5132\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:32.801709 kubelet[1726]: I0913 00:42:32.801649 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6311d2394157ca941ccb1379b58b5132-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6311d2394157ca941ccb1379b58b5132\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:32.801709 kubelet[1726]: I0913 00:42:32.801671 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6311d2394157ca941ccb1379b58b5132-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6311d2394157ca941ccb1379b58b5132\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:32.801709 kubelet[1726]: I0913 00:42:32.801695 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:32.801839 kubelet[1726]: 
I0913 00:42:32.801716 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:32.801839 kubelet[1726]: I0913 00:42:32.801748 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:32.801839 kubelet[1726]: I0913 00:42:32.801769 1726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:42:33.029285 kubelet[1726]: E0913 00:42:33.029218 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:33.029905 kubelet[1726]: E0913 00:42:33.029859 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:33.030137 env[1308]: time="2025-09-13T00:42:33.030094881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:42:33.030551 env[1308]: time="2025-09-13T00:42:33.030495724Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:42:33.035993 kubelet[1726]: E0913 00:42:33.035969 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:33.036375 env[1308]: time="2025-09-13T00:42:33.036322901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6311d2394157ca941ccb1379b58b5132,Namespace:kube-system,Attempt:0,}" Sep 13 00:42:33.190114 kubelet[1726]: W0913 00:42:33.190032 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 13 00:42:33.190114 kubelet[1726]: E0913 00:42:33.190109 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:33.195931 kubelet[1726]: I0913 00:42:33.195874 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:42:33.196248 kubelet[1726]: E0913 00:42:33.196222 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 13 00:42:33.510225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523791975.mount: Deactivated successfully. 
Sep 13 00:42:33.514285 env[1308]: time="2025-09-13T00:42:33.514233554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.518082 env[1308]: time="2025-09-13T00:42:33.518050802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.519585 env[1308]: time="2025-09-13T00:42:33.519556714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.521213 env[1308]: time="2025-09-13T00:42:33.521189508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.523546 env[1308]: time="2025-09-13T00:42:33.523478078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.524617 env[1308]: time="2025-09-13T00:42:33.524568751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.526298 env[1308]: time="2025-09-13T00:42:33.526267792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.527525 env[1308]: time="2025-09-13T00:42:33.527484514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.529604 env[1308]: time="2025-09-13T00:42:33.529571312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.532348 env[1308]: time="2025-09-13T00:42:33.532319426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.533021 env[1308]: time="2025-09-13T00:42:33.533002655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.533644 env[1308]: time="2025-09-13T00:42:33.533620018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:42:33.566140 env[1308]: time="2025-09-13T00:42:33.565012835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:42:33.566140 env[1308]: time="2025-09-13T00:42:33.565087287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:42:33.566140 env[1308]: time="2025-09-13T00:42:33.565103448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:42:33.566140 env[1308]: time="2025-09-13T00:42:33.565290042Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30f7708c7644d601038b1a1c12d0e7f0862e9ef1dcca3bb61491da3042f4e69b pid=1768 runtime=io.containerd.runc.v2 Sep 13 00:42:33.567978 kubelet[1726]: W0913 00:42:33.567836 1726 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 13 00:42:33.567978 kubelet[1726]: E0913 00:42:33.567938 1726 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:42:33.594587 env[1308]: time="2025-09-13T00:42:33.593927341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:42:33.594587 env[1308]: time="2025-09-13T00:42:33.593984960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:42:33.594587 env[1308]: time="2025-09-13T00:42:33.594014036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:42:33.594587 env[1308]: time="2025-09-13T00:42:33.594484079Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc711ef58e98d9df7e0ef9267822efc8e8a0667d9b60a3f9c19eb6340f100833 pid=1798 runtime=io.containerd.runc.v2 Sep 13 00:42:33.615446 env[1308]: time="2025-09-13T00:42:33.615311001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:42:33.615446 env[1308]: time="2025-09-13T00:42:33.615438664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:42:33.615669 env[1308]: time="2025-09-13T00:42:33.615477678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:42:33.615669 env[1308]: time="2025-09-13T00:42:33.615647911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcc4b88014b5bef925a2e386c940e4936e04da3de2539ad9681063497ab8733b pid=1787 runtime=io.containerd.runc.v2 Sep 13 00:42:33.708092 env[1308]: time="2025-09-13T00:42:33.708006453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"30f7708c7644d601038b1a1c12d0e7f0862e9ef1dcca3bb61491da3042f4e69b\"" Sep 13 00:42:33.709330 kubelet[1726]: E0913 00:42:33.709291 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:33.711583 env[1308]: time="2025-09-13T00:42:33.711539329Z" level=info msg="CreateContainer within sandbox \"30f7708c7644d601038b1a1c12d0e7f0862e9ef1dcca3bb61491da3042f4e69b\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:42:33.767762 env[1308]: time="2025-09-13T00:42:33.766837931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc711ef58e98d9df7e0ef9267822efc8e8a0667d9b60a3f9c19eb6340f100833\"" Sep 13 00:42:33.767894 kubelet[1726]: E0913 00:42:33.767317 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:33.769977 env[1308]: time="2025-09-13T00:42:33.769933846Z" level=info msg="CreateContainer within sandbox \"bc711ef58e98d9df7e0ef9267822efc8e8a0667d9b60a3f9c19eb6340f100833\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:42:33.786668 env[1308]: time="2025-09-13T00:42:33.786618103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6311d2394157ca941ccb1379b58b5132,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc4b88014b5bef925a2e386c940e4936e04da3de2539ad9681063497ab8733b\"" Sep 13 00:42:33.787633 kubelet[1726]: E0913 00:42:33.787607 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:33.789598 env[1308]: time="2025-09-13T00:42:33.789559714Z" level=info msg="CreateContainer within sandbox \"fcc4b88014b5bef925a2e386c940e4936e04da3de2539ad9681063497ab8733b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:42:33.841231 env[1308]: time="2025-09-13T00:42:33.841143755Z" level=info msg="CreateContainer within sandbox \"bc711ef58e98d9df7e0ef9267822efc8e8a0667d9b60a3f9c19eb6340f100833\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"113fbdaf2e6d3bda7e1a63ec56d3101da5c55b8ce692f81e986f40d766af0f52\"" Sep 13 00:42:33.842100 env[1308]: time="2025-09-13T00:42:33.842041722Z" level=info msg="StartContainer for \"113fbdaf2e6d3bda7e1a63ec56d3101da5c55b8ce692f81e986f40d766af0f52\"" Sep 13 00:42:33.843695 env[1308]: time="2025-09-13T00:42:33.843633408Z" level=info msg="CreateContainer within sandbox \"30f7708c7644d601038b1a1c12d0e7f0862e9ef1dcca3bb61491da3042f4e69b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b85244446f108b762fff319799b688ff107d5e0f32e38b7c195ee3b75537887\"" Sep 13 00:42:33.844158 env[1308]: time="2025-09-13T00:42:33.844135893Z" level=info msg="StartContainer for \"8b85244446f108b762fff319799b688ff107d5e0f32e38b7c195ee3b75537887\"" Sep 13 00:42:33.845949 env[1308]: time="2025-09-13T00:42:33.845903954Z" level=info msg="CreateContainer within sandbox \"fcc4b88014b5bef925a2e386c940e4936e04da3de2539ad9681063497ab8733b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"89e965002131b46426c3260a7e3912d6b50c1ab29caf02cac9d566bc7e2deab8\"" Sep 13 00:42:33.846563 env[1308]: time="2025-09-13T00:42:33.846512311Z" level=info msg="StartContainer for \"89e965002131b46426c3260a7e3912d6b50c1ab29caf02cac9d566bc7e2deab8\"" Sep 13 00:42:33.954373 env[1308]: time="2025-09-13T00:42:33.954288771Z" level=info msg="StartContainer for \"113fbdaf2e6d3bda7e1a63ec56d3101da5c55b8ce692f81e986f40d766af0f52\" returns successfully" Sep 13 00:42:33.981601 env[1308]: time="2025-09-13T00:42:33.981525207Z" level=info msg="StartContainer for \"8b85244446f108b762fff319799b688ff107d5e0f32e38b7c195ee3b75537887\" returns successfully" Sep 13 00:42:33.989627 env[1308]: time="2025-09-13T00:42:33.989553038Z" level=info msg="StartContainer for \"89e965002131b46426c3260a7e3912d6b50c1ab29caf02cac9d566bc7e2deab8\" returns successfully" Sep 13 00:42:33.998647 kubelet[1726]: I0913 00:42:33.998607 1726 kubelet_node_status.go:72] "Attempting to register node" 
node="localhost" Sep 13 00:42:33.999137 kubelet[1726]: E0913 00:42:33.999070 1726 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 13 00:42:34.630485 kubelet[1726]: E0913 00:42:34.630446 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:34.632392 kubelet[1726]: E0913 00:42:34.632352 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:34.633776 kubelet[1726]: E0913 00:42:34.633757 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:34.903972 update_engine[1295]: I0913 00:42:34.903455 1295 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:42:35.600573 kubelet[1726]: I0913 00:42:35.600522 1726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:42:35.636079 kubelet[1726]: E0913 00:42:35.636026 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:35.756865 kubelet[1726]: I0913 00:42:35.756815 1726 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:42:35.756865 kubelet[1726]: E0913 00:42:35.756859 1726 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:42:35.800876 kubelet[1726]: E0913 00:42:35.800815 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:35.902001 kubelet[1726]: E0913 00:42:35.901934 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.002909 kubelet[1726]: E0913 00:42:36.002836 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.104039 kubelet[1726]: E0913 00:42:36.103970 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.204929 kubelet[1726]: E0913 00:42:36.204755 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.305084 kubelet[1726]: E0913 00:42:36.305023 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.406004 kubelet[1726]: E0913 00:42:36.405929 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.507047 kubelet[1726]: E0913 00:42:36.506887 1726 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.608048 kubelet[1726]: E0913 00:42:36.607992 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.709183 kubelet[1726]: E0913 00:42:36.709125 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.772293 kubelet[1726]: E0913 00:42:36.772184 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:36.809636 kubelet[1726]: E0913 00:42:36.809593 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:36.910161 kubelet[1726]: E0913 00:42:36.910068 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.010776 kubelet[1726]: E0913 00:42:37.010698 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.111231 kubelet[1726]: E0913 00:42:37.111042 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.211799 kubelet[1726]: E0913 00:42:37.211716 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.312385 kubelet[1726]: E0913 00:42:37.312320 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.413100 kubelet[1726]: E0913 00:42:37.413049 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.514281 kubelet[1726]: E0913 00:42:37.514214 1726 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.614498 kubelet[1726]: E0913 00:42:37.614422 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.714774 kubelet[1726]: E0913 00:42:37.714615 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.815233 kubelet[1726]: E0913 00:42:37.815185 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:37.916124 kubelet[1726]: E0913 00:42:37.916069 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.016772 kubelet[1726]: E0913 00:42:38.016640 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.117004 kubelet[1726]: E0913 00:42:38.116934 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.217671 kubelet[1726]: E0913 00:42:38.217620 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.233798 systemd[1]: Reloading. 
Sep 13 00:42:38.302175 /usr/lib/systemd/system-generators/torcx-generator[2037]: time="2025-09-13T00:42:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:42:38.302201 /usr/lib/systemd/system-generators/torcx-generator[2037]: time="2025-09-13T00:42:38Z" level=info msg="torcx already run" Sep 13 00:42:38.318763 kubelet[1726]: E0913 00:42:38.318710 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.379438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:42:38.379454 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:42:38.401822 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:42:38.419910 kubelet[1726]: E0913 00:42:38.419817 1726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.479673 systemd[1]: Stopping kubelet.service... Sep 13 00:42:38.506810 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:42:38.507144 systemd[1]: Stopped kubelet.service. Sep 13 00:42:38.509012 systemd[1]: Starting kubelet.service... Sep 13 00:42:38.605103 systemd[1]: Started kubelet.service. Sep 13 00:42:38.655327 kubelet[2093]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:42:38.655327 kubelet[2093]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:42:38.655327 kubelet[2093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:42:38.655884 kubelet[2093]: I0913 00:42:38.655395 2093 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:42:38.661975 kubelet[2093]: I0913 00:42:38.661930 2093 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:42:38.661975 kubelet[2093]: I0913 00:42:38.661958 2093 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:42:38.662269 kubelet[2093]: I0913 00:42:38.662245 2093 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:42:38.663606 kubelet[2093]: I0913 00:42:38.663589 2093 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:42:38.665838 kubelet[2093]: I0913 00:42:38.665810 2093 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:42:38.669251 kubelet[2093]: E0913 00:42:38.669192 2093 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:42:38.669251 kubelet[2093]: I0913 00:42:38.669250 2093 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 13 00:42:38.672997 kubelet[2093]: I0913 00:42:38.672960 2093 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:42:38.673371 kubelet[2093]: I0913 00:42:38.673330 2093 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:42:38.673524 kubelet[2093]: I0913 00:42:38.673482 2093 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:42:38.673692 kubelet[2093]: I0913 00:42:38.673514 2093 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experiment
alMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:42:38.673829 kubelet[2093]: I0913 00:42:38.673697 2093 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:42:38.673829 kubelet[2093]: I0913 00:42:38.673709 2093 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:42:38.673829 kubelet[2093]: I0913 00:42:38.673749 2093 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:42:38.673942 kubelet[2093]: I0913 00:42:38.673850 2093 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:42:38.673942 kubelet[2093]: I0913 00:42:38.673866 2093 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:42:38.673942 kubelet[2093]: I0913 00:42:38.673897 2093 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:42:38.673942 kubelet[2093]: I0913 00:42:38.673916 2093 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:42:38.678217 kubelet[2093]: I0913 00:42:38.674949 2093 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:42:38.678217 kubelet[2093]: I0913 00:42:38.675418 2093 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:42:38.678217 kubelet[2093]: I0913 00:42:38.675866 2093 server.go:1274] "Started kubelet" Sep 13 00:42:38.678217 kubelet[2093]: I0913 00:42:38.678043 2093 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:42:38.684161 kubelet[2093]: I0913 00:42:38.684085 2093 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:42:38.685063 kubelet[2093]: I0913 00:42:38.685040 2093 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:42:38.686063 kubelet[2093]: 
I0913 00:42:38.686027 2093 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:42:38.686231 kubelet[2093]: I0913 00:42:38.686208 2093 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:42:38.686463 kubelet[2093]: I0913 00:42:38.686435 2093 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:42:38.688605 kubelet[2093]: I0913 00:42:38.688570 2093 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:42:38.689840 kubelet[2093]: I0913 00:42:38.689818 2093 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:42:38.689969 kubelet[2093]: I0913 00:42:38.689938 2093 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:42:38.691294 kubelet[2093]: I0913 00:42:38.691262 2093 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:42:38.691494 kubelet[2093]: I0913 00:42:38.691465 2093 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:42:38.691989 kubelet[2093]: E0913 00:42:38.691954 2093 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:42:38.694102 kubelet[2093]: I0913 00:42:38.694070 2093 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:42:38.695651 kubelet[2093]: I0913 00:42:38.695617 2093 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:42:38.696675 kubelet[2093]: I0913 00:42:38.696652 2093 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:42:38.696772 kubelet[2093]: I0913 00:42:38.696679 2093 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:42:38.696772 kubelet[2093]: I0913 00:42:38.696703 2093 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:42:38.696893 kubelet[2093]: E0913 00:42:38.696769 2093 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:42:38.700085 kubelet[2093]: E0913 00:42:38.700031 2093 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:42:38.738155 kubelet[2093]: I0913 00:42:38.738110 2093 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:42:38.738155 kubelet[2093]: I0913 00:42:38.738140 2093 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:42:38.738155 kubelet[2093]: I0913 00:42:38.738163 2093 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:42:38.738451 kubelet[2093]: I0913 00:42:38.738402 2093 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:42:38.738504 kubelet[2093]: I0913 00:42:38.738447 2093 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:42:38.738504 kubelet[2093]: I0913 00:42:38.738474 2093 policy_none.go:49] "None policy: Start" Sep 13 00:42:38.739116 kubelet[2093]: I0913 00:42:38.739089 2093 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:42:38.739167 kubelet[2093]: I0913 00:42:38.739120 2093 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:42:38.739441 kubelet[2093]: I0913 00:42:38.739411 2093 state_mem.go:75] "Updated machine memory state" Sep 13 00:42:38.741651 kubelet[2093]: I0913 00:42:38.741620 2093 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:42:38.741874 kubelet[2093]: I0913 00:42:38.741847 2093 
eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:42:38.741924 kubelet[2093]: I0913 00:42:38.741871 2093 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:42:38.742288 kubelet[2093]: I0913 00:42:38.742175 2093 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:42:38.845729 kubelet[2093]: I0913 00:42:38.845667 2093 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:42:38.857774 kubelet[2093]: I0913 00:42:38.857634 2093 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:42:38.857774 kubelet[2093]: I0913 00:42:38.857747 2093 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:42:38.992716 kubelet[2093]: I0913 00:42:38.992633 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6311d2394157ca941ccb1379b58b5132-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6311d2394157ca941ccb1379b58b5132\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:38.992716 kubelet[2093]: I0913 00:42:38.992704 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:38.992969 kubelet[2093]: I0913 00:42:38.992753 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 
00:42:38.992969 kubelet[2093]: I0913 00:42:38.992781 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:38.992969 kubelet[2093]: I0913 00:42:38.992811 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:38.992969 kubelet[2093]: I0913 00:42:38.992838 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:42:38.992969 kubelet[2093]: I0913 00:42:38.992878 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6311d2394157ca941ccb1379b58b5132-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6311d2394157ca941ccb1379b58b5132\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:38.993088 kubelet[2093]: I0913 00:42:38.992899 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6311d2394157ca941ccb1379b58b5132-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6311d2394157ca941ccb1379b58b5132\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:38.993088 
kubelet[2093]: I0913 00:42:38.992920 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:39.108419 kubelet[2093]: E0913 00:42:39.108252 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:39.108657 kubelet[2093]: E0913 00:42:39.108628 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:39.108820 kubelet[2093]: E0913 00:42:39.108682 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:39.296048 sudo[2127]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:42:39.296257 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:42:39.675077 kubelet[2093]: I0913 00:42:39.675019 2093 apiserver.go:52] "Watching apiserver" Sep 13 00:42:39.691671 kubelet[2093]: I0913 00:42:39.691625 2093 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:42:39.718796 kubelet[2093]: E0913 00:42:39.718137 2093 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:42:39.718796 kubelet[2093]: E0913 00:42:39.718342 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:39.745753 kubelet[2093]: E0913 00:42:39.745684 2093 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:42:39.745978 kubelet[2093]: E0913 00:42:39.745949 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:39.746206 kubelet[2093]: E0913 00:42:39.746164 2093 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:42:39.746614 kubelet[2093]: E0913 00:42:39.746583 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:39.782130 kubelet[2093]: I0913 00:42:39.782069 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7820036 podStartE2EDuration="1.7820036s" podCreationTimestamp="2025-09-13 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:42:39.781696248 +0000 UTC m=+1.170977979" watchObservedRunningTime="2025-09-13 00:42:39.7820036 +0000 UTC m=+1.171285321" Sep 13 00:42:39.816386 kubelet[2093]: I0913 00:42:39.816260 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.816232834 podStartE2EDuration="1.816232834s" podCreationTimestamp="2025-09-13 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:42:39.816193629 +0000 UTC 
m=+1.205475350" watchObservedRunningTime="2025-09-13 00:42:39.816232834 +0000 UTC m=+1.205514565" Sep 13 00:42:39.838348 kubelet[2093]: I0913 00:42:39.838259 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8382053699999998 podStartE2EDuration="1.83820537s" podCreationTimestamp="2025-09-13 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:42:39.827789386 +0000 UTC m=+1.217071117" watchObservedRunningTime="2025-09-13 00:42:39.83820537 +0000 UTC m=+1.227487101" Sep 13 00:42:39.886128 sudo[2127]: pam_unix(sudo:session): session closed for user root Sep 13 00:42:40.713777 kubelet[2093]: E0913 00:42:40.713743 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:40.714257 kubelet[2093]: E0913 00:42:40.713847 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:40.714257 kubelet[2093]: E0913 00:42:40.714068 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:42.279842 sudo[1425]: pam_unix(sudo:session): session closed for user root Sep 13 00:42:42.281424 sshd[1419]: pam_unix(sshd:session): session closed for user core Sep 13 00:42:42.284175 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:33712.service: Deactivated successfully. Sep 13 00:42:42.285213 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:42:42.285656 systemd-logind[1291]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:42:42.286451 systemd-logind[1291]: Removed session 5. 
Sep 13 00:42:42.311391 kubelet[2093]: E0913 00:42:42.311353 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:44.385561 kubelet[2093]: I0913 00:42:44.385499 2093 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:42:44.386045 env[1308]: time="2025-09-13T00:42:44.385876552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:42:44.386245 kubelet[2093]: I0913 00:42:44.386047 2093 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:42:45.941939 kubelet[2093]: I0913 00:42:45.941876 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/375c6819-5884-4087-9944-ed059eb977db-kube-proxy\") pod \"kube-proxy-whvk8\" (UID: \"375c6819-5884-4087-9944-ed059eb977db\") " pod="kube-system/kube-proxy-whvk8" Sep 13 00:42:45.941939 kubelet[2093]: I0913 00:42:45.941914 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z6gx\" (UniqueName: \"kubernetes.io/projected/375c6819-5884-4087-9944-ed059eb977db-kube-api-access-4z6gx\") pod \"kube-proxy-whvk8\" (UID: \"375c6819-5884-4087-9944-ed059eb977db\") " pod="kube-system/kube-proxy-whvk8" Sep 13 00:42:45.941939 kubelet[2093]: I0913 00:42:45.941934 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/375c6819-5884-4087-9944-ed059eb977db-xtables-lock\") pod \"kube-proxy-whvk8\" (UID: \"375c6819-5884-4087-9944-ed059eb977db\") " pod="kube-system/kube-proxy-whvk8" Sep 13 00:42:45.941939 kubelet[2093]: I0913 00:42:45.941952 2093 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/375c6819-5884-4087-9944-ed059eb977db-lib-modules\") pod \"kube-proxy-whvk8\" (UID: \"375c6819-5884-4087-9944-ed059eb977db\") " pod="kube-system/kube-proxy-whvk8" Sep 13 00:42:46.042480 kubelet[2093]: I0913 00:42:46.042408 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-bpf-maps\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042480 kubelet[2093]: I0913 00:42:46.042458 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hostproc\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042480 kubelet[2093]: I0913 00:42:46.042474 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hsb9\" (UniqueName: \"kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-kube-api-access-6hsb9\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042480 kubelet[2093]: I0913 00:42:46.042490 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cni-path\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042768 kubelet[2093]: I0913 00:42:46.042508 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-net\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042768 kubelet[2093]: I0913 00:42:46.042707 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-run\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042768 kubelet[2093]: I0913 00:42:46.042753 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-cgroup\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042967 kubelet[2093]: I0913 00:42:46.042920 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-xtables-lock\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.042967 kubelet[2093]: I0913 00:42:46.042955 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-config-path\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.043083 kubelet[2093]: I0913 00:42:46.042976 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-etc-cni-netd\") pod \"cilium-qk4gj\" (UID: 
\"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.043083 kubelet[2093]: I0913 00:42:46.042995 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5588c8ea-0741-4042-a01c-31bd7cf40b6c-clustermesh-secrets\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.043083 kubelet[2093]: I0913 00:42:46.043008 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hubble-tls\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.043083 kubelet[2093]: I0913 00:42:46.043025 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-lib-modules\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.043083 kubelet[2093]: I0913 00:42:46.043045 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-kernel\") pod \"cilium-qk4gj\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " pod="kube-system/cilium-qk4gj" Sep 13 00:42:46.143422 kubelet[2093]: I0913 00:42:46.143379 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlztw\" (UniqueName: \"kubernetes.io/projected/cd1ffa74-38c8-44cb-b1a6-7630de316962-kube-api-access-mlztw\") pod \"cilium-operator-5d85765b45-hfj8c\" (UID: \"cd1ffa74-38c8-44cb-b1a6-7630de316962\") " 
pod="kube-system/cilium-operator-5d85765b45-hfj8c" Sep 13 00:42:46.143696 kubelet[2093]: I0913 00:42:46.143675 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd1ffa74-38c8-44cb-b1a6-7630de316962-cilium-config-path\") pod \"cilium-operator-5d85765b45-hfj8c\" (UID: \"cd1ffa74-38c8-44cb-b1a6-7630de316962\") " pod="kube-system/cilium-operator-5d85765b45-hfj8c" Sep 13 00:42:46.143800 kubelet[2093]: I0913 00:42:46.143760 2093 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:42:46.380689 kubelet[2093]: E0913 00:42:46.380633 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:46.381404 env[1308]: time="2025-09-13T00:42:46.381178976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hfj8c,Uid:cd1ffa74-38c8-44cb-b1a6-7630de316962,Namespace:kube-system,Attempt:0,}" Sep 13 00:42:46.444215 kubelet[2093]: E0913 00:42:46.444154 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:46.444593 env[1308]: time="2025-09-13T00:42:46.444536103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-whvk8,Uid:375c6819-5884-4087-9944-ed059eb977db,Namespace:kube-system,Attempt:0,}" Sep 13 00:42:46.463321 kubelet[2093]: E0913 00:42:46.463269 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:42:46.463898 env[1308]: time="2025-09-13T00:42:46.463857767Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qk4gj,Uid:5588c8ea-0741-4042-a01c-31bd7cf40b6c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:42:46.894163 env[1308]: time="2025-09-13T00:42:46.894059617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:42:46.894163 env[1308]: time="2025-09-13T00:42:46.894154776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:42:46.894392 env[1308]: time="2025-09-13T00:42:46.894186415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:42:46.894392 env[1308]: time="2025-09-13T00:42:46.894329846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d pid=2187 runtime=io.containerd.runc.v2
Sep 13 00:42:46.899925 env[1308]: time="2025-09-13T00:42:46.899859136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:42:46.899925 env[1308]: time="2025-09-13T00:42:46.899919820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:42:46.900146 env[1308]: time="2025-09-13T00:42:46.899943054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:42:46.900146 env[1308]: time="2025-09-13T00:42:46.900053522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/493b27ba4eb7ebb67c1c5ad4c477faf87e5c31a03a49c6c6603b7465bcb9d9f3 pid=2214 runtime=io.containerd.runc.v2
Sep 13 00:42:46.900834 env[1308]: time="2025-09-13T00:42:46.900780824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:42:46.900904 env[1308]: time="2025-09-13T00:42:46.900838002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:42:46.900904 env[1308]: time="2025-09-13T00:42:46.900862027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:42:46.901042 env[1308]: time="2025-09-13T00:42:46.901005839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba pid=2213 runtime=io.containerd.runc.v2
Sep 13 00:42:46.954543 env[1308]: time="2025-09-13T00:42:46.954486354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-whvk8,Uid:375c6819-5884-4087-9944-ed059eb977db,Namespace:kube-system,Attempt:0,} returns sandbox id \"493b27ba4eb7ebb67c1c5ad4c477faf87e5c31a03a49c6c6603b7465bcb9d9f3\""
Sep 13 00:42:46.955155 kubelet[2093]: E0913 00:42:46.955120 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:46.959534 env[1308]: time="2025-09-13T00:42:46.959451881Z" level=info msg="CreateContainer within sandbox \"493b27ba4eb7ebb67c1c5ad4c477faf87e5c31a03a49c6c6603b7465bcb9d9f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:42:46.964181 env[1308]: time="2025-09-13T00:42:46.961772588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qk4gj,Uid:5588c8ea-0741-4042-a01c-31bd7cf40b6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\""
Sep 13 00:42:46.964181 env[1308]: time="2025-09-13T00:42:46.964010039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hfj8c,Uid:cd1ffa74-38c8-44cb-b1a6-7630de316962,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d\""
Sep 13 00:42:46.964347 kubelet[2093]: E0913 00:42:46.964144 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:46.964901 kubelet[2093]: E0913 00:42:46.964878 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:46.965703 env[1308]: time="2025-09-13T00:42:46.965631367Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:42:46.982790 env[1308]: time="2025-09-13T00:42:46.982713848Z" level=info msg="CreateContainer within sandbox \"493b27ba4eb7ebb67c1c5ad4c477faf87e5c31a03a49c6c6603b7465bcb9d9f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"771f324a7ef332b84487c4da3aa391bc3058629b54bcbc7fd323f6efef29bcb8\""
Sep 13 00:42:46.984002 env[1308]: time="2025-09-13T00:42:46.983970408Z" level=info msg="StartContainer for \"771f324a7ef332b84487c4da3aa391bc3058629b54bcbc7fd323f6efef29bcb8\""
Sep 13 00:42:47.029279 env[1308]: time="2025-09-13T00:42:47.029222926Z" level=info msg="StartContainer for \"771f324a7ef332b84487c4da3aa391bc3058629b54bcbc7fd323f6efef29bcb8\" returns successfully"
Sep 13 00:42:47.724834 kubelet[2093]: E0913 00:42:47.724703 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:47.734434 kubelet[2093]: I0913 00:42:47.734331 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-whvk8" podStartSLOduration=2.734309655 podStartE2EDuration="2.734309655s" podCreationTimestamp="2025-09-13 00:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:42:47.733890976 +0000 UTC m=+9.123172707" watchObservedRunningTime="2025-09-13 00:42:47.734309655 +0000 UTC m=+9.123591386"
Sep 13 00:42:48.461896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155211002.mount: Deactivated successfully.
Sep 13 00:42:49.353173 kubelet[2093]: E0913 00:42:49.353131 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:49.436989 kubelet[2093]: E0913 00:42:49.436916 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:49.464805 env[1308]: time="2025-09-13T00:42:49.464731343Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:49.467388 env[1308]: time="2025-09-13T00:42:49.467312506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:49.469311 env[1308]: time="2025-09-13T00:42:49.469154156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:49.469739 env[1308]: time="2025-09-13T00:42:49.469690427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:42:49.471189 env[1308]: time="2025-09-13T00:42:49.471145028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:42:49.472432 env[1308]: time="2025-09-13T00:42:49.472375056Z" level=info msg="CreateContainer within sandbox \"e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:42:49.489055 env[1308]: time="2025-09-13T00:42:49.488963743Z" level=info msg="CreateContainer within sandbox \"e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\""
Sep 13 00:42:49.489705 env[1308]: time="2025-09-13T00:42:49.489672358Z" level=info msg="StartContainer for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\""
Sep 13 00:42:49.727596 env[1308]: time="2025-09-13T00:42:49.727546753Z" level=info msg="StartContainer for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" returns successfully"
Sep 13 00:42:49.730654 kubelet[2093]: E0913 00:42:49.730491 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:50.732557 kubelet[2093]: E0913 00:42:50.732505 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:52.317871 kubelet[2093]: E0913 00:42:52.317500 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:42:52.328814 kubelet[2093]: I0913 00:42:52.328747 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hfj8c" podStartSLOduration=4.823050343 podStartE2EDuration="7.328725298s" podCreationTimestamp="2025-09-13 00:42:45 +0000 UTC" firstStartedPulling="2025-09-13 00:42:46.965204401 +0000 UTC m=+8.354486133" lastFinishedPulling="2025-09-13 00:42:49.470879357 +0000 UTC m=+10.860161088" observedRunningTime="2025-09-13 00:42:49.742693063 +0000 UTC m=+11.131974794" watchObservedRunningTime="2025-09-13 00:42:52.328725298 +0000 UTC m=+13.718007039"
Sep 13 00:42:55.344620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145045119.mount: Deactivated successfully.
Sep 13 00:42:59.798753 env[1308]: time="2025-09-13T00:42:59.798671346Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.800587 env[1308]: time="2025-09-13T00:42:59.800562621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.802186 env[1308]: time="2025-09-13T00:42:59.802158161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.802689 env[1308]: time="2025-09-13T00:42:59.802648924Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 00:42:59.804961 env[1308]: time="2025-09-13T00:42:59.804932126Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:42:59.817280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010844629.mount: Deactivated successfully.
Sep 13 00:42:59.835674 env[1308]: time="2025-09-13T00:42:59.835612340Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\""
Sep 13 00:42:59.836129 env[1308]: time="2025-09-13T00:42:59.836096780Z" level=info msg="StartContainer for \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\""
Sep 13 00:43:00.135117 env[1308]: time="2025-09-13T00:43:00.135045219Z" level=info msg="StartContainer for \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\" returns successfully"
Sep 13 00:43:00.244517 env[1308]: time="2025-09-13T00:43:00.244438871Z" level=info msg="shim disconnected" id=74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22
Sep 13 00:43:00.244517 env[1308]: time="2025-09-13T00:43:00.244505917Z" level=warning msg="cleaning up after shim disconnected" id=74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22 namespace=k8s.io
Sep 13 00:43:00.244517 env[1308]: time="2025-09-13T00:43:00.244519051Z" level=info msg="cleaning up dead shim"
Sep 13 00:43:00.250689 env[1308]: time="2025-09-13T00:43:00.250642531Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2556 runtime=io.containerd.runc.v2\n"
Sep 13 00:43:00.769941 kubelet[2093]: E0913 00:43:00.769903 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:00.771630 env[1308]: time="2025-09-13T00:43:00.771567854Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:43:00.811289 env[1308]: time="2025-09-13T00:43:00.811219011Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\""
Sep 13 00:43:00.813095 env[1308]: time="2025-09-13T00:43:00.812285365Z" level=info msg="StartContainer for \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\""
Sep 13 00:43:00.815975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22-rootfs.mount: Deactivated successfully.
Sep 13 00:43:00.869619 env[1308]: time="2025-09-13T00:43:00.869542613Z" level=info msg="StartContainer for \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\" returns successfully"
Sep 13 00:43:00.875986 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:43:00.876278 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:43:00.876588 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 00:43:00.878416 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:43:00.889046 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:43:00.901065 env[1308]: time="2025-09-13T00:43:00.901015357Z" level=info msg="shim disconnected" id=5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9
Sep 13 00:43:00.901065 env[1308]: time="2025-09-13T00:43:00.901067074Z" level=warning msg="cleaning up after shim disconnected" id=5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9 namespace=k8s.io
Sep 13 00:43:00.901065 env[1308]: time="2025-09-13T00:43:00.901075801Z" level=info msg="cleaning up dead shim"
Sep 13 00:43:00.908564 env[1308]: time="2025-09-13T00:43:00.908524791Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2621 runtime=io.containerd.runc.v2\n"
Sep 13 00:43:01.773524 kubelet[2093]: E0913 00:43:01.773478 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:01.775057 env[1308]: time="2025-09-13T00:43:01.774825035Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:43:01.815721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9-rootfs.mount: Deactivated successfully.
Sep 13 00:43:01.880830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219023685.mount: Deactivated successfully.
Sep 13 00:43:01.882804 env[1308]: time="2025-09-13T00:43:01.882749564Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\""
Sep 13 00:43:01.883407 env[1308]: time="2025-09-13T00:43:01.883349471Z" level=info msg="StartContainer for \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\""
Sep 13 00:43:01.939304 env[1308]: time="2025-09-13T00:43:01.939212272Z" level=info msg="StartContainer for \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\" returns successfully"
Sep 13 00:43:01.960236 env[1308]: time="2025-09-13T00:43:01.960168674Z" level=info msg="shim disconnected" id=7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f
Sep 13 00:43:01.960236 env[1308]: time="2025-09-13T00:43:01.960213820Z" level=warning msg="cleaning up after shim disconnected" id=7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f namespace=k8s.io
Sep 13 00:43:01.960236 env[1308]: time="2025-09-13T00:43:01.960222816Z" level=info msg="cleaning up dead shim"
Sep 13 00:43:01.967086 env[1308]: time="2025-09-13T00:43:01.967048243Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2678 runtime=io.containerd.runc.v2\n"
Sep 13 00:43:02.776574 kubelet[2093]: E0913 00:43:02.776539 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:02.778309 env[1308]: time="2025-09-13T00:43:02.778246262Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:43:02.801445 env[1308]: time="2025-09-13T00:43:02.801387374Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\""
Sep 13 00:43:02.801934 env[1308]: time="2025-09-13T00:43:02.801911960Z" level=info msg="StartContainer for \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\""
Sep 13 00:43:02.816586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f-rootfs.mount: Deactivated successfully.
Sep 13 00:43:02.848475 env[1308]: time="2025-09-13T00:43:02.848417103Z" level=info msg="StartContainer for \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\" returns successfully"
Sep 13 00:43:02.862703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5-rootfs.mount: Deactivated successfully.
Sep 13 00:43:02.868155 env[1308]: time="2025-09-13T00:43:02.868073908Z" level=info msg="shim disconnected" id=55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5
Sep 13 00:43:02.868155 env[1308]: time="2025-09-13T00:43:02.868144631Z" level=warning msg="cleaning up after shim disconnected" id=55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5 namespace=k8s.io
Sep 13 00:43:02.868155 env[1308]: time="2025-09-13T00:43:02.868161052Z" level=info msg="cleaning up dead shim"
Sep 13 00:43:02.876171 env[1308]: time="2025-09-13T00:43:02.876103876Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2733 runtime=io.containerd.runc.v2\n"
Sep 13 00:43:02.961524 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:56344.service.
Sep 13 00:43:02.996498 sshd[2746]: Accepted publickey for core from 10.0.0.1 port 56344 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:02.998329 sshd[2746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:03.002610 systemd-logind[1291]: New session 6 of user core.
Sep 13 00:43:03.003449 systemd[1]: Started session-6.scope.
Sep 13 00:43:03.239170 sshd[2746]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:03.241627 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:56344.service: Deactivated successfully.
Sep 13 00:43:03.242685 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:43:03.242760 systemd-logind[1291]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:43:03.243811 systemd-logind[1291]: Removed session 6.
Sep 13 00:43:03.781075 kubelet[2093]: E0913 00:43:03.781036 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:03.783010 env[1308]: time="2025-09-13T00:43:03.782939073Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:43:03.811653 env[1308]: time="2025-09-13T00:43:03.811592746Z" level=info msg="CreateContainer within sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\""
Sep 13 00:43:03.812075 env[1308]: time="2025-09-13T00:43:03.812046288Z" level=info msg="StartContainer for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\""
Sep 13 00:43:03.828602 systemd[1]: run-containerd-runc-k8s.io-d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae-runc.GNFlYd.mount: Deactivated successfully.
Sep 13 00:43:03.860305 env[1308]: time="2025-09-13T00:43:03.860242446Z" level=info msg="StartContainer for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" returns successfully"
Sep 13 00:43:04.062516 kubelet[2093]: I0913 00:43:04.062356 2093 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:43:04.280777 kubelet[2093]: I0913 00:43:04.280710 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8807828-bcf2-42de-88e8-f242b001de0d-config-volume\") pod \"coredns-7c65d6cfc9-x9vbz\" (UID: \"a8807828-bcf2-42de-88e8-f242b001de0d\") " pod="kube-system/coredns-7c65d6cfc9-x9vbz"
Sep 13 00:43:04.280777 kubelet[2093]: I0913 00:43:04.280793 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7ae9e3c-a82e-4ad8-9518-ed946906b49c-config-volume\") pod \"coredns-7c65d6cfc9-d42bc\" (UID: \"e7ae9e3c-a82e-4ad8-9518-ed946906b49c\") " pod="kube-system/coredns-7c65d6cfc9-d42bc"
Sep 13 00:43:04.281022 kubelet[2093]: I0913 00:43:04.280842 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vsx\" (UniqueName: \"kubernetes.io/projected/e7ae9e3c-a82e-4ad8-9518-ed946906b49c-kube-api-access-v4vsx\") pod \"coredns-7c65d6cfc9-d42bc\" (UID: \"e7ae9e3c-a82e-4ad8-9518-ed946906b49c\") " pod="kube-system/coredns-7c65d6cfc9-d42bc"
Sep 13 00:43:04.281022 kubelet[2093]: I0913 00:43:04.280868 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gg8\" (UniqueName: \"kubernetes.io/projected/a8807828-bcf2-42de-88e8-f242b001de0d-kube-api-access-j8gg8\") pod \"coredns-7c65d6cfc9-x9vbz\" (UID: \"a8807828-bcf2-42de-88e8-f242b001de0d\") " pod="kube-system/coredns-7c65d6cfc9-x9vbz"
Sep 13 00:43:04.431309 kubelet[2093]: E0913 00:43:04.431261 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:04.432099 env[1308]: time="2025-09-13T00:43:04.432060408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9vbz,Uid:a8807828-bcf2-42de-88e8-f242b001de0d,Namespace:kube-system,Attempt:0,}"
Sep 13 00:43:04.443090 kubelet[2093]: E0913 00:43:04.443052 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:04.443346 env[1308]: time="2025-09-13T00:43:04.443298047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d42bc,Uid:e7ae9e3c-a82e-4ad8-9518-ed946906b49c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:43:04.785860 kubelet[2093]: E0913 00:43:04.785671 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:04.801867 kubelet[2093]: I0913 00:43:04.801787 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qk4gj" podStartSLOduration=6.964059314 podStartE2EDuration="19.80176382s" podCreationTimestamp="2025-09-13 00:42:45 +0000 UTC" firstStartedPulling="2025-09-13 00:42:46.965979364 +0000 UTC m=+8.355261095" lastFinishedPulling="2025-09-13 00:42:59.803683869 +0000 UTC m=+21.192965601" observedRunningTime="2025-09-13 00:43:04.801531042 +0000 UTC m=+26.190812804" watchObservedRunningTime="2025-09-13 00:43:04.80176382 +0000 UTC m=+26.191045551"
Sep 13 00:43:04.819114 systemd[1]: run-containerd-runc-k8s.io-d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae-runc.XVvibY.mount: Deactivated successfully.
Sep 13 00:43:05.788089 kubelet[2093]: E0913 00:43:05.788037 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:05.869264 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 00:43:05.869431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 00:43:05.868454 systemd-networkd[1074]: cilium_host: Link UP
Sep 13 00:43:05.868618 systemd-networkd[1074]: cilium_net: Link UP
Sep 13 00:43:05.868799 systemd-networkd[1074]: cilium_net: Gained carrier
Sep 13 00:43:05.868989 systemd-networkd[1074]: cilium_host: Gained carrier
Sep 13 00:43:05.954067 systemd-networkd[1074]: cilium_vxlan: Link UP
Sep 13 00:43:05.954079 systemd-networkd[1074]: cilium_vxlan: Gained carrier
Sep 13 00:43:06.147403 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:43:06.522515 systemd-networkd[1074]: cilium_net: Gained IPv6LL
Sep 13 00:43:06.586774 systemd-networkd[1074]: cilium_host: Gained IPv6LL
Sep 13 00:43:06.728125 systemd-networkd[1074]: lxc_health: Link UP
Sep 13 00:43:06.741487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:43:06.740909 systemd-networkd[1074]: lxc_health: Gained carrier
Sep 13 00:43:06.789777 kubelet[2093]: E0913 00:43:06.789645 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:07.018721 systemd-networkd[1074]: lxcbe32d807c602: Link UP
Sep 13 00:43:07.019039 systemd-networkd[1074]: lxc8e5272e6cebf: Link UP
Sep 13 00:43:07.036507 kernel: eth0: renamed from tmpeb092
Sep 13 00:43:07.044447 kernel: eth0: renamed from tmpc1d63
Sep 13 00:43:07.054300 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 00:43:07.054436 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8e5272e6cebf: link becomes ready
Sep 13 00:43:07.054701 systemd-networkd[1074]: lxc8e5272e6cebf: Gained carrier
Sep 13 00:43:07.057761 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbe32d807c602: link becomes ready
Sep 13 00:43:07.057239 systemd-networkd[1074]: lxcbe32d807c602: Gained carrier
Sep 13 00:43:07.684716 systemd-networkd[1074]: cilium_vxlan: Gained IPv6LL
Sep 13 00:43:08.242279 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:56348.service.
Sep 13 00:43:08.250475 systemd-networkd[1074]: lxc_health: Gained IPv6LL
Sep 13 00:43:08.250757 systemd-networkd[1074]: lxcbe32d807c602: Gained IPv6LL
Sep 13 00:43:08.276320 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 56348 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:08.277919 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:08.281631 systemd-logind[1291]: New session 7 of user core.
Sep 13 00:43:08.282621 systemd[1]: Started session-7.scope.
Sep 13 00:43:08.408542 sshd[3302]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:08.410549 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:56348.service: Deactivated successfully.
Sep 13 00:43:08.411267 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:43:08.412273 systemd-logind[1291]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:43:08.412972 systemd-logind[1291]: Removed session 7.
Sep 13 00:43:08.466350 kubelet[2093]: E0913 00:43:08.465676 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:08.826594 systemd-networkd[1074]: lxc8e5272e6cebf: Gained IPv6LL
Sep 13 00:43:10.402101 env[1308]: time="2025-09-13T00:43:10.401984999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:10.402681 env[1308]: time="2025-09-13T00:43:10.402042356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:10.402681 env[1308]: time="2025-09-13T00:43:10.402658754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:10.403099 env[1308]: time="2025-09-13T00:43:10.403051441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb092827565276b44e9701c85d32b811193722f235de2ee42bfdbb1053e2b37d pid=3335 runtime=io.containerd.runc.v2
Sep 13 00:43:10.430468 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:43:10.451286 env[1308]: time="2025-09-13T00:43:10.451215103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:10.451450 env[1308]: time="2025-09-13T00:43:10.451291336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:10.451450 env[1308]: time="2025-09-13T00:43:10.451317545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:10.451607 env[1308]: time="2025-09-13T00:43:10.451569719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d632ef93dd94507b14dea62643ea7feeb515b442b2437ce1663874acabbe68 pid=3372 runtime=io.containerd.runc.v2
Sep 13 00:43:10.457603 env[1308]: time="2025-09-13T00:43:10.457548723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d42bc,Uid:e7ae9e3c-a82e-4ad8-9518-ed946906b49c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb092827565276b44e9701c85d32b811193722f235de2ee42bfdbb1053e2b37d\""
Sep 13 00:43:10.458468 kubelet[2093]: E0913 00:43:10.458426 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:10.459938 env[1308]: time="2025-09-13T00:43:10.459893696Z" level=info msg="CreateContainer within sandbox \"eb092827565276b44e9701c85d32b811193722f235de2ee42bfdbb1053e2b37d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:43:10.477637 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:43:10.501598 env[1308]: time="2025-09-13T00:43:10.501538539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9vbz,Uid:a8807828-bcf2-42de-88e8-f242b001de0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d632ef93dd94507b14dea62643ea7feeb515b442b2437ce1663874acabbe68\""
Sep 13 00:43:10.502380 kubelet[2093]: E0913 00:43:10.502337 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:10.506279 env[1308]: time="2025-09-13T00:43:10.506232702Z" level=info msg="CreateContainer within sandbox \"c1d632ef93dd94507b14dea62643ea7feeb515b442b2437ce1663874acabbe68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:43:10.754724 kubelet[2093]: I0913 00:43:10.754580 2093 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:43:10.755096 kubelet[2093]: E0913 00:43:10.755054 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:10.798980 kubelet[2093]: E0913 00:43:10.798948 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:11.150381 env[1308]: time="2025-09-13T00:43:11.150297729Z" level=info msg="CreateContainer within sandbox \"c1d632ef93dd94507b14dea62643ea7feeb515b442b2437ce1663874acabbe68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88b47dadc91a46f3b4440af75ecc63f9e87c6922fe442e91c0fb8a1d109e438c\""
Sep 13 00:43:11.150966 env[1308]: time="2025-09-13T00:43:11.150910749Z" level=info msg="StartContainer for \"88b47dadc91a46f3b4440af75ecc63f9e87c6922fe442e91c0fb8a1d109e438c\""
Sep 13 00:43:11.152452 env[1308]: time="2025-09-13T00:43:11.152421565Z" level=info msg="CreateContainer within sandbox \"eb092827565276b44e9701c85d32b811193722f235de2ee42bfdbb1053e2b37d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c0ff0104980f5223d3c247f7457080a73bfec19cb71d5afd6e003190516b6cf\""
Sep 13 00:43:11.152770 env[1308]: time="2025-09-13T00:43:11.152746224Z" level=info msg="StartContainer for \"9c0ff0104980f5223d3c247f7457080a73bfec19cb71d5afd6e003190516b6cf\""
Sep 13 00:43:11.408229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895277609.mount: Deactivated successfully.
Sep 13 00:43:11.418908 env[1308]: time="2025-09-13T00:43:11.418851508Z" level=info msg="StartContainer for \"9c0ff0104980f5223d3c247f7457080a73bfec19cb71d5afd6e003190516b6cf\" returns successfully"
Sep 13 00:43:11.544861 env[1308]: time="2025-09-13T00:43:11.544791937Z" level=info msg="StartContainer for \"88b47dadc91a46f3b4440af75ecc63f9e87c6922fe442e91c0fb8a1d109e438c\" returns successfully"
Sep 13 00:43:11.802827 kubelet[2093]: E0913 00:43:11.802726 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:11.805674 kubelet[2093]: E0913 00:43:11.805652 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:11.874422 kubelet[2093]: I0913 00:43:11.874321 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x9vbz" podStartSLOduration=26.874301604 podStartE2EDuration="26.874301604s" podCreationTimestamp="2025-09-13 00:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:11.873519586 +0000 UTC m=+33.262801327" watchObservedRunningTime="2025-09-13 00:43:11.874301604 +0000 UTC m=+33.263583335"
Sep 13 00:43:11.889826 kubelet[2093]: I0913 00:43:11.889763 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d42bc" podStartSLOduration=26.889734615 podStartE2EDuration="26.889734615s" podCreationTimestamp="2025-09-13 00:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:11.889260355 +0000 UTC m=+33.278542086" watchObservedRunningTime="2025-09-13 00:43:11.889734615 +0000 UTC m=+33.279016346"
Sep 13 00:43:12.807400 kubelet[2093]: E0913 00:43:12.807344 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:12.807929 kubelet[2093]: E0913 00:43:12.807344 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:13.412712 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:45594.service.
Sep 13 00:43:13.446545 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 45594 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:13.447750 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:13.451277 systemd-logind[1291]: New session 8 of user core.
Sep 13 00:43:13.452205 systemd[1]: Started session-8.scope.
Sep 13 00:43:13.588928 sshd[3498]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:13.591707 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:45594.service: Deactivated successfully.
Sep 13 00:43:13.592700 systemd-logind[1291]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:43:13.592735 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:43:13.593545 systemd-logind[1291]: Removed session 8.
Sep 13 00:43:13.811752 kubelet[2093]: E0913 00:43:13.811342 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:13.812107 kubelet[2093]: E0913 00:43:13.811912 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:14.812816 kubelet[2093]: E0913 00:43:14.812786 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:18.593681 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:45600.service.
Sep 13 00:43:18.627299 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 45600 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:18.628779 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:18.633166 systemd-logind[1291]: New session 9 of user core.
Sep 13 00:43:18.634128 systemd[1]: Started session-9.scope.
Sep 13 00:43:18.756732 sshd[3515]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:18.759116 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:45600.service: Deactivated successfully.
Sep 13 00:43:18.760118 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:43:18.761137 systemd-logind[1291]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:43:18.761898 systemd-logind[1291]: Removed session 9.
Sep 13 00:43:23.760106 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:33520.service.
Sep 13 00:43:23.792611 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 33520 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:23.794353 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:23.798244 systemd-logind[1291]: New session 10 of user core.
Sep 13 00:43:23.799108 systemd[1]: Started session-10.scope.
Sep 13 00:43:23.925039 sshd[3531]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:23.928050 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:33520.service: Deactivated successfully.
Sep 13 00:43:23.929317 systemd-logind[1291]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:43:23.929398 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:43:23.930236 systemd-logind[1291]: Removed session 10.
Sep 13 00:43:28.928061 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:33526.service.
Sep 13 00:43:28.968243 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 33526 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:28.969890 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:28.974477 systemd-logind[1291]: New session 11 of user core.
Sep 13 00:43:28.975232 systemd[1]: Started session-11.scope.
Sep 13 00:43:29.083699 sshd[3547]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:29.086326 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:33534.service.
Sep 13 00:43:29.086858 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:33526.service: Deactivated successfully.
Sep 13 00:43:29.087935 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:43:29.088012 systemd-logind[1291]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:43:29.088753 systemd-logind[1291]: Removed session 11.
Sep 13 00:43:29.117288 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 33534 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:29.118604 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:29.121940 systemd-logind[1291]: New session 12 of user core.
Sep 13 00:43:29.122738 systemd[1]: Started session-12.scope.
Sep 13 00:43:29.302138 sshd[3560]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:29.306275 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:33546.service.
Sep 13 00:43:29.307041 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:33534.service: Deactivated successfully.
Sep 13 00:43:29.309095 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:43:29.310248 systemd-logind[1291]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:43:29.313730 systemd-logind[1291]: Removed session 12.
Sep 13 00:43:29.341205 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 33546 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:29.342703 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:29.346735 systemd-logind[1291]: New session 13 of user core.
Sep 13 00:43:29.347940 systemd[1]: Started session-13.scope.
Sep 13 00:43:29.462937 sshd[3574]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:29.465184 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:33546.service: Deactivated successfully.
Sep 13 00:43:29.466381 systemd-logind[1291]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:43:29.466439 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:43:29.467263 systemd-logind[1291]: Removed session 13.
Sep 13 00:43:34.466254 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:51458.service.
Sep 13 00:43:34.495886 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 51458 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:34.497398 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:34.501099 systemd-logind[1291]: New session 14 of user core.
Sep 13 00:43:34.501860 systemd[1]: Started session-14.scope.
Sep 13 00:43:34.612126 sshd[3589]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:34.614654 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:51458.service: Deactivated successfully.
Sep 13 00:43:34.615576 systemd-logind[1291]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:43:34.615625 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:43:34.616595 systemd-logind[1291]: Removed session 14.
Sep 13 00:43:39.616775 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:51460.service.
Sep 13 00:43:39.648711 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:39.650031 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:39.654122 systemd-logind[1291]: New session 15 of user core.
Sep 13 00:43:39.655013 systemd[1]: Started session-15.scope.
Sep 13 00:43:39.764849 sshd[3605]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:39.767575 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:51460.service: Deactivated successfully.
Sep 13 00:43:39.768804 systemd-logind[1291]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:43:39.768834 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:43:39.769868 systemd-logind[1291]: Removed session 15.
Sep 13 00:43:44.768794 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:43324.service.
Sep 13 00:43:44.798198 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:44.799191 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:44.802659 systemd-logind[1291]: New session 16 of user core.
Sep 13 00:43:44.803660 systemd[1]: Started session-16.scope.
Sep 13 00:43:44.908739 sshd[3619]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:44.911324 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:43324.service: Deactivated successfully.
Sep 13 00:43:44.912122 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:43:44.912919 systemd-logind[1291]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:43:44.913991 systemd-logind[1291]: Removed session 16.
Sep 13 00:43:44.917898 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:43338.service.
Sep 13 00:43:44.949301 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 43338 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:44.950549 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:44.954465 systemd-logind[1291]: New session 17 of user core.
Sep 13 00:43:44.955286 systemd[1]: Started session-17.scope.
Sep 13 00:43:45.898909 sshd[3633]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:45.901423 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:43344.service.
Sep 13 00:43:45.903088 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:43338.service: Deactivated successfully.
Sep 13 00:43:45.904268 systemd-logind[1291]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:43:45.904348 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:43:45.905211 systemd-logind[1291]: Removed session 17.
Sep 13 00:43:45.944054 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 43344 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:45.945409 sshd[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:45.949128 systemd-logind[1291]: New session 18 of user core.
Sep 13 00:43:45.949962 systemd[1]: Started session-18.scope.
Sep 13 00:43:48.858976 sshd[3644]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:48.862707 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:43352.service.
Sep 13 00:43:48.863388 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:43344.service: Deactivated successfully.
Sep 13 00:43:48.864617 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:43:48.864682 systemd-logind[1291]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:43:48.866042 systemd-logind[1291]: Removed session 18.
Sep 13 00:43:48.895777 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 43352 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:48.897301 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:48.901759 systemd-logind[1291]: New session 19 of user core.
Sep 13 00:43:48.902736 systemd[1]: Started session-19.scope.
Sep 13 00:43:49.633022 sshd[3689]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:49.636636 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:43358.service.
Sep 13 00:43:49.637306 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:43352.service: Deactivated successfully.
Sep 13 00:43:49.638451 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:43:49.638919 systemd-logind[1291]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:43:49.639855 systemd-logind[1291]: Removed session 19.
Sep 13 00:43:49.666010 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 43358 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:49.667336 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:49.670860 systemd-logind[1291]: New session 20 of user core.
Sep 13 00:43:49.671669 systemd[1]: Started session-20.scope.
Sep 13 00:43:49.837759 sshd[3700]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:49.840373 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:43358.service: Deactivated successfully.
Sep 13 00:43:49.841353 systemd-logind[1291]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:43:49.841399 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:43:49.842237 systemd-logind[1291]: Removed session 20.
Sep 13 00:43:54.840689 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:46822.service.
Sep 13 00:43:54.871929 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 46822 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:43:54.873230 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:43:54.877132 systemd-logind[1291]: New session 21 of user core.
Sep 13 00:43:54.878237 systemd[1]: Started session-21.scope.
Sep 13 00:43:54.980212 sshd[3717]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:54.983332 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:46822.service: Deactivated successfully.
Sep 13 00:43:54.984230 systemd-logind[1291]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:43:54.984254 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:43:54.985048 systemd-logind[1291]: Removed session 21.
Sep 13 00:43:55.870983 update_engine[1295]: I0913 00:43:55.870913 1295 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 13 00:43:55.870983 update_engine[1295]: I0913 00:43:55.870994 1295 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 13 00:43:55.871946 update_engine[1295]: I0913 00:43:55.871886 1295 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 13 00:43:55.872338 update_engine[1295]: I0913 00:43:55.872298 1295 omaha_request_params.cc:62] Current group set to lts
Sep 13 00:43:55.873095 update_engine[1295]: I0913 00:43:55.873064 1295 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 13 00:43:55.873095 update_engine[1295]: I0913 00:43:55.873078 1295 update_attempter.cc:643] Scheduling an action processor start.
Sep 13 00:43:55.873223 update_engine[1295]: I0913 00:43:55.873099 1295 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:43:55.873223 update_engine[1295]: I0913 00:43:55.873144 1295 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:43:55.873296 update_engine[1295]: I0913 00:43:55.873226 1295 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 13 00:43:55.873296 update_engine[1295]: I0913 00:43:55.873234 1295 omaha_request_action.cc:271] Request:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]:
Sep 13 00:43:55.873296 update_engine[1295]: I0913 00:43:55.873238 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:43:55.874544 locksmithd[1342]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:43:55.875921 update_engine[1295]: I0913 00:43:55.875872 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:43:55.876106 update_engine[1295]: I0913 00:43:55.876087 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:43:55.885098 update_engine[1295]: E0913 00:43:55.885061 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:43:55.885160 update_engine[1295]: I0913 00:43:55.885147 1295 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:43:56.697942 kubelet[2093]: E0913 00:43:56.697889 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:59.984504 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:42984.service.
Sep 13 00:44:00.013779 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 42984 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:44:00.014922 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:00.019237 systemd-logind[1291]: New session 22 of user core.
Sep 13 00:44:00.019949 systemd[1]: Started session-22.scope.
Sep 13 00:44:00.150614 sshd[3735]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:00.153197 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:42984.service: Deactivated successfully.
Sep 13 00:44:00.154243 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:44:00.154247 systemd-logind[1291]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:44:00.155246 systemd-logind[1291]: Removed session 22.
Sep 13 00:44:05.153770 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:42998.service.
Sep 13 00:44:05.186027 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 42998 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:44:05.187575 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:05.191591 systemd-logind[1291]: New session 23 of user core.
Sep 13 00:44:05.192488 systemd[1]: Started session-23.scope.
Sep 13 00:44:05.299568 sshd[3751]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:05.302186 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:42998.service: Deactivated successfully.
Sep 13 00:44:05.303043 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:44:05.303837 systemd-logind[1291]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:44:05.304571 systemd-logind[1291]: Removed session 23.
Sep 13 00:44:05.870251 update_engine[1295]: I0913 00:44:05.870162 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:44:05.870738 update_engine[1295]: I0913 00:44:05.870468 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:44:05.870738 update_engine[1295]: I0913 00:44:05.870656 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:44:05.880494 update_engine[1295]: E0913 00:44:05.880431 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:44:05.880630 update_engine[1295]: I0913 00:44:05.880556 1295 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 13 00:44:10.303863 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:56620.service.
Sep 13 00:44:10.339648 sshd[3765]: Accepted publickey for core from 10.0.0.1 port 56620 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:44:10.341634 sshd[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:10.346648 systemd-logind[1291]: New session 24 of user core.
Sep 13 00:44:10.347684 systemd[1]: Started session-24.scope.
Sep 13 00:44:10.450428 sshd[3765]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:10.453191 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:56630.service.
Sep 13 00:44:10.454598 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:56620.service: Deactivated successfully.
Sep 13 00:44:10.455766 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:44:10.456373 systemd-logind[1291]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:44:10.457237 systemd-logind[1291]: Removed session 24.
Sep 13 00:44:10.488143 sshd[3779]: Accepted publickey for core from 10.0.0.1 port 56630 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:44:10.489513 sshd[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:10.493584 systemd-logind[1291]: New session 25 of user core.
Sep 13 00:44:10.494333 systemd[1]: Started session-25.scope.
Sep 13 00:44:11.697412 kubelet[2093]: E0913 00:44:11.697349 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:11.697946 kubelet[2093]: E0913 00:44:11.697393 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:12.264118 env[1308]: time="2025-09-13T00:44:12.264056307Z" level=info msg="StopContainer for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" with timeout 30 (s)"
Sep 13 00:44:12.264892 env[1308]: time="2025-09-13T00:44:12.264789876Z" level=info msg="Stop container \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" with signal terminated"
Sep 13 00:44:12.274519 systemd[1]: run-containerd-runc-k8s.io-d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae-runc.5CLokM.mount: Deactivated successfully.
Sep 13 00:44:12.292702 env[1308]: time="2025-09-13T00:44:12.292622278Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:44:12.295477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5-rootfs.mount: Deactivated successfully.
Sep 13 00:44:12.299347 env[1308]: time="2025-09-13T00:44:12.299305377Z" level=info msg="StopContainer for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" with timeout 2 (s)"
Sep 13 00:44:12.299635 env[1308]: time="2025-09-13T00:44:12.299602108Z" level=info msg="Stop container \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" with signal terminated"
Sep 13 00:44:12.306689 systemd-networkd[1074]: lxc_health: Link DOWN
Sep 13 00:44:12.306699 systemd-networkd[1074]: lxc_health: Lost carrier
Sep 13 00:44:12.337149 env[1308]: time="2025-09-13T00:44:12.337091167Z" level=info msg="shim disconnected" id=9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5
Sep 13 00:44:12.337149 env[1308]: time="2025-09-13T00:44:12.337149588Z" level=warning msg="cleaning up after shim disconnected" id=9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5 namespace=k8s.io
Sep 13 00:44:12.337443 env[1308]: time="2025-09-13T00:44:12.337162181Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:12.344753 env[1308]: time="2025-09-13T00:44:12.344701150Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3840 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:12.348631 env[1308]: time="2025-09-13T00:44:12.348594287Z" level=info msg="StopContainer for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" returns successfully"
Sep 13 00:44:12.349590 env[1308]: time="2025-09-13T00:44:12.349544856Z" level=info msg="StopPodSandbox for \"e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d\""
Sep 13 00:44:12.349677 env[1308]: time="2025-09-13T00:44:12.349632602Z" level=info msg="Container to stop \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:44:12.351957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d-shm.mount: Deactivated successfully.
Sep 13 00:44:12.364169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae-rootfs.mount: Deactivated successfully.
Sep 13 00:44:12.372891 env[1308]: time="2025-09-13T00:44:12.372838139Z" level=info msg="shim disconnected" id=d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae
Sep 13 00:44:12.372891 env[1308]: time="2025-09-13T00:44:12.372888064Z" level=warning msg="cleaning up after shim disconnected" id=d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae namespace=k8s.io
Sep 13 00:44:12.372891 env[1308]: time="2025-09-13T00:44:12.372896941Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:12.383916 env[1308]: time="2025-09-13T00:44:12.383838828Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3880 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:12.385337 env[1308]: time="2025-09-13T00:44:12.385282972Z" level=info msg="shim disconnected" id=e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d
Sep 13 00:44:12.385439 env[1308]: time="2025-09-13T00:44:12.385339469Z" level=warning msg="cleaning up after shim disconnected" id=e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d namespace=k8s.io
Sep 13 00:44:12.385439 env[1308]: time="2025-09-13T00:44:12.385352153Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:12.386691 env[1308]: time="2025-09-13T00:44:12.386652784Z" level=info msg="StopContainer for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" returns successfully"
Sep 13 00:44:12.387432 env[1308]: time="2025-09-13T00:44:12.387392705Z" level=info msg="StopPodSandbox for \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\""
Sep 13 00:44:12.387502 env[1308]: time="2025-09-13T00:44:12.387485259Z" level=info msg="Container to stop \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:44:12.387535 env[1308]: time="2025-09-13T00:44:12.387510598Z" level=info msg="Container to stop \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:44:12.387535 env[1308]: time="2025-09-13T00:44:12.387526878Z" level=info msg="Container to stop \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:44:12.387595 env[1308]: time="2025-09-13T00:44:12.387543459Z" level=info msg="Container to stop \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:44:12.387595 env[1308]: time="2025-09-13T00:44:12.387557636Z" level=info msg="Container to stop \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:44:12.393135 env[1308]: time="2025-09-13T00:44:12.393088855Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3899 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:12.394456 env[1308]: time="2025-09-13T00:44:12.394413982Z" level=info msg="TearDown network for sandbox \"e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d\" successfully"
Sep 13 00:44:12.394531 env[1308]: time="2025-09-13T00:44:12.394450442Z" level=info msg="StopPodSandbox for \"e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d\" returns successfully"
Sep 13 00:44:12.423209 env[1308]: time="2025-09-13T00:44:12.423137532Z" level=info msg="shim disconnected" id=d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba
Sep 13 00:44:12.423209 env[1308]: time="2025-09-13T00:44:12.423189090Z" level=warning msg="cleaning up after shim disconnected" id=d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba namespace=k8s.io
Sep 13 00:44:12.423209 env[1308]: time="2025-09-13T00:44:12.423199700Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:12.432161 env[1308]: time="2025-09-13T00:44:12.432113400Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3932 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:12.432762 env[1308]: time="2025-09-13T00:44:12.432718595Z" level=info msg="TearDown network for sandbox \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" successfully"
Sep 13 00:44:12.432762 env[1308]: time="2025-09-13T00:44:12.432748201Z" level=info msg="StopPodSandbox for \"d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba\" returns successfully"
Sep 13 00:44:12.481076 kubelet[2093]: I0913 00:44:12.481012 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5588c8ea-0741-4042-a01c-31bd7cf40b6c-clustermesh-secrets\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481076 kubelet[2093]: I0913 00:44:12.481060 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-kernel\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481076 kubelet[2093]: I0913 00:44:12.481080 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-net\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481378 kubelet[2093]: I0913 00:44:12.481096 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hsb9\" (UniqueName: \"kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-kube-api-access-6hsb9\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481378 kubelet[2093]: I0913 00:44:12.481113 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hostproc\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481378 kubelet[2093]: I0913 00:44:12.481127 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-run\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481378 kubelet[2093]: I0913 00:44:12.481139 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-cgroup\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481378 kubelet[2093]: I0913 00:44:12.481151 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-lib-modules\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481378 kubelet[2093]: I0913 00:44:12.481180 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-bpf-maps\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481531 kubelet[2093]: I0913 00:44:12.481193 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-etc-cni-netd\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481531 kubelet[2093]: I0913 00:44:12.481214 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-config-path\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481531 kubelet[2093]: I0913 00:44:12.481238 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hubble-tls\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") "
Sep 13 00:44:12.481531 kubelet[2093]: I0913 00:44:12.481255 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd1ffa74-38c8-44cb-b1a6-7630de316962-cilium-config-path\") pod \"cd1ffa74-38c8-44cb-b1a6-7630de316962\" (UID: \"cd1ffa74-38c8-44cb-b1a6-7630de316962\") "
Sep 13 00:44:12.482617 kubelet[2093]: I0913 00:44:12.481196 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:44:12.482617 kubelet[2093]: I0913 00:44:12.481261 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:44:12.482617 kubelet[2093]: I0913 00:44:12.481281 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:44:12.482617 kubelet[2093]: I0913 00:44:12.481785 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.482617 kubelet[2093]: I0913 00:44:12.481819 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.482814 kubelet[2093]: I0913 00:44:12.481840 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.482814 kubelet[2093]: I0913 00:44:12.481860 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hostproc" (OuterVolumeSpecName: "hostproc") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.482814 kubelet[2093]: I0913 00:44:12.481876 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.484083 kubelet[2093]: I0913 00:44:12.484022 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5588c8ea-0741-4042-a01c-31bd7cf40b6c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:44:12.484513 kubelet[2093]: I0913 00:44:12.484459 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:44:12.485108 kubelet[2093]: I0913 00:44:12.485079 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1ffa74-38c8-44cb-b1a6-7630de316962-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd1ffa74-38c8-44cb-b1a6-7630de316962" (UID: "cd1ffa74-38c8-44cb-b1a6-7630de316962"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:44:12.485812 kubelet[2093]: I0913 00:44:12.485781 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-kube-api-access-6hsb9" (OuterVolumeSpecName: "kube-api-access-6hsb9") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "kube-api-access-6hsb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:44:12.485876 kubelet[2093]: I0913 00:44:12.485844 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:44:12.582346 kubelet[2093]: I0913 00:44:12.582176 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-xtables-lock\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " Sep 13 00:44:12.582346 kubelet[2093]: I0913 00:44:12.582256 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.582346 kubelet[2093]: I0913 00:44:12.582274 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlztw\" (UniqueName: \"kubernetes.io/projected/cd1ffa74-38c8-44cb-b1a6-7630de316962-kube-api-access-mlztw\") pod \"cd1ffa74-38c8-44cb-b1a6-7630de316962\" (UID: \"cd1ffa74-38c8-44cb-b1a6-7630de316962\") " Sep 13 00:44:12.582346 kubelet[2093]: I0913 00:44:12.582333 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cni-path\") pod \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\" (UID: \"5588c8ea-0741-4042-a01c-31bd7cf40b6c\") " Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582404 2093 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582417 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582424 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582431 2093 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582441 2093 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582450 2093 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hsb9\" (UniqueName: \"kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-kube-api-access-6hsb9\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582457 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582664 kubelet[2093]: I0913 00:44:12.582465 2093 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5588c8ea-0741-4042-a01c-31bd7cf40b6c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582474 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd1ffa74-38c8-44cb-b1a6-7630de316962-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582484 2093 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582491 2093 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582497 2093 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-xtables-lock\") on node 
\"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582504 2093 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5588c8ea-0741-4042-a01c-31bd7cf40b6c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582512 2093 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.582923 kubelet[2093]: I0913 00:44:12.582528 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cni-path" (OuterVolumeSpecName: "cni-path") pod "5588c8ea-0741-4042-a01c-31bd7cf40b6c" (UID: "5588c8ea-0741-4042-a01c-31bd7cf40b6c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:12.586292 kubelet[2093]: I0913 00:44:12.586214 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1ffa74-38c8-44cb-b1a6-7630de316962-kube-api-access-mlztw" (OuterVolumeSpecName: "kube-api-access-mlztw") pod "cd1ffa74-38c8-44cb-b1a6-7630de316962" (UID: "cd1ffa74-38c8-44cb-b1a6-7630de316962"). InnerVolumeSpecName "kube-api-access-mlztw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:44:12.682757 kubelet[2093]: I0913 00:44:12.682698 2093 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5588c8ea-0741-4042-a01c-31bd7cf40b6c-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.682757 kubelet[2093]: I0913 00:44:12.682738 2093 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlztw\" (UniqueName: \"kubernetes.io/projected/cd1ffa74-38c8-44cb-b1a6-7630de316962-kube-api-access-mlztw\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:12.932731 kubelet[2093]: I0913 00:44:12.932690 2093 scope.go:117] "RemoveContainer" containerID="d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae" Sep 13 00:44:12.934174 env[1308]: time="2025-09-13T00:44:12.934131252Z" level=info msg="RemoveContainer for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\"" Sep 13 00:44:13.048096 env[1308]: time="2025-09-13T00:44:13.048029662Z" level=info msg="RemoveContainer for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" returns successfully" Sep 13 00:44:13.048479 kubelet[2093]: I0913 00:44:13.048445 2093 scope.go:117] "RemoveContainer" containerID="55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5" Sep 13 00:44:13.049749 env[1308]: time="2025-09-13T00:44:13.049711053Z" level=info msg="RemoveContainer for \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\"" Sep 13 00:44:13.261213 env[1308]: time="2025-09-13T00:44:13.261078336Z" level=info msg="RemoveContainer for \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\" returns successfully" Sep 13 00:44:13.261461 kubelet[2093]: I0913 00:44:13.261417 2093 scope.go:117] "RemoveContainer" containerID="7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f" Sep 13 00:44:13.262652 env[1308]: time="2025-09-13T00:44:13.262613942Z" level=info msg="RemoveContainer for 
\"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\"" Sep 13 00:44:13.269701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba-rootfs.mount: Deactivated successfully. Sep 13 00:44:13.269876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9fc54b52393f84d8cbc72deb0b99e9cc2760dfcdfc715e10706ccd10a4bd70d-rootfs.mount: Deactivated successfully. Sep 13 00:44:13.269990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d999535c1a4bbf0164ffb125c873fda2b6bbe94ce6645d0c673f25153ca93aba-shm.mount: Deactivated successfully. Sep 13 00:44:13.270089 systemd[1]: var-lib-kubelet-pods-cd1ffa74\x2d38c8\x2d44cb\x2db1a6\x2d7630de316962-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlztw.mount: Deactivated successfully. Sep 13 00:44:13.270192 systemd[1]: var-lib-kubelet-pods-5588c8ea\x2d0741\x2d4042\x2da01c\x2d31bd7cf40b6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hsb9.mount: Deactivated successfully. Sep 13 00:44:13.270305 systemd[1]: var-lib-kubelet-pods-5588c8ea\x2d0741\x2d4042\x2da01c\x2d31bd7cf40b6c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:44:13.270429 systemd[1]: var-lib-kubelet-pods-5588c8ea\x2d0741\x2d4042\x2da01c\x2d31bd7cf40b6c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:44:13.358379 env[1308]: time="2025-09-13T00:44:13.358294106Z" level=info msg="RemoveContainer for \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\" returns successfully" Sep 13 00:44:13.358793 kubelet[2093]: I0913 00:44:13.358628 2093 scope.go:117] "RemoveContainer" containerID="5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9" Sep 13 00:44:13.360119 env[1308]: time="2025-09-13T00:44:13.360088450Z" level=info msg="RemoveContainer for \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\"" Sep 13 00:44:13.368043 env[1308]: time="2025-09-13T00:44:13.368000612Z" level=info msg="RemoveContainer for \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\" returns successfully" Sep 13 00:44:13.368261 kubelet[2093]: I0913 00:44:13.368182 2093 scope.go:117] "RemoveContainer" containerID="74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22" Sep 13 00:44:13.369264 env[1308]: time="2025-09-13T00:44:13.369233315Z" level=info msg="RemoveContainer for \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\"" Sep 13 00:44:13.372554 env[1308]: time="2025-09-13T00:44:13.372504223Z" level=info msg="RemoveContainer for \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\" returns successfully" Sep 13 00:44:13.372733 kubelet[2093]: I0913 00:44:13.372654 2093 scope.go:117] "RemoveContainer" containerID="d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae" Sep 13 00:44:13.372966 env[1308]: time="2025-09-13T00:44:13.372825572Z" level=error msg="ContainerStatus for \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\": not found" Sep 13 00:44:13.373061 kubelet[2093]: E0913 00:44:13.373022 2093 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\": not found" containerID="d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae" Sep 13 00:44:13.373155 kubelet[2093]: I0913 00:44:13.373056 2093 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae"} err="failed to get container status \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\": rpc error: code = NotFound desc = an error occurred when try to find container \"d56fc7dbf79a1bdd71fe0ed8bff9d09700feda90f6c2fc578b80d1f4c9282cae\": not found" Sep 13 00:44:13.373155 kubelet[2093]: I0913 00:44:13.373151 2093 scope.go:117] "RemoveContainer" containerID="55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5" Sep 13 00:44:13.373327 env[1308]: time="2025-09-13T00:44:13.373284509Z" level=error msg="ContainerStatus for \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\": not found" Sep 13 00:44:13.373437 kubelet[2093]: E0913 00:44:13.373404 2093 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\": not found" containerID="55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5" Sep 13 00:44:13.373437 kubelet[2093]: I0913 00:44:13.373422 2093 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5"} err="failed to get container status \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\": rpc error: code = NotFound desc = an error occurred when try to 
find container \"55c6975190f728841ec7627552bc84c6e5c3e19c9de571a226a5bdd6a5b1b6f5\": not found" Sep 13 00:44:13.373437 kubelet[2093]: I0913 00:44:13.373434 2093 scope.go:117] "RemoveContainer" containerID="7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f" Sep 13 00:44:13.373689 env[1308]: time="2025-09-13T00:44:13.373619023Z" level=error msg="ContainerStatus for \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\": not found" Sep 13 00:44:13.373869 kubelet[2093]: E0913 00:44:13.373850 2093 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\": not found" containerID="7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f" Sep 13 00:44:13.373948 kubelet[2093]: I0913 00:44:13.373869 2093 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f"} err="failed to get container status \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dfae96d391184ad56ccf2caa6910e2e18127ac7a6f7ae580ef79dd6d9aa404f\": not found" Sep 13 00:44:13.373948 kubelet[2093]: I0913 00:44:13.373881 2093 scope.go:117] "RemoveContainer" containerID="5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9" Sep 13 00:44:13.374379 env[1308]: time="2025-09-13T00:44:13.374275074Z" level=error msg="ContainerStatus for \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\": not found" Sep 13 00:44:13.374549 kubelet[2093]: E0913 00:44:13.374525 2093 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\": not found" containerID="5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9" Sep 13 00:44:13.374549 kubelet[2093]: I0913 00:44:13.374547 2093 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9"} err="failed to get container status \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cdbc1cf6c6af417bfaec0eb3ec4a8ef70dcb85e0feb68bb37e9c11a47bd4fa9\": not found" Sep 13 00:44:13.374654 kubelet[2093]: I0913 00:44:13.374559 2093 scope.go:117] "RemoveContainer" containerID="74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22" Sep 13 00:44:13.374803 env[1308]: time="2025-09-13T00:44:13.374745234Z" level=error msg="ContainerStatus for \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\": not found" Sep 13 00:44:13.375006 kubelet[2093]: E0913 00:44:13.374963 2093 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\": not found" containerID="74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22" Sep 13 00:44:13.375063 kubelet[2093]: I0913 00:44:13.375022 2093 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22"} err="failed to get container status \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\": rpc error: code = NotFound desc = an error occurred when try to find container \"74441a9a473234e526ac63e5f15f659bba969687bab699ae6cab7726dce66d22\": not found" Sep 13 00:44:13.375097 kubelet[2093]: I0913 00:44:13.375067 2093 scope.go:117] "RemoveContainer" containerID="9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5" Sep 13 00:44:13.376178 env[1308]: time="2025-09-13T00:44:13.376142017Z" level=info msg="RemoveContainer for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\"" Sep 13 00:44:13.381068 env[1308]: time="2025-09-13T00:44:13.381032431Z" level=info msg="RemoveContainer for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" returns successfully" Sep 13 00:44:13.381265 kubelet[2093]: I0913 00:44:13.381210 2093 scope.go:117] "RemoveContainer" containerID="9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5" Sep 13 00:44:13.381577 env[1308]: time="2025-09-13T00:44:13.381493954Z" level=error msg="ContainerStatus for \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\": not found" Sep 13 00:44:13.381770 kubelet[2093]: E0913 00:44:13.381743 2093 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\": not found" containerID="9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5" Sep 13 00:44:13.381860 kubelet[2093]: I0913 00:44:13.381769 2093 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5"} err="failed to get container status \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9029f373361dce7bc9c01e3901fb3ecac5eac224061d26a997d24ae54b09b5d5\": not found" Sep 13 00:44:13.762911 kubelet[2093]: E0913 00:44:13.762856 2093 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:44:14.207697 sshd[3779]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:14.210793 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:56634.service. Sep 13 00:44:14.211457 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:56630.service: Deactivated successfully. Sep 13 00:44:14.212604 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:44:14.213430 systemd-logind[1291]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:44:14.214184 systemd-logind[1291]: Removed session 25. Sep 13 00:44:14.243237 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 56634 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:14.244303 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:14.247917 systemd-logind[1291]: New session 26 of user core. Sep 13 00:44:14.248868 systemd[1]: Started session-26.scope. 
Sep 13 00:44:14.700527 kubelet[2093]: I0913 00:44:14.700455 2093 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" path="/var/lib/kubelet/pods/5588c8ea-0741-4042-a01c-31bd7cf40b6c/volumes" Sep 13 00:44:14.701033 kubelet[2093]: I0913 00:44:14.701012 2093 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1ffa74-38c8-44cb-b1a6-7630de316962" path="/var/lib/kubelet/pods/cd1ffa74-38c8-44cb-b1a6-7630de316962/volumes" Sep 13 00:44:14.787714 sshd[3951]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:14.791274 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:56638.service. Sep 13 00:44:14.793578 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:56634.service: Deactivated successfully. Sep 13 00:44:14.794717 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:44:14.794829 systemd-logind[1291]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:44:14.796074 systemd-logind[1291]: Removed session 26. Sep 13 00:44:14.823971 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 56638 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:14.825281 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:14.828613 systemd-logind[1291]: New session 27 of user core. Sep 13 00:44:14.829347 systemd[1]: Started session-27.scope. 
Sep 13 00:44:14.853941 kubelet[2093]: E0913 00:44:14.853890 2093 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" containerName="mount-cgroup" Sep 13 00:44:14.853941 kubelet[2093]: E0913 00:44:14.853924 2093 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" containerName="mount-bpf-fs" Sep 13 00:44:14.853941 kubelet[2093]: E0913 00:44:14.853930 2093 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" containerName="clean-cilium-state" Sep 13 00:44:14.853941 kubelet[2093]: E0913 00:44:14.853936 2093 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" containerName="cilium-agent" Sep 13 00:44:14.853941 kubelet[2093]: E0913 00:44:14.853943 2093 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd1ffa74-38c8-44cb-b1a6-7630de316962" containerName="cilium-operator" Sep 13 00:44:14.853941 kubelet[2093]: E0913 00:44:14.853949 2093 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" containerName="apply-sysctl-overwrites" Sep 13 00:44:14.854577 kubelet[2093]: I0913 00:44:14.853974 2093 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1ffa74-38c8-44cb-b1a6-7630de316962" containerName="cilium-operator" Sep 13 00:44:14.854577 kubelet[2093]: I0913 00:44:14.853980 2093 memory_manager.go:354] "RemoveStaleState removing state" podUID="5588c8ea-0741-4042-a01c-31bd7cf40b6c" containerName="cilium-agent" Sep 13 00:44:14.953200 sshd[3963]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:14.956128 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:56646.service. Sep 13 00:44:14.959451 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:56638.service: Deactivated successfully. Sep 13 00:44:14.960771 systemd[1]: session-27.scope: Deactivated successfully. 
Sep 13 00:44:14.961247 systemd-logind[1291]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:44:14.961998 systemd-logind[1291]: Removed session 27.
Sep 13 00:44:14.967978 kubelet[2093]: E0913 00:44:14.965674 2093 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-kn7sg lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-nsvb8" podUID="74f10cf0-4160-47b2-9298-88f3d84d9cb0"
Sep 13 00:44:14.987570 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 56646 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:44:14.988895 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:44:14.992557 systemd-logind[1291]: New session 28 of user core.
Sep 13 00:44:14.993543 systemd[1]: Started session-28.scope.
Sep 13 00:44:14.995069 kubelet[2093]: I0913 00:44:14.995036 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-xtables-lock\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995069 kubelet[2093]: I0913 00:44:14.995071 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hubble-tls\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995185 kubelet[2093]: I0913 00:44:14.995093 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-run\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995185 kubelet[2093]: I0913 00:44:14.995107 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-cgroup\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995185 kubelet[2093]: I0913 00:44:14.995140 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-etc-cni-netd\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995185 kubelet[2093]: I0913 00:44:14.995156 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-lib-modules\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995185 kubelet[2093]: I0913 00:44:14.995172 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-ipsec-secrets\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995185 kubelet[2093]: I0913 00:44:14.995187 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-net\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995351 kubelet[2093]: I0913 00:44:14.995202 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-bpf-maps\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995351 kubelet[2093]: I0913 00:44:14.995225 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-clustermesh-secrets\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995351 kubelet[2093]: I0913 00:44:14.995241 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7sg\" (UniqueName: 
\"kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-kube-api-access-kn7sg\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995351 kubelet[2093]: I0913 00:44:14.995255 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hostproc\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995351 kubelet[2093]: I0913 00:44:14.995269 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-kernel\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995496 kubelet[2093]: I0913 00:44:14.995282 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-config-path\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:14.995496 kubelet[2093]: I0913 00:44:14.995297 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cni-path\") pod \"cilium-nsvb8\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " pod="kube-system/cilium-nsvb8" Sep 13 00:44:15.870451 update_engine[1295]: I0913 00:44:15.870383 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:44:15.870952 update_engine[1295]: I0913 00:44:15.870673 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:44:15.870952 update_engine[1295]: I0913 
00:44:15.870889 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:44:15.878292 update_engine[1295]: E0913 00:44:15.878252 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:44:15.878376 update_engine[1295]: I0913 00:44:15.878329 1295 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:44:16.001162 kubelet[2093]: I0913 00:44:16.001106 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-etc-cni-netd\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001162 kubelet[2093]: I0913 00:44:16.001166 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hostproc\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001685 kubelet[2093]: I0913 00:44:16.001190 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-xtables-lock\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001685 kubelet[2093]: I0913 00:44:16.001193 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.001685 kubelet[2093]: I0913 00:44:16.001231 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hubble-tls\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001685 kubelet[2093]: I0913 00:44:16.001253 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.001685 kubelet[2093]: I0913 00:44:16.001296 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hostproc" (OuterVolumeSpecName: "hostproc") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.001803 kubelet[2093]: I0913 00:44:16.001340 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.001803 kubelet[2093]: I0913 00:44:16.001644 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-bpf-maps\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001803 kubelet[2093]: I0913 00:44:16.001690 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn7sg\" (UniqueName: \"kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-kube-api-access-kn7sg\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001803 kubelet[2093]: I0913 00:44:16.001715 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-run\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001803 kubelet[2093]: I0913 00:44:16.001734 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-cgroup\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001803 kubelet[2093]: I0913 00:44:16.001753 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-lib-modules\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001937 kubelet[2093]: I0913 00:44:16.001805 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.001937 kubelet[2093]: I0913 00:44:16.001816 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-ipsec-secrets\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001937 kubelet[2093]: I0913 00:44:16.001837 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.001937 kubelet[2093]: I0913 00:44:16.001842 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-config-path\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.001937 kubelet[2093]: I0913 00:44:16.001876 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cni-path\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.001909 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-clustermesh-secrets\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.001934 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-kernel\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.001955 2093 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-net\") pod \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\" (UID: \"74f10cf0-4160-47b2-9298-88f3d84d9cb0\") " Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.001998 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.002016 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.002029 2093 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.002076 kubelet[2093]: I0913 00:44:16.002042 2093 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.002246 kubelet[2093]: I0913 00:44:16.002055 2093 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.002246 kubelet[2093]: I0913 00:44:16.002067 2093 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.002246 kubelet[2093]: I0913 00:44:16.002094 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.002246 kubelet[2093]: I0913 00:44:16.002118 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cni-path" (OuterVolumeSpecName: "cni-path") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.004170 kubelet[2093]: I0913 00:44:16.004137 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:44:16.004843 kubelet[2093]: I0913 00:44:16.004182 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.004843 kubelet[2093]: I0913 00:44:16.004684 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-kube-api-access-kn7sg" (OuterVolumeSpecName: "kube-api-access-kn7sg") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "kube-api-access-kn7sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:44:16.004843 kubelet[2093]: I0913 00:44:16.004726 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:44:16.006944 kubelet[2093]: I0913 00:44:16.006898 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:44:16.007044 systemd[1]: var-lib-kubelet-pods-74f10cf0\x2d4160\x2d47b2\x2d9298\x2d88f3d84d9cb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkn7sg.mount: Deactivated successfully. Sep 13 00:44:16.007275 systemd[1]: var-lib-kubelet-pods-74f10cf0\x2d4160\x2d47b2\x2d9298\x2d88f3d84d9cb0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:44:16.007544 kubelet[2093]: I0913 00:44:16.007260 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:44:16.007681 kubelet[2093]: I0913 00:44:16.007653 2093 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74f10cf0-4160-47b2-9298-88f3d84d9cb0" (UID: "74f10cf0-4160-47b2-9298-88f3d84d9cb0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:44:16.009992 systemd[1]: var-lib-kubelet-pods-74f10cf0\x2d4160\x2d47b2\x2d9298\x2d88f3d84d9cb0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:44:16.010139 systemd[1]: var-lib-kubelet-pods-74f10cf0\x2d4160\x2d47b2\x2d9298\x2d88f3d84d9cb0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:44:16.102642 kubelet[2093]: I0913 00:44:16.102572 2093 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn7sg\" (UniqueName: \"kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-kube-api-access-kn7sg\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102642 kubelet[2093]: I0913 00:44:16.102620 2093 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102642 kubelet[2093]: I0913 00:44:16.102642 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102851 kubelet[2093]: I0913 00:44:16.102657 2093 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 
00:44:16.102851 kubelet[2093]: I0913 00:44:16.102669 2093 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f10cf0-4160-47b2-9298-88f3d84d9cb0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102851 kubelet[2093]: I0913 00:44:16.102678 2093 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f10cf0-4160-47b2-9298-88f3d84d9cb0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102851 kubelet[2093]: I0913 00:44:16.102687 2093 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102851 kubelet[2093]: I0913 00:44:16.102695 2093 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f10cf0-4160-47b2-9298-88f3d84d9cb0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:16.102851 kubelet[2093]: I0913 00:44:16.102703 2093 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f10cf0-4160-47b2-9298-88f3d84d9cb0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:17.005701 kubelet[2093]: I0913 00:44:17.005660 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-hostproc\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.005701 kubelet[2093]: I0913 00:44:17.005697 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-cilium-run\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006180 kubelet[2093]: I0913 00:44:17.005725 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-xtables-lock\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006180 kubelet[2093]: I0913 00:44:17.005787 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/279716e4-f2b2-451c-a81f-8b39aac062cb-hubble-tls\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006180 kubelet[2093]: I0913 00:44:17.005820 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-bpf-maps\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006180 kubelet[2093]: I0913 00:44:17.005845 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-cilium-cgroup\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006180 kubelet[2093]: I0913 00:44:17.005868 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-etc-cni-netd\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " 
pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006180 kubelet[2093]: I0913 00:44:17.005895 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/279716e4-f2b2-451c-a81f-8b39aac062cb-clustermesh-secrets\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006328 kubelet[2093]: I0913 00:44:17.005917 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgvrq\" (UniqueName: \"kubernetes.io/projected/279716e4-f2b2-451c-a81f-8b39aac062cb-kube-api-access-sgvrq\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006328 kubelet[2093]: I0913 00:44:17.005954 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-lib-modules\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006328 kubelet[2093]: I0913 00:44:17.005992 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/279716e4-f2b2-451c-a81f-8b39aac062cb-cilium-config-path\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006328 kubelet[2093]: I0913 00:44:17.006014 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/279716e4-f2b2-451c-a81f-8b39aac062cb-cilium-ipsec-secrets\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn" Sep 13 00:44:17.006328 kubelet[2093]: I0913 00:44:17.006039 
2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-cni-path\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn"
Sep 13 00:44:17.006471 kubelet[2093]: I0913 00:44:17.006057 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-host-proc-sys-net\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn"
Sep 13 00:44:17.006471 kubelet[2093]: I0913 00:44:17.006073 2093 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/279716e4-f2b2-451c-a81f-8b39aac062cb-host-proc-sys-kernel\") pod \"cilium-6wzrn\" (UID: \"279716e4-f2b2-451c-a81f-8b39aac062cb\") " pod="kube-system/cilium-6wzrn"
Sep 13 00:44:17.279962 kubelet[2093]: E0913 00:44:17.279796 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:17.280665 env[1308]: time="2025-09-13T00:44:17.280581322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wzrn,Uid:279716e4-f2b2-451c-a81f-8b39aac062cb,Namespace:kube-system,Attempt:0,}"
Sep 13 00:44:17.363660 env[1308]: time="2025-09-13T00:44:17.363578118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:44:17.363660 env[1308]: time="2025-09-13T00:44:17.363618104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:44:17.363660 env[1308]: time="2025-09-13T00:44:17.363629255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:44:17.363918 env[1308]: time="2025-09-13T00:44:17.363774589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3 pid=4011 runtime=io.containerd.runc.v2
Sep 13 00:44:17.398517 env[1308]: time="2025-09-13T00:44:17.398431471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wzrn,Uid:279716e4-f2b2-451c-a81f-8b39aac062cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\""
Sep 13 00:44:17.399263 kubelet[2093]: E0913 00:44:17.399219 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:17.401715 env[1308]: time="2025-09-13T00:44:17.401657350Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:44:17.415443 env[1308]: time="2025-09-13T00:44:17.415278225Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba4b2f31b5a070ce6fe23dab53fe4d286e20d3b2cae0526c735cc72b3f38998f\""
Sep 13 00:44:17.415977 env[1308]: time="2025-09-13T00:44:17.415926291Z" level=info msg="StartContainer for \"ba4b2f31b5a070ce6fe23dab53fe4d286e20d3b2cae0526c735cc72b3f38998f\""
Sep 13 00:44:17.456495 env[1308]: time="2025-09-13T00:44:17.456426751Z" level=info msg="StartContainer for \"ba4b2f31b5a070ce6fe23dab53fe4d286e20d3b2cae0526c735cc72b3f38998f\" returns successfully"
Sep 13 00:44:17.496636 env[1308]: time="2025-09-13T00:44:17.496562783Z" level=info msg="shim disconnected" id=ba4b2f31b5a070ce6fe23dab53fe4d286e20d3b2cae0526c735cc72b3f38998f
Sep 13 00:44:17.496636 env[1308]: time="2025-09-13T00:44:17.496626864Z" level=warning msg="cleaning up after shim disconnected" id=ba4b2f31b5a070ce6fe23dab53fe4d286e20d3b2cae0526c735cc72b3f38998f namespace=k8s.io
Sep 13 00:44:17.496636 env[1308]: time="2025-09-13T00:44:17.496636613Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:17.503726 env[1308]: time="2025-09-13T00:44:17.503674440Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4093 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:17.947669 kubelet[2093]: E0913 00:44:17.947628 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:17.949332 env[1308]: time="2025-09-13T00:44:17.949288362Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:44:17.960987 env[1308]: time="2025-09-13T00:44:17.960941626Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c39c4f160ba8e133b5202e72bbdd8d08a841bcd663405653a0ba57a9be935adf\""
Sep 13 00:44:17.961793 env[1308]: time="2025-09-13T00:44:17.961752690Z" level=info msg="StartContainer for \"c39c4f160ba8e133b5202e72bbdd8d08a841bcd663405653a0ba57a9be935adf\""
Sep 13 00:44:18.002199 env[1308]: time="2025-09-13T00:44:18.002112604Z" level=info msg="StartContainer for \"c39c4f160ba8e133b5202e72bbdd8d08a841bcd663405653a0ba57a9be935adf\" returns successfully"
Sep 13 00:44:18.025914 env[1308]: time="2025-09-13T00:44:18.025863236Z" level=info msg="shim disconnected" id=c39c4f160ba8e133b5202e72bbdd8d08a841bcd663405653a0ba57a9be935adf
Sep 13 00:44:18.025914 env[1308]: time="2025-09-13T00:44:18.025912078Z" level=warning msg="cleaning up after shim disconnected" id=c39c4f160ba8e133b5202e72bbdd8d08a841bcd663405653a0ba57a9be935adf namespace=k8s.io
Sep 13 00:44:18.025914 env[1308]: time="2025-09-13T00:44:18.025921696Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:18.033264 env[1308]: time="2025-09-13T00:44:18.033211148Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4155 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:18.698406 kubelet[2093]: E0913 00:44:18.698268 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:18.700419 kubelet[2093]: I0913 00:44:18.700390 2093 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f10cf0-4160-47b2-9298-88f3d84d9cb0" path="/var/lib/kubelet/pods/74f10cf0-4160-47b2-9298-88f3d84d9cb0/volumes"
Sep 13 00:44:18.763786 kubelet[2093]: E0913 00:44:18.763711 2093 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:44:18.951902 kubelet[2093]: E0913 00:44:18.951490 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:18.954567 env[1308]: time="2025-09-13T00:44:18.954488821Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:44:18.970802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011643385.mount: Deactivated successfully.
Sep 13 00:44:18.975172 env[1308]: time="2025-09-13T00:44:18.975092143Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5547eca1bc60e1fe664cf229c34baa782c98049d4113a294dc451ca468d32c1c\""
Sep 13 00:44:18.975745 env[1308]: time="2025-09-13T00:44:18.975719028Z" level=info msg="StartContainer for \"5547eca1bc60e1fe664cf229c34baa782c98049d4113a294dc451ca468d32c1c\""
Sep 13 00:44:19.035091 env[1308]: time="2025-09-13T00:44:19.035025178Z" level=info msg="StartContainer for \"5547eca1bc60e1fe664cf229c34baa782c98049d4113a294dc451ca468d32c1c\" returns successfully"
Sep 13 00:44:19.061278 env[1308]: time="2025-09-13T00:44:19.061207521Z" level=info msg="shim disconnected" id=5547eca1bc60e1fe664cf229c34baa782c98049d4113a294dc451ca468d32c1c
Sep 13 00:44:19.061278 env[1308]: time="2025-09-13T00:44:19.061283494Z" level=warning msg="cleaning up after shim disconnected" id=5547eca1bc60e1fe664cf229c34baa782c98049d4113a294dc451ca468d32c1c namespace=k8s.io
Sep 13 00:44:19.061576 env[1308]: time="2025-09-13T00:44:19.061299163Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:19.070741 env[1308]: time="2025-09-13T00:44:19.070685356Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4210 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:19.112970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5547eca1bc60e1fe664cf229c34baa782c98049d4113a294dc451ca468d32c1c-rootfs.mount: Deactivated successfully.
Sep 13 00:44:19.954922 kubelet[2093]: E0913 00:44:19.954886 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:19.956451 env[1308]: time="2025-09-13T00:44:19.956403242Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:44:20.093810 env[1308]: time="2025-09-13T00:44:20.093748182Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38c7fc9a805a322bf21177c3bed116bcba944ef3ce127b95733e6c1f1387a71c\""
Sep 13 00:44:20.094771 env[1308]: time="2025-09-13T00:44:20.094720119Z" level=info msg="StartContainer for \"38c7fc9a805a322bf21177c3bed116bcba944ef3ce127b95733e6c1f1387a71c\""
Sep 13 00:44:20.160180 env[1308]: time="2025-09-13T00:44:20.160030423Z" level=info msg="StartContainer for \"38c7fc9a805a322bf21177c3bed116bcba944ef3ce127b95733e6c1f1387a71c\" returns successfully"
Sep 13 00:44:20.174201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38c7fc9a805a322bf21177c3bed116bcba944ef3ce127b95733e6c1f1387a71c-rootfs.mount: Deactivated successfully.
Sep 13 00:44:20.267731 env[1308]: time="2025-09-13T00:44:20.267613953Z" level=info msg="shim disconnected" id=38c7fc9a805a322bf21177c3bed116bcba944ef3ce127b95733e6c1f1387a71c
Sep 13 00:44:20.267731 env[1308]: time="2025-09-13T00:44:20.267668967Z" level=warning msg="cleaning up after shim disconnected" id=38c7fc9a805a322bf21177c3bed116bcba944ef3ce127b95733e6c1f1387a71c namespace=k8s.io
Sep 13 00:44:20.267731 env[1308]: time="2025-09-13T00:44:20.267678405Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:20.274618 env[1308]: time="2025-09-13T00:44:20.274576012Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4264 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:20.959120 kubelet[2093]: E0913 00:44:20.959087 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:20.960730 env[1308]: time="2025-09-13T00:44:20.960675161Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:44:20.976133 env[1308]: time="2025-09-13T00:44:20.976074305Z" level=info msg="CreateContainer within sandbox \"a61703701b71ba1731374e6268850fe87d24dfb8dc01707c0955e521f82f5bd3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8dbb1d3af07abe4c1be08a75e7a87a8c2fc8615f2373684522718470bdc1f45\""
Sep 13 00:44:20.976772 env[1308]: time="2025-09-13T00:44:20.976744381Z" level=info msg="StartContainer for \"e8dbb1d3af07abe4c1be08a75e7a87a8c2fc8615f2373684522718470bdc1f45\""
Sep 13 00:44:21.025116 env[1308]: time="2025-09-13T00:44:21.025068225Z" level=info msg="StartContainer for \"e8dbb1d3af07abe4c1be08a75e7a87a8c2fc8615f2373684522718470bdc1f45\" returns successfully"
Sep 13 00:44:21.291389 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:44:21.333716 kubelet[2093]: I0913 00:44:21.333657 2093 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:44:21Z","lastTransitionTime":"2025-09-13T00:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:44:21.963773 kubelet[2093]: E0913 00:44:21.963743 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:21.979391 kubelet[2093]: I0913 00:44:21.979303 2093 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6wzrn" podStartSLOduration=5.979275177 podStartE2EDuration="5.979275177s" podCreationTimestamp="2025-09-13 00:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:44:21.978783227 +0000 UTC m=+103.368064948" watchObservedRunningTime="2025-09-13 00:44:21.979275177 +0000 UTC m=+103.368556928"
Sep 13 00:44:23.280606 kubelet[2093]: E0913 00:44:23.280567 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:23.939925 systemd-networkd[1074]: lxc_health: Link UP
Sep 13 00:44:23.952400 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:44:23.953516 systemd-networkd[1074]: lxc_health: Gained carrier
Sep 13 00:44:25.178514 systemd-networkd[1074]: lxc_health: Gained IPv6LL
Sep 13 00:44:25.289503 kubelet[2093]: E0913 00:44:25.289459 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:25.875835 update_engine[1295]: I0913 00:44:25.875757 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:44:25.876265 update_engine[1295]: I0913 00:44:25.876029 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:44:25.876265 update_engine[1295]: I0913 00:44:25.876257 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:44:25.885933 update_engine[1295]: E0913 00:44:25.885905 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:44:25.885999 update_engine[1295]: I0913 00:44:25.885972 1295 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:44:25.885999 update_engine[1295]: I0913 00:44:25.885977 1295 omaha_request_action.cc:621] Omaha request response:
Sep 13 00:44:25.886063 update_engine[1295]: E0913 00:44:25.886050 1295 omaha_request_action.cc:640] Omaha request network transfer failed.
Sep 13 00:44:25.886088 update_engine[1295]: I0913 00:44:25.886076 1295 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 13 00:44:25.886088 update_engine[1295]: I0913 00:44:25.886078 1295 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:44:25.886088 update_engine[1295]: I0913 00:44:25.886081 1295 update_attempter.cc:306] Processing Done.
Sep 13 00:44:25.886169 update_engine[1295]: E0913 00:44:25.886092 1295 update_attempter.cc:619] Update failed.
Sep 13 00:44:25.886169 update_engine[1295]: I0913 00:44:25.886096 1295 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 13 00:44:25.886169 update_engine[1295]: I0913 00:44:25.886098 1295 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 13 00:44:25.886169 update_engine[1295]: I0913 00:44:25.886101 1295 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 13 00:44:25.886259 update_engine[1295]: I0913 00:44:25.886179 1295 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:44:25.886259 update_engine[1295]: I0913 00:44:25.886195 1295 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 13 00:44:25.886259 update_engine[1295]: I0913 00:44:25.886198 1295 omaha_request_action.cc:271] Request:
Sep 13 00:44:25.886259 update_engine[1295]:
Sep 13 00:44:25.886259 update_engine[1295]:
Sep 13 00:44:25.886259 update_engine[1295]:
Sep 13 00:44:25.886259 update_engine[1295]:
Sep 13 00:44:25.886259 update_engine[1295]:
Sep 13 00:44:25.886259 update_engine[1295]:
Sep 13 00:44:25.886259 update_engine[1295]: I0913 00:44:25.886202 1295 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:44:25.886474 update_engine[1295]: I0913 00:44:25.886292 1295 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:44:25.886474 update_engine[1295]: I0913 00:44:25.886404 1295 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:44:25.886690 locksmithd[1342]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 13 00:44:25.895038 update_engine[1295]: E0913 00:44:25.895011 1295 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895068 1295 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895073 1295 omaha_request_action.cc:621] Omaha request response:
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895076 1295 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895079 1295 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895081 1295 update_attempter.cc:306] Processing Done.
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895084 1295 update_attempter.cc:310] Error event sent.
Sep 13 00:44:25.895094 update_engine[1295]: I0913 00:44:25.895090 1295 update_check_scheduler.cc:74] Next update check in 45m59s
Sep 13 00:44:25.895424 locksmithd[1342]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 13 00:44:25.971096 kubelet[2093]: E0913 00:44:25.971031 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:26.972256 kubelet[2093]: E0913 00:44:26.972224 2093 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:29.697200 sshd[3977]: pam_unix(sshd:session): session closed for user core
Sep 13 00:44:29.699550 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:56646.service: Deactivated successfully.
Sep 13 00:44:29.700833 systemd-logind[1291]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:44:29.700885 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:44:29.701732 systemd-logind[1291]: Removed session 28.