Sep 6 00:16:51.902171 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:16:51.902207 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:16:51.902226 kernel: BIOS-provided physical RAM map:
Sep 6 00:16:51.902237 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 00:16:51.902246 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 00:16:51.902254 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 00:16:51.902266 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 6 00:16:51.902277 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 6 00:16:51.902291 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 00:16:51.902300 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 00:16:51.902312 kernel: NX (Execute Disable) protection: active
Sep 6 00:16:51.902322 kernel: SMBIOS 2.8 present.
Sep 6 00:16:51.902329 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 6 00:16:51.902336 kernel: Hypervisor detected: KVM
Sep 6 00:16:51.902344 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:16:51.902354 kernel: kvm-clock: cpu 0, msr 4119f001, primary cpu clock
Sep 6 00:16:51.902361 kernel: kvm-clock: using sched offset of 3043162333 cycles
Sep 6 00:16:51.902369 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:16:51.902381 kernel: tsc: Detected 2494.134 MHz processor
Sep 6 00:16:51.913447 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:16:51.913471 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:16:51.913482 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 6 00:16:51.913493 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:16:51.913511 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:16:51.913521 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 6 00:16:51.913531 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913542 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913552 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913561 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 6 00:16:51.913575 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913585 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913595 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913608 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:51.913618 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 6 00:16:51.913628 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 6 00:16:51.913638 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 6 00:16:51.913648 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 6 00:16:51.913657 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 6 00:16:51.913667 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 6 00:16:51.913677 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 6 00:16:51.913695 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 00:16:51.913705 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 00:16:51.913716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 6 00:16:51.913727 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 6 00:16:51.913739 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 6 00:16:51.913749 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 6 00:16:51.913763 kernel: Zone ranges:
Sep 6 00:16:51.913774 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:16:51.913786 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 6 00:16:51.913798 kernel: Normal empty
Sep 6 00:16:51.913809 kernel: Movable zone start for each node
Sep 6 00:16:51.913821 kernel: Early memory node ranges
Sep 6 00:16:51.913833 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 00:16:51.913845 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 6 00:16:51.913857 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 6 00:16:51.913871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:16:51.913888 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 00:16:51.913901 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 6 00:16:51.913913 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 00:16:51.913925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:16:51.913937 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:16:51.913950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:16:51.913961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:16:51.913973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:16:51.913989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:16:51.914005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:16:51.914016 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:16:51.914028 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:16:51.914039 kernel: TSC deadline timer available
Sep 6 00:16:51.914055 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 6 00:16:51.914067 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 6 00:16:51.914080 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:16:51.914091 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:16:51.914107 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 6 00:16:51.914120 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 6 00:16:51.914132 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 6 00:16:51.914144 kernel: pcpu-alloc: [0] 0 1
Sep 6 00:16:51.914158 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 6 00:16:51.914169 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 6 00:16:51.914180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 6 00:16:51.914190 kernel: Policy zone: DMA32
Sep 6 00:16:51.914203 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:16:51.914219 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:16:51.914230 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:16:51.914241 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 00:16:51.914252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:16:51.914263 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 6 00:16:51.914274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:16:51.914286 kernel: Kernel/User page tables isolation: enabled
Sep 6 00:16:51.914301 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:16:51.914317 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:16:51.914328 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:16:51.914341 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:16:51.914352 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:16:51.914363 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:16:51.914374 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:16:51.914431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:16:51.914446 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:16:51.914459 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 6 00:16:51.914476 kernel: random: crng init done
Sep 6 00:16:51.914488 kernel: Console: colour VGA+ 80x25
Sep 6 00:16:51.914501 kernel: printk: console [tty0] enabled
Sep 6 00:16:51.914513 kernel: printk: console [ttyS0] enabled
Sep 6 00:16:51.914526 kernel: ACPI: Core revision 20210730
Sep 6 00:16:51.914538 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 00:16:51.914550 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:16:51.914563 kernel: x2apic enabled
Sep 6 00:16:51.914574 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:16:51.914587 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:16:51.914605 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Sep 6 00:16:51.914617 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Sep 6 00:16:51.914638 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 6 00:16:51.914650 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 6 00:16:51.914661 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:16:51.914673 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:16:51.914686 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:16:51.914698 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 6 00:16:51.914715 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:16:51.914740 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:16:51.914754 kernel: MDS: Mitigation: Clear CPU buffers
Sep 6 00:16:51.914770 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:16:51.914785 kernel: active return thunk: its_return_thunk
Sep 6 00:16:51.914799 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:16:51.914812 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:16:51.914825 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:16:51.914838 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:16:51.914852 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:16:51.914869 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:16:51.914882 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:16:51.914894 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:16:51.914906 kernel: LSM: Security Framework initializing
Sep 6 00:16:51.914919 kernel: SELinux: Initializing.
Sep 6 00:16:51.914932 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:16:51.914945 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:16:51.914962 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 6 00:16:51.914975 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 6 00:16:51.914986 kernel: signal: max sigframe size: 1776
Sep 6 00:16:51.914999 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:16:51.915012 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 00:16:51.915024 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:16:51.915037 kernel: x86: Booting SMP configuration:
Sep 6 00:16:51.915050 kernel: .... node #0, CPUs: #1
Sep 6 00:16:51.915063 kernel: kvm-clock: cpu 1, msr 4119f041, secondary cpu clock
Sep 6 00:16:51.915119 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 6 00:16:51.915129 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:16:51.915137 kernel: smpboot: Max logical packages: 1
Sep 6 00:16:51.915145 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Sep 6 00:16:51.915153 kernel: devtmpfs: initialized
Sep 6 00:16:51.915162 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:16:51.915170 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:16:51.915179 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:16:51.915187 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:16:51.915199 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:16:51.915212 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:16:51.915224 kernel: audit: type=2000 audit(1757117810.600:1): state=initialized audit_enabled=0 res=1
Sep 6 00:16:51.915232 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:16:51.915241 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:16:51.915249 kernel: cpuidle: using governor menu
Sep 6 00:16:51.915262 kernel: ACPI: bus type PCI registered
Sep 6 00:16:51.915270 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:16:51.915279 kernel: dca service started, version 1.12.1
Sep 6 00:16:51.915290 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:16:51.915298 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:16:51.915307 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:16:51.915315 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:16:51.915323 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:16:51.915332 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:16:51.915340 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:16:51.915348 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:16:51.915356 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:16:51.915367 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:16:51.915375 kernel: ACPI: Interpreter enabled
Sep 6 00:16:51.915384 kernel: ACPI: PM: (supports S0 S5)
Sep 6 00:16:51.915410 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:16:51.915424 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:16:51.915438 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 6 00:16:51.915449 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:16:51.915706 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:16:51.915844 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 6 00:16:51.915865 kernel: acpiphp: Slot [3] registered
Sep 6 00:16:51.915878 kernel: acpiphp: Slot [4] registered
Sep 6 00:16:51.915892 kernel: acpiphp: Slot [5] registered
Sep 6 00:16:51.915905 kernel: acpiphp: Slot [6] registered
Sep 6 00:16:51.915916 kernel: acpiphp: Slot [7] registered
Sep 6 00:16:51.915928 kernel: acpiphp: Slot [8] registered
Sep 6 00:16:51.915941 kernel: acpiphp: Slot [9] registered
Sep 6 00:16:51.915954 kernel: acpiphp: Slot [10] registered
Sep 6 00:16:51.915974 kernel: acpiphp: Slot [11] registered
Sep 6 00:16:51.915989 kernel: acpiphp: Slot [12] registered
Sep 6 00:16:51.916000 kernel: acpiphp: Slot [13] registered
Sep 6 00:16:51.916008 kernel: acpiphp: Slot [14] registered
Sep 6 00:16:51.916017 kernel: acpiphp: Slot [15] registered
Sep 6 00:16:51.916025 kernel: acpiphp: Slot [16] registered
Sep 6 00:16:51.916034 kernel: acpiphp: Slot [17] registered
Sep 6 00:16:51.916042 kernel: acpiphp: Slot [18] registered
Sep 6 00:16:51.916050 kernel: acpiphp: Slot [19] registered
Sep 6 00:16:51.916061 kernel: acpiphp: Slot [20] registered
Sep 6 00:16:51.916070 kernel: acpiphp: Slot [21] registered
Sep 6 00:16:51.916078 kernel: acpiphp: Slot [22] registered
Sep 6 00:16:51.916086 kernel: acpiphp: Slot [23] registered
Sep 6 00:16:51.916095 kernel: acpiphp: Slot [24] registered
Sep 6 00:16:51.916103 kernel: acpiphp: Slot [25] registered
Sep 6 00:16:51.916111 kernel: acpiphp: Slot [26] registered
Sep 6 00:16:51.916119 kernel: acpiphp: Slot [27] registered
Sep 6 00:16:51.916128 kernel: acpiphp: Slot [28] registered
Sep 6 00:16:51.916136 kernel: acpiphp: Slot [29] registered
Sep 6 00:16:51.916147 kernel: acpiphp: Slot [30] registered
Sep 6 00:16:51.916155 kernel: acpiphp: Slot [31] registered
Sep 6 00:16:51.916163 kernel: PCI host bridge to bus 0000:00
Sep 6 00:16:51.916288 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:16:51.916373 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:16:51.916504 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:16:51.916619 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 6 00:16:51.916701 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 6 00:16:51.916788 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:16:51.916924 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 6 00:16:51.917053 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 6 00:16:51.917157 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 6 00:16:51.917249 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 6 00:16:51.917379 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 6 00:16:51.917495 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 6 00:16:51.917588 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 6 00:16:51.917687 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 6 00:16:51.917802 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 6 00:16:51.917892 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 6 00:16:51.917989 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 6 00:16:51.918082 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 6 00:16:51.918168 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 6 00:16:51.918282 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 6 00:16:51.918398 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 6 00:16:51.918499 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 6 00:16:51.918589 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 6 00:16:51.918716 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 6 00:16:51.918866 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 00:16:51.919024 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:16:51.919185 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 6 00:16:51.919330 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 6 00:16:51.919498 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 6 00:16:51.919673 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:16:51.919824 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 6 00:16:51.919975 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 6 00:16:51.920128 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 6 00:16:51.920308 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 6 00:16:51.928563 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 6 00:16:51.928694 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 6 00:16:51.928788 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 6 00:16:51.928897 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:16:51.929014 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 6 00:16:51.929132 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 6 00:16:51.929221 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 6 00:16:51.929367 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:16:51.929479 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 6 00:16:51.929574 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 6 00:16:51.929661 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 6 00:16:51.929781 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 6 00:16:51.929870 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 6 00:16:51.929958 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 6 00:16:51.929969 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:16:51.929978 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:16:51.929987 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:16:51.929999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:16:51.930007 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 6 00:16:51.930015 kernel: iommu: Default domain type: Translated
Sep 6 00:16:51.930024 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:16:51.930112 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 6 00:16:51.930199 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 00:16:51.930286 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 6 00:16:51.930297 kernel: vgaarb: loaded
Sep 6 00:16:51.930305 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:16:51.930317 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:16:51.930326 kernel: PTP clock support registered
Sep 6 00:16:51.930334 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:16:51.930342 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:16:51.930350 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 00:16:51.930359 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 6 00:16:51.930367 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 00:16:51.930376 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 00:16:51.930384 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:16:51.930408 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:16:51.930418 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:16:51.930426 kernel: pnp: PnP ACPI init
Sep 6 00:16:51.930434 kernel: pnp: PnP ACPI: found 4 devices
Sep 6 00:16:51.930443 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:16:51.930451 kernel: NET: Registered PF_INET protocol family
Sep 6 00:16:51.930460 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:16:51.930468 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 6 00:16:51.930477 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:16:51.930488 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:16:51.930497 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 6 00:16:51.930505 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 6 00:16:51.930513 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:16:51.930522 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:16:51.930530 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:16:51.930538 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:16:51.930629 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:16:51.930713 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:16:51.930823 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:16:51.930946 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 6 00:16:51.931067 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 6 00:16:51.931202 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 6 00:16:51.931314 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 6 00:16:51.931486 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 6 00:16:51.931507 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 6 00:16:51.931678 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 42477 usecs
Sep 6 00:16:51.931701 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:16:51.931715 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 6 00:16:51.931729 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Sep 6 00:16:51.931741 kernel: Initialise system trusted keyrings
Sep 6 00:16:51.931754 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 6 00:16:51.931767 kernel: Key type asymmetric registered
Sep 6 00:16:51.931779 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:16:51.931804 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:16:51.931824 kernel: io scheduler mq-deadline registered
Sep 6 00:16:51.931839 kernel: io scheduler kyber registered
Sep 6 00:16:51.931852 kernel: io scheduler bfq registered
Sep 6 00:16:51.931864 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:16:51.931877 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 6 00:16:51.931891 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 6 00:16:51.931903 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 6 00:16:51.931916 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:16:51.931927 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:16:51.931944 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:16:51.931958 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:16:51.931973 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:16:51.931987 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:16:51.932156 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 6 00:16:51.932290 kernel: rtc_cmos 00:03: registered as rtc0
Sep 6 00:16:51.932445 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T00:16:51 UTC (1757117811)
Sep 6 00:16:51.932570 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 6 00:16:51.932595 kernel: intel_pstate: CPU model not supported
Sep 6 00:16:51.932607 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:16:51.932621 kernel: Segment Routing with IPv6
Sep 6 00:16:51.932634 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:16:51.932647 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:16:51.932659 kernel: Key type dns_resolver registered
Sep 6 00:16:51.932671 kernel: IPI shorthand broadcast: enabled
Sep 6 00:16:51.932683 kernel: sched_clock: Marking stable (608340416, 81210221)->(801464760, -111914123)
Sep 6 00:16:51.932697 kernel: registered taskstats version 1
Sep 6 00:16:51.932714 kernel: Loading compiled-in X.509 certificates
Sep 6 00:16:51.932725 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:16:51.932739 kernel: Key type .fscrypt registered
Sep 6 00:16:51.932752 kernel: Key type fscrypt-provisioning registered
Sep 6 00:16:51.932764 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:16:51.932776 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:16:51.932789 kernel: ima: No architecture policies found
Sep 6 00:16:51.932801 kernel: clk: Disabling unused clocks
Sep 6 00:16:51.932817 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:16:51.932829 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:16:51.932840 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:16:51.932851 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:16:51.932863 kernel: Run /init as init process
Sep 6 00:16:51.932874 kernel: with arguments:
Sep 6 00:16:51.932913 kernel: /init
Sep 6 00:16:51.932929 kernel: with environment:
Sep 6 00:16:51.932941 kernel: HOME=/
Sep 6 00:16:51.932954 kernel: TERM=linux
Sep 6 00:16:51.932971 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:16:51.932989 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:16:51.933004 systemd[1]: Detected virtualization kvm.
Sep 6 00:16:51.933018 systemd[1]: Detected architecture x86-64.
Sep 6 00:16:51.933030 systemd[1]: Running in initrd.
Sep 6 00:16:51.933046 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:16:51.933060 systemd[1]: Hostname set to .
Sep 6 00:16:51.933078 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:16:51.933092 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:16:51.933105 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:16:51.933120 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:16:51.933132 systemd[1]: Reached target paths.target.
Sep 6 00:16:51.933145 systemd[1]: Reached target slices.target.
Sep 6 00:16:51.933162 systemd[1]: Reached target swap.target.
Sep 6 00:16:51.933176 systemd[1]: Reached target timers.target.
Sep 6 00:16:51.933196 systemd[1]: Listening on iscsid.socket.
Sep 6 00:16:51.933213 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:16:51.933228 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:16:51.933243 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:16:51.933258 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:16:51.933275 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:16:51.933290 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:16:51.933307 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:16:51.933326 systemd[1]: Reached target sockets.target.
Sep 6 00:16:51.933343 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:16:51.933363 systemd[1]: Finished network-cleanup.service.
Sep 6 00:16:51.933377 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:16:51.933531 systemd[1]: Starting systemd-journald.service...
Sep 6 00:16:51.933550 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:16:51.933574 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:16:51.933589 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:16:51.933606 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:16:51.933620 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:16:51.933635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:16:51.933656 systemd-journald[184]: Journal started
Sep 6 00:16:51.933751 systemd-journald[184]: Runtime Journal (/run/log/journal/43351ea2911b4ef780462f8f16a14f0a) is 4.9M, max 39.5M, 34.5M free.
Sep 6 00:16:51.922429 systemd-modules-load[185]: Inserted module 'overlay'
Sep 6 00:16:51.937748 systemd-resolved[186]: Positive Trust Anchors:
Sep 6 00:16:51.955997 systemd[1]: Started systemd-journald.service.
Sep 6 00:16:51.937756 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:16:51.937821 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:16:51.940559 systemd-resolved[186]: Defaulting to hostname 'linux'.
Sep 6 00:16:51.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.954492 systemd[1]: Started systemd-resolved.service.
Sep 6 00:16:51.966066 kernel: audit: type=1130 audit(1757117811.953:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.966099 kernel: audit: type=1130 audit(1757117811.954:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.966112 kernel: audit: type=1130 audit(1757117811.955:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.955194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:16:51.969761 kernel: audit: type=1130 audit(1757117811.966:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.955632 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:16:51.966213 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:16:51.969119 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:16:51.972397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:16:51.981057 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 6 00:16:51.981540 kernel: Bridge firewalling registered
Sep 6 00:16:51.985949 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:16:51.992062 kernel: audit: type=1130 audit(1757117811.986:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:51.989310 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:16:52.004027 dracut-cmdline[203]: dracut-dracut-053
Sep 6 00:16:52.006942 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:16:52.008311 kernel: SCSI subsystem initialized
Sep 6 00:16:52.019455 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:16:52.019519 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:16:52.027039 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:16:52.030520 systemd-modules-load[185]: Inserted module 'dm_multipath'
Sep 6 00:16:52.031349 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:16:52.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.032584 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:16:52.035603 kernel: audit: type=1130 audit(1757117812.031:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.041229 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:16:52.044736 kernel: audit: type=1130 audit(1757117812.041:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.084419 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:16:52.102421 kernel: iscsi: registered transport (tcp)
Sep 6 00:16:52.128427 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:16:52.128495 kernel: QLogic iSCSI HBA Driver
Sep 6 00:16:52.172757 kernel: audit: type=1130 audit(1757117812.169:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.169805 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:16:52.171125 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:16:52.223480 kernel: raid6: avx2x4 gen() 18175 MB/s
Sep 6 00:16:52.240449 kernel: raid6: avx2x4 xor() 7462 MB/s
Sep 6 00:16:52.257430 kernel: raid6: avx2x2 gen() 17976 MB/s
Sep 6 00:16:52.274523 kernel: raid6: avx2x2 xor() 18715 MB/s
Sep 6 00:16:52.291455 kernel: raid6: avx2x1 gen() 13122 MB/s
Sep 6 00:16:52.308482 kernel: raid6: avx2x1 xor() 17934 MB/s
Sep 6 00:16:52.325563 kernel: raid6: sse2x4 gen() 12389 MB/s
Sep 6 00:16:52.342480 kernel: raid6: sse2x4 xor() 6828 MB/s
Sep 6 00:16:52.359475 kernel: raid6: sse2x2 gen() 13732 MB/s
Sep 6 00:16:52.376480 kernel: raid6: sse2x2 xor() 8538 MB/s
Sep 6 00:16:52.393484 kernel: raid6: sse2x1 gen() 11958 MB/s
Sep 6 00:16:52.410662 kernel: raid6: sse2x1 xor() 5843 MB/s
Sep 6 00:16:52.410764 kernel: raid6: using algorithm avx2x4 gen() 18175 MB/s
Sep 6 00:16:52.410778 kernel: raid6: .... xor() 7462 MB/s, rmw enabled
Sep 6 00:16:52.411746 kernel: raid6: using avx2x2 recovery algorithm
Sep 6 00:16:52.424454 kernel: xor: automatically using best checksumming function avx
Sep 6 00:16:52.527446 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 6 00:16:52.542272 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:16:52.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.543804 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:16:52.547438 kernel: audit: type=1130 audit(1757117812.542:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.542000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:16:52.542000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:16:52.560133 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Sep 6 00:16:52.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.565624 systemd[1]: Started systemd-udevd.service.
Sep 6 00:16:52.569042 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:16:52.588465 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation
Sep 6 00:16:52.628136 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:16:52.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.630124 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:16:52.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:52.677946 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:16:52.731483 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 6 00:16:52.765477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:16:52.765496 kernel: GPT:9289727 != 125829119
Sep 6 00:16:52.765508 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:16:52.765525 kernel: GPT:9289727 != 125829119
Sep 6 00:16:52.765536 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:16:52.765547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:16:52.765559 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:16:52.765570 kernel: scsi host0: Virtio SCSI HBA
Sep 6 00:16:52.767643 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 6 00:16:52.797421 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 6 00:16:52.818414 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (435)
Sep 6 00:16:52.824822 kernel: AES CTR mode by8 optimization enabled
Sep 6 00:16:52.821077 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 00:16:52.831106 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 00:16:52.831763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 00:16:52.834023 systemd[1]: Starting disk-uuid.service...
Sep 6 00:16:52.839602 kernel: libata version 3.00 loaded.
Sep 6 00:16:52.843296 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 00:16:52.923055 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 6 00:16:52.923256 kernel: scsi host1: ata_piix
Sep 6 00:16:52.923414 kernel: scsi host2: ata_piix
Sep 6 00:16:52.923526 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 6 00:16:52.923539 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 6 00:16:52.923551 kernel: ACPI: bus type USB registered
Sep 6 00:16:52.923562 kernel: usbcore: registered new interface driver usbfs
Sep 6 00:16:52.923573 kernel: usbcore: registered new interface driver hub
Sep 6 00:16:52.923585 kernel: usbcore: registered new device driver usb
Sep 6 00:16:52.923601 disk-uuid[456]: Primary Header is updated.
Sep 6 00:16:52.923601 disk-uuid[456]: Secondary Entries is updated.
Sep 6 00:16:52.923601 disk-uuid[456]: Secondary Header is updated.
Sep 6 00:16:52.927020 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:16:53.033421 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Sep 6 00:16:53.036420 kernel: ehci-pci: EHCI PCI platform driver
Sep 6 00:16:53.040414 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Sep 6 00:16:53.059037 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 6 00:16:53.062269 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 6 00:16:53.062414 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 6 00:16:53.062530 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Sep 6 00:16:53.062629 kernel: hub 1-0:1.0: USB hub found
Sep 6 00:16:53.062778 kernel: hub 1-0:1.0: 2 ports detected
Sep 6 00:16:53.855417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:16:53.855987 disk-uuid[458]: The operation has completed successfully.
Sep 6 00:16:53.860805 kernel: block device autoloading is deprecated and will be removed.
Sep 6 00:16:53.898915 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:16:53.899019 systemd[1]: Finished disk-uuid.service.
Sep 6 00:16:53.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:53.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:53.900334 systemd[1]: Starting verity-setup.service...
Sep 6 00:16:53.917414 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 6 00:16:53.967350 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 00:16:53.968867 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 00:16:53.971445 systemd[1]: Finished verity-setup.service.
Sep 6 00:16:53.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.054430 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 00:16:54.055522 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 00:16:54.056022 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 00:16:54.057025 systemd[1]: Starting ignition-setup.service...
Sep 6 00:16:54.058262 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 00:16:54.072919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:16:54.072979 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:16:54.072992 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:16:54.093436 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:16:54.100183 systemd[1]: Finished ignition-setup.service.
Sep 6 00:16:54.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.101812 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 00:16:54.218366 ignition[610]: Ignition 2.14.0
Sep 6 00:16:54.218382 ignition[610]: Stage: fetch-offline
Sep 6 00:16:54.218482 ignition[610]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:54.218528 ignition[610]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:54.220708 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 00:16:54.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.221000 audit: BPF prog-id=9 op=LOAD
Sep 6 00:16:54.223155 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:16:54.225428 ignition[610]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:54.226075 ignition[610]: parsed url from cmdline: ""
Sep 6 00:16:54.226136 ignition[610]: no config URL provided
Sep 6 00:16:54.226538 ignition[610]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:16:54.226555 ignition[610]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:16:54.226561 ignition[610]: failed to fetch config: resource requires networking
Sep 6 00:16:54.228045 ignition[610]: Ignition finished successfully
Sep 6 00:16:54.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.229657 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 00:16:54.248239 systemd-networkd[690]: lo: Link UP
Sep 6 00:16:54.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.248250 systemd-networkd[690]: lo: Gained carrier
Sep 6 00:16:54.248820 systemd-networkd[690]: Enumeration completed
Sep 6 00:16:54.249186 systemd-networkd[690]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:16:54.249352 systemd[1]: Started systemd-networkd.service.
Sep 6 00:16:54.250160 systemd[1]: Reached target network.target.
Sep 6 00:16:54.250490 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 6 00:16:54.251361 systemd[1]: Starting ignition-fetch.service...
Sep 6 00:16:54.252533 systemd[1]: Starting iscsiuio.service...
Sep 6 00:16:54.262538 systemd-networkd[690]: eth1: Link UP
Sep 6 00:16:54.262560 systemd-networkd[690]: eth1: Gained carrier
Sep 6 00:16:54.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.275423 ignition[692]: Ignition 2.14.0
Sep 6 00:16:54.269760 systemd[1]: Started iscsiuio.service.
Sep 6 00:16:54.275431 ignition[692]: Stage: fetch
Sep 6 00:16:54.279365 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:16:54.279365 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 6 00:16:54.279365 iscsid[700]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 6 00:16:54.279365 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 00:16:54.279365 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 00:16:54.279365 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:16:54.279365 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 00:16:54.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.271082 systemd[1]: Starting iscsid.service...
Sep 6 00:16:54.275559 ignition[692]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:54.278139 systemd-networkd[690]: eth0: Link UP
Sep 6 00:16:54.275578 ignition[692]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:54.278143 systemd-networkd[690]: eth0: Gained carrier
Sep 6 00:16:54.277621 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:54.279471 systemd[1]: Started iscsid.service.
Sep 6 00:16:54.277747 ignition[692]: parsed url from cmdline: ""
Sep 6 00:16:54.281660 systemd[1]: Starting dracut-initqueue.service...
Sep 6 00:16:54.277752 ignition[692]: no config URL provided
Sep 6 00:16:54.291498 systemd-networkd[690]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253
Sep 6 00:16:54.277760 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:16:54.277771 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:16:54.277802 ignition[692]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 6 00:16:54.295524 systemd-networkd[690]: eth0: DHCPv4 address 143.198.146.98/20, gateway 143.198.144.1 acquired from 169.254.169.253
Sep 6 00:16:54.299560 systemd[1]: Finished dracut-initqueue.service.
Sep 6 00:16:54.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.300109 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 00:16:54.300820 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:16:54.301348 systemd[1]: Reached target remote-fs.target.
Sep 6 00:16:54.302841 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 00:16:54.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.313373 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 00:16:54.325398 ignition[692]: GET result: OK
Sep 6 00:16:54.325554 ignition[692]: parsing config with SHA512: 1332c7330478c51dc3ab75983f5b8da805bb035dac766e2b5a1da13f1e7b4dd29e43d8c893878bd18eeef36cab3fadcc6bfad0e26a1e7f68e76dcaa7fe96a8d9
Sep 6 00:16:54.336056 unknown[692]: fetched base config from "system"
Sep 6 00:16:54.336068 unknown[692]: fetched base config from "system"
Sep 6 00:16:54.336576 ignition[692]: fetch: fetch complete
Sep 6 00:16:54.336075 unknown[692]: fetched user config from "digitalocean"
Sep 6 00:16:54.336582 ignition[692]: fetch: fetch passed
Sep 6 00:16:54.336651 ignition[692]: Ignition finished successfully
Sep 6 00:16:54.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.340847 systemd[1]: Finished ignition-fetch.service.
Sep 6 00:16:54.342138 systemd[1]: Starting ignition-kargs.service...
Sep 6 00:16:54.360150 ignition[715]: Ignition 2.14.0
Sep 6 00:16:54.360835 ignition[715]: Stage: kargs
Sep 6 00:16:54.361337 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:54.361843 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:54.364066 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:54.367155 ignition[715]: kargs: kargs passed
Sep 6 00:16:54.367655 ignition[715]: Ignition finished successfully
Sep 6 00:16:54.369008 systemd[1]: Finished ignition-kargs.service.
Sep 6 00:16:54.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.370411 systemd[1]: Starting ignition-disks.service...
Sep 6 00:16:54.378952 ignition[721]: Ignition 2.14.0
Sep 6 00:16:54.378962 ignition[721]: Stage: disks
Sep 6 00:16:54.379081 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:54.379116 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:54.380917 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:54.382146 ignition[721]: disks: disks passed
Sep 6 00:16:54.382199 ignition[721]: Ignition finished successfully
Sep 6 00:16:54.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.383329 systemd[1]: Finished ignition-disks.service.
Sep 6 00:16:54.383794 systemd[1]: Reached target initrd-root-device.target.
Sep 6 00:16:54.384099 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:16:54.384373 systemd[1]: Reached target local-fs.target.
Sep 6 00:16:54.384689 systemd[1]: Reached target sysinit.target.
Sep 6 00:16:54.385289 systemd[1]: Reached target basic.target.
Sep 6 00:16:54.387046 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 00:16:54.405675 systemd-fsck[729]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 6 00:16:54.408378 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 00:16:54.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.409619 systemd[1]: Mounting sysroot.mount...
Sep 6 00:16:54.420415 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 00:16:54.421124 systemd[1]: Mounted sysroot.mount.
Sep 6 00:16:54.421570 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 00:16:54.423589 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 00:16:54.425034 systemd[1]: Starting flatcar-digitalocean-network.service...
Sep 6 00:16:54.426916 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 6 00:16:54.427330 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:16:54.427377 systemd[1]: Reached target ignition-diskful.target.
Sep 6 00:16:54.433370 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 00:16:54.440060 systemd[1]: Starting initrd-setup-root.service...
Sep 6 00:16:54.446901 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:16:54.461523 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:16:54.470327 initrd-setup-root[759]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:16:54.478664 initrd-setup-root[769]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:16:54.542949 systemd[1]: Finished initrd-setup-root.service.
Sep 6 00:16:54.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.545108 systemd[1]: Starting ignition-mount.service...
Sep 6 00:16:54.547084 systemd[1]: Starting sysroot-boot.service...
Sep 6 00:16:54.552003 coreos-metadata[735]: Sep 06 00:16:54.551 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:16:54.562855 bash[787]: umount: /sysroot/usr/share/oem: not mounted.
Sep 6 00:16:54.570663 coreos-metadata[735]: Sep 06 00:16:54.569 INFO Fetch successful
Sep 6 00:16:54.576244 coreos-metadata[736]: Sep 06 00:16:54.576 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:16:54.577765 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 6 00:16:54.577859 systemd[1]: Finished flatcar-digitalocean-network.service.
Sep 6 00:16:54.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.585012 ignition[788]: INFO : Ignition 2.14.0
Sep 6 00:16:54.585012 ignition[788]: INFO : Stage: mount
Sep 6 00:16:54.585882 ignition[788]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:54.585882 ignition[788]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:54.586940 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:54.587521 ignition[788]: INFO : mount: mount passed
Sep 6 00:16:54.587877 ignition[788]: INFO : Ignition finished successfully
Sep 6 00:16:54.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.591571 coreos-metadata[736]: Sep 06 00:16:54.589 INFO Fetch successful
Sep 6 00:16:54.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.588314 systemd[1]: Finished ignition-mount.service.
Sep 6 00:16:54.591340 systemd[1]: Finished sysroot-boot.service.
Sep 6 00:16:54.594760 coreos-metadata[736]: Sep 06 00:16:54.594 INFO wrote hostname ci-3510.3.8-n-81199f28b8 to /sysroot/etc/hostname
Sep 6 00:16:54.596255 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 6 00:16:54.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:54.987182 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:16:54.995691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (795)
Sep 6 00:16:55.003405 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:16:55.003470 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:16:55.003483 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:16:55.008089 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:16:55.009714 systemd[1]: Starting ignition-files.service...
Sep 6 00:16:55.028477 ignition[815]: INFO : Ignition 2.14.0
Sep 6 00:16:55.029170 ignition[815]: INFO : Stage: files
Sep 6 00:16:55.029684 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:55.030241 ignition[815]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:55.033020 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:55.036032 ignition[815]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:16:55.037712 ignition[815]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:16:55.037712 ignition[815]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:16:55.041590 ignition[815]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:16:55.042331 ignition[815]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:16:55.043326 unknown[815]: wrote ssh authorized keys file for user: core
Sep 6 00:16:55.044047 ignition[815]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:16:55.044768 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 6 00:16:55.045309 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 6 00:16:55.045309 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:16:55.045309 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 6 00:16:55.113426 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:16:55.467928 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:16:55.468735 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:16:55.468735 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:16:55.669510 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 6 00:16:55.760677 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:16:55.760677 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:16:55.761949 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:16:55.769111 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 6 00:16:56.155596 systemd-networkd[690]: eth1: Gained IPv6LL
Sep 6 00:16:56.214527 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 6 00:16:56.283511 systemd-networkd[690]: eth0: Gained IPv6LL
Sep 6 00:16:56.557133 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:16:56.557133 ignition[815]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:16:56.557133 ignition[815]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:16:56.557133 ignition[815]: INFO : files: op(e): [started] processing unit "containerd.service"
Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(e): op(f): [started]
writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(e): [finished] processing unit "containerd.service" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:16:56.559918 ignition[815]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:16:56.566988 ignition[815]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:16:56.567693 ignition[815]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:16:56.567693 ignition[815]: INFO : files: files passed Sep 6 00:16:56.567693 ignition[815]: INFO : Ignition finished successfully Sep 6 00:16:56.569927 systemd[1]: Finished 
ignition-files.service. Sep 6 00:16:56.576555 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:16:56.576585 kernel: audit: type=1130 audit(1757117816.569:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.571652 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:16:56.573764 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:16:56.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.575069 systemd[1]: Starting ignition-quench.service... Sep 6 00:16:56.590646 kernel: audit: type=1130 audit(1757117816.579:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.590695 kernel: audit: type=1131 audit(1757117816.579:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.590730 kernel: audit: type=1130 audit(1757117816.585:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:56.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.579717 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:16:56.592084 initrd-setup-root-after-ignition[840]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:16:56.579837 systemd[1]: Finished ignition-quench.service. Sep 6 00:16:56.585662 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:16:56.586327 systemd[1]: Reached target ignition-complete.target. Sep 6 00:16:56.590928 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:16:56.614857 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:16:56.622687 kernel: audit: type=1130 audit(1757117816.615:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.622740 kernel: audit: type=1131 audit(1757117816.615:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:56.614992 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:16:56.615678 systemd[1]: Reached target initrd-fs.target. Sep 6 00:16:56.624029 systemd[1]: Reached target initrd.target. Sep 6 00:16:56.624873 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:16:56.626595 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:16:56.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.645844 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:16:56.657868 kernel: audit: type=1130 audit(1757117816.646:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.648143 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:16:56.666613 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:16:56.667124 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:16:56.667947 systemd[1]: Stopped target timers.target. Sep 6 00:16:56.668607 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:16:56.668750 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:16:56.672252 kernel: audit: type=1131 audit(1757117816.668:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.669282 systemd[1]: Stopped target initrd.target. Sep 6 00:16:56.671932 systemd[1]: Stopped target basic.target. 
Sep 6 00:16:56.672533 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:16:56.673133 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:16:56.673797 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:16:56.674476 systemd[1]: Stopped target remote-fs.target. Sep 6 00:16:56.675059 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:16:56.675803 systemd[1]: Stopped target sysinit.target. Sep 6 00:16:56.676356 systemd[1]: Stopped target local-fs.target. Sep 6 00:16:56.677036 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:16:56.677576 systemd[1]: Stopped target swap.target. Sep 6 00:16:56.681446 kernel: audit: type=1131 audit(1757117816.678:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.678091 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:16:56.678268 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:16:56.678886 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:16:56.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.685412 kernel: audit: type=1131 audit(1757117816.682:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.681803 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:16:56.681985 systemd[1]: Stopped dracut-initqueue.service. 
Sep 6 00:16:56.683017 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:16:56.683253 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:16:56.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.687798 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:16:56.688440 systemd[1]: Stopped ignition-files.service. Sep 6 00:16:56.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.689456 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 00:16:56.690119 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 00:16:56.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.692527 systemd[1]: Stopping ignition-mount.service... Sep 6 00:16:56.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.692931 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:16:56.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:56.693055 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:16:56.694699 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:16:56.695100 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:16:56.695296 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:16:56.695850 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:16:56.695962 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:16:56.708420 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:16:56.708678 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:16:56.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.711531 ignition[853]: INFO : Ignition 2.14.0 Sep 6 00:16:56.711531 ignition[853]: INFO : Stage: umount Sep 6 00:16:56.711531 ignition[853]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:16:56.711531 ignition[853]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:16:56.715480 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:16:56.718265 ignition[853]: INFO : umount: umount passed Sep 6 00:16:56.718867 ignition[853]: INFO : Ignition finished successfully Sep 6 00:16:56.721901 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:16:56.722512 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:16:56.722614 systemd[1]: Stopped ignition-mount.service. 
Sep 6 00:16:56.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.723598 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:16:56.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.723656 systemd[1]: Stopped ignition-disks.service. Sep 6 00:16:56.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.724141 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:16:56.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.724181 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:16:56.724905 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:16:56.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.724946 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:16:56.725564 systemd[1]: Stopped target network.target. Sep 6 00:16:56.726262 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:16:56.726334 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:16:56.727102 systemd[1]: Stopped target paths.target. Sep 6 00:16:56.728597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 6 00:16:56.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.732532 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:16:56.732957 systemd[1]: Stopped target slices.target. Sep 6 00:16:56.733231 systemd[1]: Stopped target sockets.target. Sep 6 00:16:56.733575 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:16:56.733612 systemd[1]: Closed iscsid.socket. Sep 6 00:16:56.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.733908 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:16:56.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.733934 systemd[1]: Closed iscsiuio.socket. Sep 6 00:16:56.734278 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:16:56.734353 systemd[1]: Stopped ignition-setup.service. Sep 6 00:16:56.734976 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:16:56.735863 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:16:56.736581 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:16:56.736687 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:16:56.737316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:16:56.737365 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:16:56.739889 systemd-networkd[690]: eth1: DHCPv6 lease lost Sep 6 00:16:56.743599 systemd-networkd[690]: eth0: DHCPv6 lease lost Sep 6 00:16:56.745541 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:16:56.745686 systemd[1]: Stopped systemd-resolved.service. 
Sep 6 00:16:56.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.747306 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:16:56.747463 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:16:56.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.748000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:16:56.748000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:16:56.749006 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:16:56.749065 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:16:56.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.751267 systemd[1]: Stopping network-cleanup.service... Sep 6 00:16:56.751767 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:16:56.751867 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:16:56.752584 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:16:56.752663 systemd[1]: Stopped systemd-sysctl.service. 
Sep 6 00:16:56.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.753214 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:16:56.753270 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:16:56.753852 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:16:56.757135 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:16:56.763015 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:16:56.763243 systemd[1]: Stopped network-cleanup.service. Sep 6 00:16:56.770557 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:16:56.770779 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:16:56.771884 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:16:56.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.771958 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:16:56.773065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:16:56.773122 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:16:56.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.781403 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:16:56.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.781530 systemd[1]: Stopped dracut-pre-udev.service. 
Sep 6 00:16:56.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.782247 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:16:56.782335 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:16:56.782953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:16:56.783023 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:16:56.785168 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:16:56.785917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:16:56.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.786021 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:16:56.797842 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:16:56.798046 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:16:56.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:56.799176 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:16:56.801603 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:16:56.812333 systemd[1]: Switching root. 
Sep 6 00:16:56.813000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:16:56.813000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:16:56.821000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:16:56.821000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:16:56.821000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:16:56.841205 iscsid[700]: iscsid shutting down. Sep 6 00:16:56.842030 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 6 00:16:56.842122 systemd-journald[184]: Journal stopped Sep 6 00:17:00.132493 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:17:00.132581 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:17:00.132602 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:17:00.132636 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:17:00.132650 kernel: SELinux: policy capability open_perms=1 Sep 6 00:17:00.132662 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:17:00.132684 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:17:00.132700 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:17:00.132722 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:17:00.132735 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:17:00.132749 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:17:00.132763 systemd[1]: Successfully loaded SELinux policy in 47.053ms. Sep 6 00:17:00.132786 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.558ms. Sep 6 00:17:00.132799 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:17:00.132813 systemd[1]: Detected virtualization kvm. 
Sep 6 00:17:00.132830 systemd[1]: Detected architecture x86-64.
Sep 6 00:17:00.132842 systemd[1]: Detected first boot.
Sep 6 00:17:00.132855 systemd[1]: Hostname set to .
Sep 6 00:17:00.132869 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:17:00.132882 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:17:00.132894 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:17:00.132907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:17:00.132930 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:17:00.132944 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:17:00.132965 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:17:00.132981 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 6 00:17:00.132994 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:17:00.133006 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:17:00.133018 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 00:17:00.133032 systemd[1]: Created slice system-getty.slice.
Sep 6 00:17:00.133043 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:17:00.133056 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:17:00.133068 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:17:00.133081 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:17:00.133097 systemd[1]: Created slice user.slice.
Sep 6 00:17:00.133109 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:17:00.133121 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:17:00.133133 systemd[1]: Set up automount boot.automount.
Sep 6 00:17:00.133146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:17:00.133157 systemd[1]: Reached target integritysetup.target.
Sep 6 00:17:00.133170 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:17:00.133186 systemd[1]: Reached target remote-fs.target.
Sep 6 00:17:00.133203 systemd[1]: Reached target slices.target.
Sep 6 00:17:00.133215 systemd[1]: Reached target swap.target.
Sep 6 00:17:00.133228 systemd[1]: Reached target torcx.target.
Sep 6 00:17:00.133246 systemd[1]: Reached target veritysetup.target.
Sep 6 00:17:00.133259 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:17:00.133271 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:17:00.133283 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:17:00.133295 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:17:00.133310 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:17:00.133323 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:17:00.133335 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:17:00.133348 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:17:00.133360 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:17:00.133373 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:17:00.133397 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:17:00.133409 systemd[1]: Mounting media.mount...
Sep 6 00:17:00.133422 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:00.133437 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:17:00.133449 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:17:00.133461 systemd[1]: Mounting tmp.mount...
Sep 6 00:17:00.133473 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:17:00.133485 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:17:00.133498 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:17:00.133509 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:17:00.137480 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:17:00.137542 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:17:00.137563 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:17:00.137577 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:17:00.137589 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:17:00.137603 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:17:00.137615 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 6 00:17:00.137629 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 6 00:17:00.137641 systemd[1]: Starting systemd-journald.service...
Sep 6 00:17:00.137654 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:17:00.137668 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:17:00.137684 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:17:00.137697 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:17:00.137710 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:00.137723 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:17:00.137735 kernel: fuse: init (API version 7.34)
Sep 6 00:17:00.137749 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:17:00.137761 systemd[1]: Mounted media.mount.
Sep 6 00:17:00.137773 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:17:00.137785 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:17:00.137801 systemd[1]: Mounted tmp.mount.
Sep 6 00:17:00.137814 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:17:00.137827 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:17:00.137840 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:17:00.137853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:17:00.137865 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:17:00.137878 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:17:00.137890 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:17:00.137903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:17:00.137924 systemd-journald[998]: Journal started
Sep 6 00:17:00.137994 systemd-journald[998]: Runtime Journal (/run/log/journal/43351ea2911b4ef780462f8f16a14f0a) is 4.9M, max 39.5M, 34.5M free.
Sep 6 00:16:59.976000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 6 00:17:00.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.119000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:17:00.119000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffee86d1ca0 a2=4000 a3=7ffee86d1d3c items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:17:00.119000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:17:00.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.144543 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:17:00.144617 systemd[1]: Started systemd-journald.service.
Sep 6 00:17:00.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.144359 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:17:00.150821 kernel: loop: module loaded
Sep 6 00:17:00.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.145199 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:17:00.146311 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:17:00.148765 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:17:00.149528 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:17:00.150158 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:17:00.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.155925 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:17:00.156859 systemd[1]: Reached target network-pre.target.
Sep 6 00:17:00.160106 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:17:00.161733 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:17:00.162107 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:17:00.168465 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:17:00.171384 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:17:00.173884 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:17:00.175364 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:17:00.180859 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:17:00.182246 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:17:00.185632 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:17:00.186049 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:17:00.199927 systemd-journald[998]: Time spent on flushing to /var/log/journal/43351ea2911b4ef780462f8f16a14f0a is 68.647ms for 1079 entries.
Sep 6 00:17:00.199927 systemd-journald[998]: System Journal (/var/log/journal/43351ea2911b4ef780462f8f16a14f0a) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:17:00.277854 systemd-journald[998]: Received client request to flush runtime journal.
Sep 6 00:17:00.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.203449 systemd[1]: Finished systemd-random-seed.service.
Sep 6 00:17:00.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.204017 systemd[1]: Reached target first-boot-complete.target.
Sep 6 00:17:00.220671 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:17:00.234582 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:17:00.236615 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:17:00.278996 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 00:17:00.282207 systemd[1]: Finished systemd-sysusers.service.
Sep 6 00:17:00.284036 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:17:00.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.299364 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:17:00.301363 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:17:00.315956 udevadm[1051]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 00:17:00.321373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:17:00.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.851091 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 00:17:00.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.852994 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:17:00.879788 systemd-udevd[1054]: Using default interface naming scheme 'v252'.
Sep 6 00:17:00.904477 systemd[1]: Started systemd-udevd.service.
Sep 6 00:17:00.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.906918 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:17:00.927428 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:17:00.974363 systemd[1]: Found device dev-ttyS0.device.
Sep 6 00:17:00.978217 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:17:00.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:00.988820 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:00.989116 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:17:00.990504 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:17:00.993882 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:17:00.995670 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:17:00.996147 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:17:00.996255 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:17:00.996416 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:00.997064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:17:00.997244 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:17:01.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.005566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:17:01.005794 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:17:01.006487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:17:01.006695 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:17:01.007274 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:17:01.007341 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:17:01.074740 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:17:01.111314 systemd-networkd[1055]: lo: Link UP
Sep 6 00:17:01.111716 systemd-networkd[1055]: lo: Gained carrier
Sep 6 00:17:01.112330 systemd-networkd[1055]: Enumeration completed
Sep 6 00:17:01.112688 systemd[1]: Started systemd-networkd.service.
Sep 6 00:17:01.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.113779 systemd-networkd[1055]: eth1: Configuring with /run/systemd/network/10-c6:53:98:1c:77:64.network.
Sep 6 00:17:01.115642 systemd-networkd[1055]: eth0: Configuring with /run/systemd/network/10-12:91:4c:c0:ef:98.network.
Sep 6 00:17:01.116805 systemd-networkd[1055]: eth1: Link UP
Sep 6 00:17:01.116920 systemd-networkd[1055]: eth1: Gained carrier
Sep 6 00:17:01.120419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 6 00:17:01.121724 systemd-networkd[1055]: eth0: Link UP
Sep 6 00:17:01.121733 systemd-networkd[1055]: eth0: Gained carrier
Sep 6 00:17:01.144446 kernel: ACPI: button: Power Button [PWRF]
Sep 6 00:17:01.133000 audit[1066]: AVC avc: denied { confidentiality } for pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 6 00:17:01.133000 audit[1066]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c160dfe5a0 a1=338ec a2=7fe349446bc5 a3=5 items=110 ppid=1054 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:17:01.133000 audit: CWD cwd="/"
Sep 6 00:17:01.133000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=1 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=2 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=3 name=(null) inode=13830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=4 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=5 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=6 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=7 name=(null) inode=13832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=8 name=(null) inode=13832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=9 name=(null) inode=13833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=10 name=(null) inode=13832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=11 name=(null) inode=13834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=12 name=(null) inode=13832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=13 name=(null) inode=13835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=14 name=(null) inode=13832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=15 name=(null) inode=13836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=16 name=(null) inode=13832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=17 name=(null) inode=13837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=18 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=19 name=(null) inode=13838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=20 name=(null) inode=13838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=21 name=(null) inode=13839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=22 name=(null) inode=13838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=23 name=(null) inode=13840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=24 name=(null) inode=13838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=25 name=(null) inode=13841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=26 name=(null) inode=13838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=27 name=(null) inode=13842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=28 name=(null) inode=13838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=29 name=(null) inode=13843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=30 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=31 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=32 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=33 name=(null) inode=13845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=34 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=35 name=(null) inode=13846 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=36 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=37 name=(null) inode=13847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=38 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=39 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=40 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=41 name=(null) inode=13849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=42 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=43 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=44 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=45 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=46 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=47 name=(null) inode=13852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=48 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=49 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=50 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=51 name=(null) inode=13854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=52 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=53 name=(null) inode=13855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=55 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=56 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=57 name=(null) inode=13857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=58 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=59 name=(null) inode=13858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=60 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=61 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=62 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=63 name=(null) inode=13860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=64 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=65 name=(null) inode=13861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=66 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=67 name=(null) inode=13862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=68 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=69 name=(null) inode=13863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=70 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=71 name=(null) inode=13864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=72 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=73 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=74 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=75 name=(null) inode=13866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=76 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=77 name=(null) inode=13867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=78 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=79 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=80 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=81 name=(null) inode=13869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=82 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:17:01.133000 audit: PATH item=83 name=(null) inode=13870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0
cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=84 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=85 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=86 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=87 name=(null) inode=13872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=88 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=89 name=(null) inode=13873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=90 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=91 name=(null) inode=13874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=92 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 
audit: PATH item=93 name=(null) inode=13875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=94 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=95 name=(null) inode=13876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=96 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=97 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=98 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=99 name=(null) inode=13878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=100 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=101 name=(null) inode=13879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=102 name=(null) inode=13877 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=103 name=(null) inode=13880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=104 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=105 name=(null) inode=13881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=106 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=107 name=(null) inode=13882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PATH item=109 name=(null) inode=13883 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:17:01.133000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:17:01.195436 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 6 00:17:01.200419 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:17:01.205493 kernel: mousedev: PS/2 mouse 
device common for all mice Sep 6 00:17:01.316424 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:17:01.338990 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:17:01.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:01.341427 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:17:01.361235 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:17:01.388704 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:17:01.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:01.389184 systemd[1]: Reached target cryptsetup.target. Sep 6 00:17:01.390959 systemd[1]: Starting lvm2-activation.service... Sep 6 00:17:01.397419 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:17:01.426762 systemd[1]: Finished lvm2-activation.service. Sep 6 00:17:01.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:01.427322 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:17:01.429336 systemd[1]: Mounting media-configdrive.mount... Sep 6 00:17:01.429736 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:17:01.429787 systemd[1]: Reached target machines.target. Sep 6 00:17:01.431451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:17:01.445341 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 6 00:17:01.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.454413 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 6 00:17:01.457262 systemd[1]: Mounted media-configdrive.mount.
Sep 6 00:17:01.457922 systemd[1]: Reached target local-fs.target.
Sep 6 00:17:01.460502 systemd[1]: Starting ldconfig.service...
Sep 6 00:17:01.462176 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:17:01.462303 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:01.467648 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:17:01.471082 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:17:01.476946 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:17:01.483365 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl)
Sep 6 00:17:01.485303 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:17:01.496870 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:17:01.503182 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:17:01.503606 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:17:01.540455 kernel: loop0: detected capacity change from 0 to 221472
Sep 6 00:17:01.560889 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:17:01.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.566423 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:17:01.609427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:17:01.616014 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:17:01.616014 systemd-fsck[1119]: /dev/vda1: 790 files, 120761/258078 clusters
Sep 6 00:17:01.626887 kernel: kauditd_printk_skb: 205 callbacks suppressed
Sep 6 00:17:01.627021 kernel: audit: type=1130 audit(1757117821.621:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.621172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:17:01.623650 systemd[1]: Mounting boot.mount...
Sep 6 00:17:01.642476 kernel: loop1: detected capacity change from 0 to 221472
Sep 6 00:17:01.642932 systemd[1]: Mounted boot.mount.
Sep 6 00:17:01.669956 (sd-sysext)[1126]: Using extensions 'kubernetes'.
Sep 6 00:17:01.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.671429 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:17:01.674655 (sd-sysext)[1126]: Merged extensions into '/usr'.
Sep 6 00:17:01.675432 kernel: audit: type=1130 audit(1757117821.671:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.711222 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:01.714627 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:17:01.715741 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:17:01.717830 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:17:01.721017 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:17:01.727950 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:17:01.730298 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:17:01.731755 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:01.733656 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:01.751721 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:17:01.756003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:17:01.756616 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:17:01.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.759065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:17:01.759583 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:17:01.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.763861 kernel: audit: type=1130 audit(1757117821.757:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.763975 kernel: audit: type=1131 audit(1757117821.757:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.766128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:17:01.766660 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:17:01.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.768474 kernel: audit: type=1130 audit(1757117821.765:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.768516 kernel: audit: type=1131 audit(1757117821.765:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.771996 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:17:01.772109 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:17:01.773359 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:17:01.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.778466 kernel: audit: type=1130 audit(1757117821.771:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.778810 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:17:01.787383 kernel: audit: type=1131 audit(1757117821.771:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.780693 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:17:01.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.793856 kernel: audit: type=1130 audit(1757117821.773:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:01.793616 systemd[1]: Reloading.
Sep 6 00:17:01.833843 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:17:01.836262 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:17:01.838725 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:17:01.939076 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-09-06T00:17:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:17:01.939107 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-09-06T00:17:01Z" level=info msg="torcx already run"
Sep 6 00:17:01.941570 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:17:02.049749 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:17:02.049772 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:17:02.069449 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:17:02.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.136337 systemd[1]: Finished ldconfig.service.
Sep 6 00:17:02.144556 kernel: audit: type=1130 audit(1757117822.136:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.139867 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:17:02.146723 systemd[1]: Starting audit-rules.service...
Sep 6 00:17:02.149436 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:17:02.152783 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:17:02.157152 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:17:02.166942 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:17:02.169694 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:17:02.174896 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:17:02.175000 audit[1229]: SYSTEM_BOOT pid=1229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.193998 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.196641 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:17:02.203145 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:17:02.210071 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:17:02.211412 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.211590 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:02.211732 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:17:02.213903 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:17:02.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.214775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:17:02.214941 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:17:02.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.219740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:17:02.219912 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:17:02.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.226310 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.229306 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:17:02.231516 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.231660 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:02.231762 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:17:02.231839 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:17:02.232646 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:17:02.232824 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:17:02.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.233708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:17:02.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.233887 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:17:02.234547 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.239346 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.241228 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:17:02.248377 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:17:02.252355 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:17:02.256772 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:17:02.257367 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.257545 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:02.259905 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:17:02.260461 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:17:02.261845 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:17:02.262043 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:17:02.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.266831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:17:02.267022 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:17:02.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.268157 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:17:02.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.271933 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:17:02.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:17:02.282817 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:17:02.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:02.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:02.283558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:17:02.283717 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:17:02.286175 systemd[1]: Starting systemd-update-done.service... Sep 6 00:17:02.287285 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:17:02.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:02.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:02.287525 systemd[1]: Finished modprobe@loop.service. Sep 6 00:17:02.288056 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:17:02.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:17:02.299870 systemd[1]: Finished systemd-update-done.service. 
Sep 6 00:17:02.335000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:17:02.335000 audit[1262]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeffb73fa0 a2=420 a3=0 items=0 ppid=1217 pid=1262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:17:02.335000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:17:02.336383 augenrules[1262]: No rules
Sep 6 00:17:02.336665 systemd[1]: Finished audit-rules.service.
Sep 6 00:17:02.350731 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:17:02.351421 systemd[1]: Reached target time-set.target.
Sep 6 00:17:02.351956 systemd-resolved[1221]: Positive Trust Anchors:
Sep 6 00:17:02.351978 systemd-resolved[1221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:17:02.352028 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:17:02.357093 systemd-timesyncd[1228]: Contacted time server 173.255.255.133:123 (0.flatcar.pool.ntp.org).
Sep 6 00:17:02.357540 systemd-timesyncd[1228]: Initial clock synchronization to Sat 2025-09-06 00:17:02.162140 UTC.
Sep 6 00:17:02.358062 systemd-resolved[1221]: Using system hostname 'ci-3510.3.8-n-81199f28b8'.
Sep 6 00:17:02.360020 systemd[1]: Started systemd-resolved.service.
Sep 6 00:17:02.360448 systemd[1]: Reached target network.target.
Sep 6 00:17:02.360726 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:17:02.361016 systemd[1]: Reached target sysinit.target.
Sep 6 00:17:02.361378 systemd[1]: Started motdgen.path.
Sep 6 00:17:02.361699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:17:02.362199 systemd[1]: Started logrotate.timer.
Sep 6 00:17:02.362565 systemd[1]: Started mdadm.timer.
Sep 6 00:17:02.362837 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:17:02.363138 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:17:02.363179 systemd[1]: Reached target paths.target.
Sep 6 00:17:02.363454 systemd[1]: Reached target timers.target.
Sep 6 00:17:02.364050 systemd[1]: Listening on dbus.socket.
Sep 6 00:17:02.365742 systemd[1]: Starting docker.socket...
Sep 6 00:17:02.367580 systemd[1]: Listening on sshd.socket.
Sep 6 00:17:02.367984 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:02.368342 systemd[1]: Listening on docker.socket.
Sep 6 00:17:02.368656 systemd[1]: Reached target sockets.target.
Sep 6 00:17:02.368927 systemd[1]: Reached target basic.target.
Sep 6 00:17:02.369349 systemd[1]: System is tainted: cgroupsv1
Sep 6 00:17:02.369395 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.369419 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:17:02.370634 systemd[1]: Starting containerd.service...
Sep 6 00:17:02.372231 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:17:02.374366 systemd[1]: Starting dbus.service...
Sep 6 00:17:02.380254 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:17:02.382759 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:17:02.383671 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:17:02.386354 systemd[1]: Starting motdgen.service...
Sep 6 00:17:02.387975 jq[1275]: false
Sep 6 00:17:02.394478 systemd[1]: Starting prepare-helm.service...
Sep 6 00:17:02.400224 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:17:02.403887 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:17:02.411225 systemd[1]: Starting systemd-logind.service...
Sep 6 00:17:02.412523 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:17:02.412647 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:17:02.415893 systemd[1]: Starting update-engine.service...
Sep 6 00:17:02.429553 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:17:02.431129 dbus-daemon[1273]: [system] SELinux support is enabled
Sep 6 00:17:02.432546 systemd[1]: Started dbus.service.
Sep 6 00:17:02.438994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:17:02.439285 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:17:02.440711 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:17:02.441026 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:17:02.444135 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:17:02.444203 systemd[1]: Reached target system-config.target.
Sep 6 00:17:02.444644 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:17:02.444670 systemd[1]: Reached target user-config.target.
Sep 6 00:17:02.450683 jq[1293]: true
Sep 6 00:17:02.457165 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:02.457204 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:17:02.471812 tar[1295]: linux-amd64/helm
Sep 6 00:17:02.473532 extend-filesystems[1276]: Found loop1
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda1
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda2
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda3
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found usr
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda4
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda6
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda7
Sep 6 00:17:02.474267 extend-filesystems[1276]: Found vda9
Sep 6 00:17:02.474267 extend-filesystems[1276]: Checking size of /dev/vda9
Sep 6 00:17:02.493615 jq[1300]: true
Sep 6 00:17:02.510042 extend-filesystems[1276]: Resized partition /dev/vda9
Sep 6 00:17:02.533570 extend-filesystems[1318]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:17:02.541411 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 6 00:17:02.544880 update_engine[1288]: I0906 00:17:02.544443 1288 main.cc:92] Flatcar Update Engine starting
Sep 6 00:17:02.548160 systemd[1]: Started update-engine.service.
Sep 6 00:17:02.551255 update_engine[1288]: I0906 00:17:02.548199 1288 update_check_scheduler.cc:74] Next update check in 10m7s
Sep 6 00:17:02.550334 systemd[1]: Started locksmithd.service.
Sep 6 00:17:02.561058 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:17:02.561579 systemd[1]: Finished motdgen.service.
Sep 6 00:17:02.591645 bash[1333]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:17:02.592740 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:17:02.620108 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 6 00:17:02.632995 extend-filesystems[1318]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:17:02.632995 extend-filesystems[1318]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 6 00:17:02.632995 extend-filesystems[1318]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 6 00:17:02.635584 extend-filesystems[1276]: Resized filesystem in /dev/vda9
Sep 6 00:17:02.635584 extend-filesystems[1276]: Found vdb
Sep 6 00:17:02.633886 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:17:02.634141 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:17:02.670144 env[1298]: time="2025-09-06T00:17:02.670073638Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:17:02.671997 coreos-metadata[1272]: Sep 06 00:17:02.671 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:17:02.683669 systemd-networkd[1055]: eth1: Gained IPv6LL
Sep 6 00:17:02.686798 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:17:02.687285 systemd[1]: Reached target network-online.target.
Sep 6 00:17:02.689439 systemd[1]: Starting kubelet.service...
Sep 6 00:17:02.700173 systemd-logind[1285]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 6 00:17:02.701220 systemd-logind[1285]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 6 00:17:02.704841 systemd-logind[1285]: New seat seat0.
Sep 6 00:17:02.706462 coreos-metadata[1272]: Sep 06 00:17:02.706 INFO Fetch successful
Sep 6 00:17:02.711110 unknown[1272]: wrote ssh authorized keys file for user: core
Sep 6 00:17:02.719108 update-ssh-keys[1346]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:17:02.719959 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 6 00:17:02.722495 systemd[1]: Started systemd-logind.service.
Sep 6 00:17:02.740809 env[1298]: time="2025-09-06T00:17:02.740748868Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:17:02.741114 env[1298]: time="2025-09-06T00:17:02.741092019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:17:02.749196 env[1298]: time="2025-09-06T00:17:02.749135873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:17:02.749374 env[1298]: time="2025-09-06T00:17:02.749356125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:17:02.749782 env[1298]: time="2025-09-06T00:17:02.749753643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:17:02.749870 env[1298]: time="2025-09-06T00:17:02.749854540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:17:02.749936 env[1298]: time="2025-09-06T00:17:02.749920477Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:17:02.749991 env[1298]: time="2025-09-06T00:17:02.749977943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:17:02.750141 env[1298]: time="2025-09-06T00:17:02.750126566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:17:02.750486 env[1298]: time="2025-09-06T00:17:02.750463815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:17:02.750783 env[1298]: time="2025-09-06T00:17:02.750760382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:17:02.750856 env[1298]: time="2025-09-06T00:17:02.750841399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:17:02.750964 env[1298]: time="2025-09-06T00:17:02.750947520Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:17:02.751042 env[1298]: time="2025-09-06T00:17:02.751028227Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753407860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753477274Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753492116Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753569705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753586998Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753610444Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753623033Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753640951Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753654379Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753666812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753688544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753703455Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753841474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:17:02.756641 env[1298]: time="2025-09-06T00:17:02.753936424Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754446353Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754480582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754505314Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754553647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754566753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754590189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754602436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754614712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754628727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754659655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754675473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754689126Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754835958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754850332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757059 env[1298]: time="2025-09-06T00:17:02.754862804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757469 env[1298]: time="2025-09-06T00:17:02.754886848Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:17:02.757469 env[1298]: time="2025-09-06T00:17:02.754903560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:17:02.757469 env[1298]: time="2025-09-06T00:17:02.754914087Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:17:02.757469 env[1298]: time="2025-09-06T00:17:02.754934564Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:17:02.757469 env[1298]: time="2025-09-06T00:17:02.754983583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:17:02.757591 env[1298]: time="2025-09-06T00:17:02.755247218Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:17:02.757591 env[1298]: time="2025-09-06T00:17:02.755318755Z" level=info msg="Connect containerd service"
Sep 6 00:17:02.757591 env[1298]: time="2025-09-06T00:17:02.755372566Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758054607Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758179874Z" level=info msg="Start subscribing containerd event"
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758221136Z" level=info msg="Start recovering state"
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758294186Z" level=info msg="Start event monitor"
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758305307Z" level=info msg="Start snapshots syncer"
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758323266Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:17:02.760320 env[1298]: time="2025-09-06T00:17:02.758332345Z" level=info msg="Start streaming server"
Sep 6 00:17:02.762903 env[1298]: time="2025-09-06T00:17:02.762851251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:17:02.763135 env[1298]: time="2025-09-06T00:17:02.763118378Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:17:02.763467 systemd[1]: Started containerd.service.
Sep 6 00:17:02.770117 env[1298]: time="2025-09-06T00:17:02.770071808Z" level=info msg="containerd successfully booted in 0.101521s"
Sep 6 00:17:02.812284 systemd-networkd[1055]: eth0: Gained IPv6LL
Sep 6 00:17:03.125481 sshd_keygen[1309]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 00:17:03.204719 systemd[1]: Finished sshd-keygen.service.
Sep 6 00:17:03.206972 systemd[1]: Starting issuegen.service...
Sep 6 00:17:03.222750 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 00:17:03.222991 systemd[1]: Finished issuegen.service.
Sep 6 00:17:03.225254 systemd[1]: Starting systemd-user-sessions.service...
Sep 6 00:17:03.249143 systemd[1]: Finished systemd-user-sessions.service.
Sep 6 00:17:03.251540 systemd[1]: Started getty@tty1.service.
Sep 6 00:17:03.258379 systemd[1]: Started serial-getty@ttyS0.service.
Sep 6 00:17:03.265957 systemd[1]: Reached target getty.target.
Sep 6 00:17:03.290379 locksmithd[1332]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 00:17:03.423268 tar[1295]: linux-amd64/LICENSE
Sep 6 00:17:03.423268 tar[1295]: linux-amd64/README.md
Sep 6 00:17:03.427816 systemd[1]: Finished prepare-helm.service.
Sep 6 00:17:03.889025 systemd[1]: Started kubelet.service.
Sep 6 00:17:03.889994 systemd[1]: Reached target multi-user.target.
Sep 6 00:17:03.892642 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 6 00:17:03.908020 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 6 00:17:03.908278 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 6 00:17:03.908849 systemd[1]: Startup finished in 6.093s (kernel) + 6.914s (userspace) = 13.008s.
Sep 6 00:17:04.506942 kubelet[1382]: E0906 00:17:04.506850 1382 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:17:04.509354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:17:04.509605 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:17:06.223910 systemd[1]: Created slice system-sshd.slice.
Sep 6 00:17:06.225651 systemd[1]: Started sshd@0-143.198.146.98:22-147.75.109.163:53030.service.
Sep 6 00:17:06.283949 sshd[1391]: Accepted publickey for core from 147.75.109.163 port 53030 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:06.286437 sshd[1391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:06.297095 systemd[1]: Created slice user-500.slice.
Sep 6 00:17:06.298378 systemd[1]: Starting user-runtime-dir@500.service...
Sep 6 00:17:06.302131 systemd-logind[1285]: New session 1 of user core.
Sep 6 00:17:06.309493 systemd[1]: Finished user-runtime-dir@500.service.
Sep 6 00:17:06.311064 systemd[1]: Starting user@500.service...
Sep 6 00:17:06.318085 (systemd)[1396]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:06.407139 systemd[1396]: Queued start job for default target default.target.
Sep 6 00:17:06.408199 systemd[1396]: Reached target paths.target.
Sep 6 00:17:06.408338 systemd[1396]: Reached target sockets.target.
Sep 6 00:17:06.408467 systemd[1396]: Reached target timers.target.
Sep 6 00:17:06.408559 systemd[1396]: Reached target basic.target.
Sep 6 00:17:06.408765 systemd[1]: Started user@500.service.
Sep 6 00:17:06.409769 systemd[1]: Started session-1.scope.
Sep 6 00:17:06.410476 systemd[1396]: Reached target default.target.
Sep 6 00:17:06.410764 systemd[1396]: Startup finished in 84ms.
Sep 6 00:17:06.469348 systemd[1]: Started sshd@1-143.198.146.98:22-147.75.109.163:53044.service.
Sep 6 00:17:06.522738 sshd[1405]: Accepted publickey for core from 147.75.109.163 port 53044 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:06.524749 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:06.530228 systemd[1]: Started session-2.scope.
Sep 6 00:17:06.531415 systemd-logind[1285]: New session 2 of user core.
Sep 6 00:17:06.593175 sshd[1405]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:06.596951 systemd[1]: sshd@1-143.198.146.98:22-147.75.109.163:53044.service: Deactivated successfully.
Sep 6 00:17:06.598971 systemd[1]: Started sshd@2-143.198.146.98:22-147.75.109.163:53050.service.
Sep 6 00:17:06.599494 systemd[1]: session-2.scope: Deactivated successfully.
Sep 6 00:17:06.600817 systemd-logind[1285]: Session 2 logged out. Waiting for processes to exit.
Sep 6 00:17:06.603600 systemd-logind[1285]: Removed session 2.
Sep 6 00:17:06.652481 sshd[1412]: Accepted publickey for core from 147.75.109.163 port 53050 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:06.654634 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:06.659691 systemd[1]: Started session-3.scope.
Sep 6 00:17:06.660405 systemd-logind[1285]: New session 3 of user core.
Sep 6 00:17:06.718082 sshd[1412]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:06.730399 systemd[1]: Started sshd@3-143.198.146.98:22-147.75.109.163:53060.service.
Sep 6 00:17:06.731194 systemd[1]: sshd@2-143.198.146.98:22-147.75.109.163:53050.service: Deactivated successfully.
Sep 6 00:17:06.732251 systemd[1]: session-3.scope: Deactivated successfully.
Sep 6 00:17:06.735295 systemd-logind[1285]: Session 3 logged out. Waiting for processes to exit.
Sep 6 00:17:06.741247 systemd-logind[1285]: Removed session 3.
Sep 6 00:17:06.785842 sshd[1417]: Accepted publickey for core from 147.75.109.163 port 53060 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:06.787166 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:06.794199 systemd[1]: Started session-4.scope.
Sep 6 00:17:06.795633 systemd-logind[1285]: New session 4 of user core.
Sep 6 00:17:06.862050 sshd[1417]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:06.867502 systemd[1]: Started sshd@4-143.198.146.98:22-147.75.109.163:53066.service.
Sep 6 00:17:06.873002 systemd[1]: sshd@3-143.198.146.98:22-147.75.109.163:53060.service: Deactivated successfully.
Sep 6 00:17:06.874166 systemd[1]: session-4.scope: Deactivated successfully.
Sep 6 00:17:06.875116 systemd-logind[1285]: Session 4 logged out. Waiting for processes to exit.
Sep 6 00:17:06.876306 systemd-logind[1285]: Removed session 4.
Sep 6 00:17:06.924874 sshd[1424]: Accepted publickey for core from 147.75.109.163 port 53066 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:06.927145 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:06.932854 systemd-logind[1285]: New session 5 of user core.
Sep 6 00:17:06.934082 systemd[1]: Started session-5.scope.
Sep 6 00:17:07.002890 sudo[1430]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 6 00:17:07.003550 sudo[1430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 6 00:17:07.033107 systemd[1]: Starting docker.service...
Sep 6 00:17:07.085062 env[1440]: time="2025-09-06T00:17:07.084273467Z" level=info msg="Starting up"
Sep 6 00:17:07.086927 env[1440]: time="2025-09-06T00:17:07.086881234Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 6 00:17:07.086927 env[1440]: time="2025-09-06T00:17:07.086915260Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 6 00:17:07.087063 env[1440]: time="2025-09-06T00:17:07.086937305Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Sep 6 00:17:07.087063 env[1440]: time="2025-09-06T00:17:07.086948567Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 6 00:17:07.089426 env[1440]: time="2025-09-06T00:17:07.089368513Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 6 00:17:07.089589 env[1440]: time="2025-09-06T00:17:07.089566213Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 6 00:17:07.089985 env[1440]: time="2025-09-06T00:17:07.089960034Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Sep 6 00:17:07.090081 env[1440]: time="2025-09-06T00:17:07.090066911Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 6 00:17:07.097277 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2304496580-merged.mount: Deactivated successfully.
Sep 6 00:17:07.201161 env[1440]: time="2025-09-06T00:17:07.201120193Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 6 00:17:07.201413 env[1440]: time="2025-09-06T00:17:07.201366676Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 6 00:17:07.201677 env[1440]: time="2025-09-06T00:17:07.201656919Z" level=info msg="Loading containers: start."
Sep 6 00:17:07.363414 kernel: Initializing XFRM netlink socket Sep 6 00:17:07.402297 env[1440]: time="2025-09-06T00:17:07.402261116Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:17:07.483214 systemd-networkd[1055]: docker0: Link UP Sep 6 00:17:07.501016 env[1440]: time="2025-09-06T00:17:07.500950072Z" level=info msg="Loading containers: done." Sep 6 00:17:07.521608 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1151749337-merged.mount: Deactivated successfully. Sep 6 00:17:07.523247 env[1440]: time="2025-09-06T00:17:07.523184049Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:17:07.523505 env[1440]: time="2025-09-06T00:17:07.523467291Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:17:07.523635 env[1440]: time="2025-09-06T00:17:07.523610733Z" level=info msg="Daemon has completed initialization" Sep 6 00:17:07.538734 systemd[1]: Started docker.service. Sep 6 00:17:07.548047 env[1440]: time="2025-09-06T00:17:07.547938525Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:17:07.575733 systemd[1]: Starting coreos-metadata.service... Sep 6 00:17:07.650872 coreos-metadata[1558]: Sep 06 00:17:07.650 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:17:07.662776 coreos-metadata[1558]: Sep 06 00:17:07.662 INFO Fetch successful Sep 6 00:17:07.681160 systemd[1]: Finished coreos-metadata.service. Sep 6 00:17:08.496437 env[1298]: time="2025-09-06T00:17:08.496366724Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:17:09.049409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854718584.mount: Deactivated successfully. 
Sep 6 00:17:10.384545 env[1298]: time="2025-09-06T00:17:10.384492630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:10.386800 env[1298]: time="2025-09-06T00:17:10.386763780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:10.392279 env[1298]: time="2025-09-06T00:17:10.392216659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:10.393943 env[1298]: time="2025-09-06T00:17:10.393910334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:10.396356 env[1298]: time="2025-09-06T00:17:10.396312925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:17:10.397341 env[1298]: time="2025-09-06T00:17:10.397306694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:17:12.066262 env[1298]: time="2025-09-06T00:17:12.066189384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:12.068370 env[1298]: time="2025-09-06T00:17:12.068319893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 00:17:12.070062 env[1298]: time="2025-09-06T00:17:12.069977684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:12.072098 env[1298]: time="2025-09-06T00:17:12.072061586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:12.073355 env[1298]: time="2025-09-06T00:17:12.073318537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:17:12.073975 env[1298]: time="2025-09-06T00:17:12.073943019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:17:13.429056 env[1298]: time="2025-09-06T00:17:13.428983602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:13.430482 env[1298]: time="2025-09-06T00:17:13.430422684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:13.432133 env[1298]: time="2025-09-06T00:17:13.432102614Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:13.433924 env[1298]: time="2025-09-06T00:17:13.433886767Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:13.435711 env[1298]: time="2025-09-06T00:17:13.435674898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 00:17:13.436477 env[1298]: time="2025-09-06T00:17:13.436446913Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:17:14.606612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount719609579.mount: Deactivated successfully. Sep 6 00:17:14.607870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:17:14.608024 systemd[1]: Stopped kubelet.service. Sep 6 00:17:14.609971 systemd[1]: Starting kubelet.service... Sep 6 00:17:14.733254 systemd[1]: Started kubelet.service. Sep 6 00:17:14.818549 kubelet[1585]: E0906 00:17:14.818498 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:17:14.821317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:17:14.821525 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:17:15.342830 env[1298]: time="2025-09-06T00:17:15.342768956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:15.345107 env[1298]: time="2025-09-06T00:17:15.345022944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:15.345725 env[1298]: time="2025-09-06T00:17:15.345698766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:15.347433 env[1298]: time="2025-09-06T00:17:15.347363444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:15.347920 env[1298]: time="2025-09-06T00:17:15.347893854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 00:17:15.348861 env[1298]: time="2025-09-06T00:17:15.348834277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:17:15.859576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033887840.mount: Deactivated successfully. 
Sep 6 00:17:16.825338 env[1298]: time="2025-09-06T00:17:16.825265846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:16.826353 env[1298]: time="2025-09-06T00:17:16.826319172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:16.828271 env[1298]: time="2025-09-06T00:17:16.828212641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:16.830855 env[1298]: time="2025-09-06T00:17:16.830825781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:16.831517 env[1298]: time="2025-09-06T00:17:16.831487121Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:17:16.832738 env[1298]: time="2025-09-06T00:17:16.832708326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:17:17.388609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679686662.mount: Deactivated successfully. 
Sep 6 00:17:17.392478 env[1298]: time="2025-09-06T00:17:17.392427807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:17.394476 env[1298]: time="2025-09-06T00:17:17.394427063Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:17.395761 env[1298]: time="2025-09-06T00:17:17.395732762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:17.396963 env[1298]: time="2025-09-06T00:17:17.396935941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:17.397941 env[1298]: time="2025-09-06T00:17:17.397910239Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:17:17.398635 env[1298]: time="2025-09-06T00:17:17.398604951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:17:18.010332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796578839.mount: Deactivated successfully. 
Sep 6 00:17:20.279808 env[1298]: time="2025-09-06T00:17:20.279754163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:20.281641 env[1298]: time="2025-09-06T00:17:20.281605584Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:20.283784 env[1298]: time="2025-09-06T00:17:20.283750582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:20.286007 env[1298]: time="2025-09-06T00:17:20.285978702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:20.286790 env[1298]: time="2025-09-06T00:17:20.286763156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 00:17:23.320759 systemd[1]: Stopped kubelet.service. Sep 6 00:17:23.323096 systemd[1]: Starting kubelet.service... Sep 6 00:17:23.367847 systemd[1]: Reloading. 
Sep 6 00:17:23.472701 /usr/lib/systemd/system-generators/torcx-generator[1639]: time="2025-09-06T00:17:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:17:23.474459 /usr/lib/systemd/system-generators/torcx-generator[1639]: time="2025-09-06T00:17:23Z" level=info msg="torcx already run" Sep 6 00:17:23.596160 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:17:23.596181 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:17:23.616272 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:17:23.710746 systemd[1]: Started kubelet.service. Sep 6 00:17:23.714498 systemd[1]: Stopping kubelet.service... Sep 6 00:17:23.715102 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:17:23.715644 systemd[1]: Stopped kubelet.service. Sep 6 00:17:23.720560 systemd[1]: Starting kubelet.service... Sep 6 00:17:23.861637 systemd[1]: Started kubelet.service. Sep 6 00:17:23.918351 kubelet[1706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:17:23.918787 kubelet[1706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:17:23.918847 kubelet[1706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:17:23.919015 kubelet[1706]: I0906 00:17:23.918979 1706 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:17:24.407410 kubelet[1706]: I0906 00:17:24.407341 1706 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:17:24.407622 kubelet[1706]: I0906 00:17:24.407605 1706 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:17:24.408004 kubelet[1706]: I0906 00:17:24.407987 1706 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:17:24.432026 kubelet[1706]: I0906 00:17:24.431991 1706 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:17:24.433081 kubelet[1706]: E0906 00:17:24.433041 1706 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.146.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:24.439565 kubelet[1706]: E0906 00:17:24.439497 1706 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:17:24.439565 kubelet[1706]: I0906 00:17:24.439555 1706 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 00:17:24.445531 kubelet[1706]: I0906 00:17:24.445494 1706 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:17:24.445872 kubelet[1706]: I0906 00:17:24.445848 1706 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:17:24.446041 kubelet[1706]: I0906 00:17:24.446005 1706 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:17:24.446239 kubelet[1706]: I0906 00:17:24.446043 1706 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-81199f28b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMem
oryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:17:24.446335 kubelet[1706]: I0906 00:17:24.446258 1706 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:17:24.446335 kubelet[1706]: I0906 00:17:24.446269 1706 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:17:24.446411 kubelet[1706]: I0906 00:17:24.446376 1706 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:17:24.450047 kubelet[1706]: I0906 00:17:24.450014 1706 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:17:24.450047 kubelet[1706]: I0906 00:17:24.450050 1706 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:17:24.450219 kubelet[1706]: I0906 00:17:24.450086 1706 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:17:24.450219 kubelet[1706]: I0906 00:17:24.450108 1706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:17:24.455583 kubelet[1706]: W0906 00:17:24.455530 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.146.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-81199f28b8&limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:24.455784 kubelet[1706]: E0906 00:17:24.455764 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.146.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-81199f28b8&limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:24.455947 kubelet[1706]: I0906 00:17:24.455928 1706 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" 
version="1.6.16" apiVersion="v1" Sep 6 00:17:24.456458 kubelet[1706]: I0906 00:17:24.456438 1706 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:17:24.456642 kubelet[1706]: W0906 00:17:24.456595 1706 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:17:24.465095 kubelet[1706]: I0906 00:17:24.465052 1706 server.go:1274] "Started kubelet" Sep 6 00:17:24.470137 kubelet[1706]: W0906 00:17:24.469773 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.146.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:24.470137 kubelet[1706]: E0906 00:17:24.469834 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.146.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:24.470137 kubelet[1706]: I0906 00:17:24.469869 1706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:17:24.471310 kubelet[1706]: I0906 00:17:24.470829 1706 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:17:24.477123 kubelet[1706]: I0906 00:17:24.477050 1706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:17:24.477493 kubelet[1706]: I0906 00:17:24.477475 1706 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:17:24.477707 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 00:17:24.479044 kubelet[1706]: I0906 00:17:24.478901 1706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:17:24.479304 kubelet[1706]: E0906 00:17:24.477856 1706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.146.98:6443/api/v1/namespaces/default/events\": dial tcp 143.198.146.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-81199f28b8.1862895eb3d6282d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-81199f28b8,UID:ci-3510.3.8-n-81199f28b8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-81199f28b8,},FirstTimestamp:2025-09-06 00:17:24.465002541 +0000 UTC m=+0.589742031,LastTimestamp:2025-09-06 00:17:24.465002541 +0000 UTC m=+0.589742031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-81199f28b8,}" Sep 6 00:17:24.480977 kubelet[1706]: I0906 00:17:24.480894 1706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:17:24.483517 kubelet[1706]: I0906 00:17:24.483498 1706 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:17:24.483793 kubelet[1706]: I0906 00:17:24.483774 1706 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:17:24.483916 kubelet[1706]: I0906 00:17:24.483906 1706 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:17:24.484369 kubelet[1706]: W0906 00:17:24.484332 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.146.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 
6 00:17:24.484516 kubelet[1706]: E0906 00:17:24.484495 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.146.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:24.484775 kubelet[1706]: I0906 00:17:24.484750 1706 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:17:24.484959 kubelet[1706]: I0906 00:17:24.484940 1706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:17:24.486152 kubelet[1706]: E0906 00:17:24.486133 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:24.486447 kubelet[1706]: E0906 00:17:24.486424 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-81199f28b8?timeout=10s\": dial tcp 143.198.146.98:6443: connect: connection refused" interval="200ms" Sep 6 00:17:24.486613 kubelet[1706]: E0906 00:17:24.486598 1706 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:17:24.486914 kubelet[1706]: I0906 00:17:24.486892 1706 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:17:24.505804 kubelet[1706]: I0906 00:17:24.505675 1706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:17:24.516522 kubelet[1706]: I0906 00:17:24.516487 1706 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:17:24.516838 kubelet[1706]: I0906 00:17:24.516826 1706 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:17:24.516993 kubelet[1706]: I0906 00:17:24.516979 1706 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:17:24.517211 kubelet[1706]: E0906 00:17:24.517190 1706 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:17:24.518038 kubelet[1706]: I0906 00:17:24.517375 1706 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:17:24.518209 kubelet[1706]: I0906 00:17:24.518187 1706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:17:24.518326 kubelet[1706]: I0906 00:17:24.518312 1706 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:17:24.519320 kubelet[1706]: W0906 00:17:24.519259 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.146.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:24.519455 kubelet[1706]: E0906 00:17:24.519375 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.146.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:24.521583 kubelet[1706]: I0906 00:17:24.521562 1706 policy_none.go:49] "None policy: Start" Sep 6 00:17:24.522514 kubelet[1706]: I0906 00:17:24.522494 1706 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:17:24.522622 kubelet[1706]: I0906 00:17:24.522611 1706 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:17:24.530772 kubelet[1706]: I0906 00:17:24.530732 1706 manager.go:513] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:17:24.531368 kubelet[1706]: I0906 00:17:24.531349 1706 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:17:24.531610 kubelet[1706]: I0906 00:17:24.531574 1706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:17:24.532002 kubelet[1706]: I0906 00:17:24.531989 1706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:17:24.533009 kubelet[1706]: E0906 00:17:24.532990 1706 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:24.633516 kubelet[1706]: I0906 00:17:24.633471 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.634114 kubelet[1706]: E0906 00:17:24.634081 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.146.98:6443/api/v1/nodes\": dial tcp 143.198.146.98:6443: connect: connection refused" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.685590 kubelet[1706]: I0906 00:17:24.685460 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3abc66df651327801cc809cac424384d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" (UID: \"3abc66df651327801cc809cac424384d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.685590 kubelet[1706]: I0906 00:17:24.685500 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3abc66df651327801cc809cac424384d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" (UID: \"3abc66df651327801cc809cac424384d\") " 
pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.685590 kubelet[1706]: I0906 00:17:24.685522 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.685590 kubelet[1706]: I0906 00:17:24.685547 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.685590 kubelet[1706]: I0906 00:17:24.685564 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37532bfb0cf529afe053fc6094a4238e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-81199f28b8\" (UID: \"37532bfb0cf529afe053fc6094a4238e\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.687419 kubelet[1706]: I0906 00:17:24.687367 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3abc66df651327801cc809cac424384d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" (UID: \"3abc66df651327801cc809cac424384d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.687419 kubelet[1706]: I0906 00:17:24.687420 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.687622 kubelet[1706]: I0906 00:17:24.687440 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.687622 kubelet[1706]: I0906 00:17:24.687456 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.688327 kubelet[1706]: E0906 00:17:24.688249 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-81199f28b8?timeout=10s\": dial tcp 143.198.146.98:6443: connect: connection refused" interval="400ms" Sep 6 00:17:24.836110 kubelet[1706]: I0906 00:17:24.836076 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.836554 kubelet[1706]: E0906 00:17:24.836519 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.146.98:6443/api/v1/nodes\": dial tcp 143.198.146.98:6443: connect: connection refused" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:24.927689 kubelet[1706]: E0906 00:17:24.927627 1706 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:24.928828 env[1298]: time="2025-09-06T00:17:24.928761601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-81199f28b8,Uid:37532bfb0cf529afe053fc6094a4238e,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:24.929432 kubelet[1706]: E0906 00:17:24.929370 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:24.930456 env[1298]: time="2025-09-06T00:17:24.930286700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-81199f28b8,Uid:049e06043a694990699b393a9a52b1dd,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:24.933265 kubelet[1706]: E0906 00:17:24.933242 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:24.934029 env[1298]: time="2025-09-06T00:17:24.933894655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-81199f28b8,Uid:3abc66df651327801cc809cac424384d,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:25.089886 kubelet[1706]: E0906 00:17:25.089760 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.146.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-81199f28b8?timeout=10s\": dial tcp 143.198.146.98:6443: connect: connection refused" interval="800ms" Sep 6 00:17:25.238900 kubelet[1706]: I0906 00:17:25.238859 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:25.239540 kubelet[1706]: E0906 00:17:25.239512 1706 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://143.198.146.98:6443/api/v1/nodes\": dial tcp 143.198.146.98:6443: connect: connection refused" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:25.444055 kubelet[1706]: W0906 00:17:25.443877 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.146.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:25.444055 kubelet[1706]: E0906 00:17:25.443958 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.146.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:25.537452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187351627.mount: Deactivated successfully. Sep 6 00:17:25.542736 env[1298]: time="2025-09-06T00:17:25.542673398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.547347 env[1298]: time="2025-09-06T00:17:25.547294978Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.548440 env[1298]: time="2025-09-06T00:17:25.548371614Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.549737 env[1298]: time="2025-09-06T00:17:25.549696809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.550847 env[1298]: 
time="2025-09-06T00:17:25.550800996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.554798 env[1298]: time="2025-09-06T00:17:25.554757460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.561668 env[1298]: time="2025-09-06T00:17:25.561618184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.563583 env[1298]: time="2025-09-06T00:17:25.563530812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.564318 env[1298]: time="2025-09-06T00:17:25.564283612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.565165 env[1298]: time="2025-09-06T00:17:25.565134026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.566020 env[1298]: time="2025-09-06T00:17:25.565989015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.566599 env[1298]: time="2025-09-06T00:17:25.566568549Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:25.591638 env[1298]: time="2025-09-06T00:17:25.591539192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:25.591900 env[1298]: time="2025-09-06T00:17:25.591605915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:25.591900 env[1298]: time="2025-09-06T00:17:25.591617630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:25.591900 env[1298]: time="2025-09-06T00:17:25.591798392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f10509feec621702215eba37a40388247f3368dbc1b6414263c4143b3aff1457 pid=1745 runtime=io.containerd.runc.v2 Sep 6 00:17:25.600507 env[1298]: time="2025-09-06T00:17:25.600415779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:25.600507 env[1298]: time="2025-09-06T00:17:25.600467236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:25.600710 env[1298]: time="2025-09-06T00:17:25.600497064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:25.600768 env[1298]: time="2025-09-06T00:17:25.600715986Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de1ade25b61388f536b162370ef3588cfadb487dc7f4bbf129dcdbdad9c052de pid=1772 runtime=io.containerd.runc.v2 Sep 6 00:17:25.605782 env[1298]: time="2025-09-06T00:17:25.605683927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:25.605990 env[1298]: time="2025-09-06T00:17:25.605964517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:25.606091 env[1298]: time="2025-09-06T00:17:25.606069349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:25.606413 env[1298]: time="2025-09-06T00:17:25.606355670Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef391f5d0e4faa38e6de8d84031ad34d255d41433cf0e6b57af3abbeced39526 pid=1762 runtime=io.containerd.runc.v2 Sep 6 00:17:25.698949 env[1298]: time="2025-09-06T00:17:25.698831717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-81199f28b8,Uid:3abc66df651327801cc809cac424384d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f10509feec621702215eba37a40388247f3368dbc1b6414263c4143b3aff1457\"" Sep 6 00:17:25.700777 kubelet[1706]: E0906 00:17:25.700543 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:25.707856 env[1298]: time="2025-09-06T00:17:25.707810421Z" level=info msg="CreateContainer within sandbox 
\"f10509feec621702215eba37a40388247f3368dbc1b6414263c4143b3aff1457\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:17:25.716497 env[1298]: time="2025-09-06T00:17:25.716453838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-81199f28b8,Uid:049e06043a694990699b393a9a52b1dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"de1ade25b61388f536b162370ef3588cfadb487dc7f4bbf129dcdbdad9c052de\"" Sep 6 00:17:25.719305 kubelet[1706]: E0906 00:17:25.719106 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:25.722471 env[1298]: time="2025-09-06T00:17:25.722418997Z" level=info msg="CreateContainer within sandbox \"de1ade25b61388f536b162370ef3588cfadb487dc7f4bbf129dcdbdad9c052de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:17:25.745108 env[1298]: time="2025-09-06T00:17:25.745057370Z" level=info msg="CreateContainer within sandbox \"de1ade25b61388f536b162370ef3588cfadb487dc7f4bbf129dcdbdad9c052de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ada78edc1f0fbeb7248b30fc0d62f45b64bbcecb79f4553ac5a034f7adceda56\"" Sep 6 00:17:25.746206 env[1298]: time="2025-09-06T00:17:25.746174190Z" level=info msg="StartContainer for \"ada78edc1f0fbeb7248b30fc0d62f45b64bbcecb79f4553ac5a034f7adceda56\"" Sep 6 00:17:25.748410 env[1298]: time="2025-09-06T00:17:25.748347436Z" level=info msg="CreateContainer within sandbox \"f10509feec621702215eba37a40388247f3368dbc1b6414263c4143b3aff1457\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24fc629fca749c084e2dc61bf7f45c41de8a897b1d5062c526b4c5db1e2ada14\"" Sep 6 00:17:25.748706 env[1298]: time="2025-09-06T00:17:25.748412687Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-81199f28b8,Uid:37532bfb0cf529afe053fc6094a4238e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef391f5d0e4faa38e6de8d84031ad34d255d41433cf0e6b57af3abbeced39526\"" Sep 6 00:17:25.749356 env[1298]: time="2025-09-06T00:17:25.749331055Z" level=info msg="StartContainer for \"24fc629fca749c084e2dc61bf7f45c41de8a897b1d5062c526b4c5db1e2ada14\"" Sep 6 00:17:25.749907 kubelet[1706]: E0906 00:17:25.749737 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:25.751885 env[1298]: time="2025-09-06T00:17:25.751844458Z" level=info msg="CreateContainer within sandbox \"ef391f5d0e4faa38e6de8d84031ad34d255d41433cf0e6b57af3abbeced39526\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:17:25.763857 env[1298]: time="2025-09-06T00:17:25.763789319Z" level=info msg="CreateContainer within sandbox \"ef391f5d0e4faa38e6de8d84031ad34d255d41433cf0e6b57af3abbeced39526\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d38df8178c6ad9dfa11b1fce9e2d5ffe1de509847e921e2bcd892515a7f38a85\"" Sep 6 00:17:25.764589 env[1298]: time="2025-09-06T00:17:25.764551722Z" level=info msg="StartContainer for \"d38df8178c6ad9dfa11b1fce9e2d5ffe1de509847e921e2bcd892515a7f38a85\"" Sep 6 00:17:25.796431 kubelet[1706]: W0906 00:17:25.795655 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.146.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-81199f28b8&limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:25.796431 kubelet[1706]: E0906 00:17:25.795744 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://143.198.146.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-81199f28b8&limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:25.827871 kubelet[1706]: W0906 00:17:25.827720 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.146.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:25.827871 kubelet[1706]: E0906 00:17:25.827817 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.146.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:25.872801 kubelet[1706]: W0906 00:17:25.872698 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.146.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.146.98:6443: connect: connection refused Sep 6 00:17:25.872801 kubelet[1706]: E0906 00:17:25.872760 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.146.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.146.98:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:17:25.893758 env[1298]: time="2025-09-06T00:17:25.891734109Z" level=info msg="StartContainer for \"24fc629fca749c084e2dc61bf7f45c41de8a897b1d5062c526b4c5db1e2ada14\" returns successfully" Sep 6 00:17:25.893913 kubelet[1706]: E0906 00:17:25.893277 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://143.198.146.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-81199f28b8?timeout=10s\": dial tcp 143.198.146.98:6443: connect: connection refused" interval="1.6s" Sep 6 00:17:25.904412 env[1298]: time="2025-09-06T00:17:25.902840669Z" level=info msg="StartContainer for \"ada78edc1f0fbeb7248b30fc0d62f45b64bbcecb79f4553ac5a034f7adceda56\" returns successfully" Sep 6 00:17:25.919903 env[1298]: time="2025-09-06T00:17:25.919855040Z" level=info msg="StartContainer for \"d38df8178c6ad9dfa11b1fce9e2d5ffe1de509847e921e2bcd892515a7f38a85\" returns successfully" Sep 6 00:17:26.042084 kubelet[1706]: I0906 00:17:26.041578 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:26.042084 kubelet[1706]: E0906 00:17:26.041969 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.146.98:6443/api/v1/nodes\": dial tcp 143.198.146.98:6443: connect: connection refused" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:26.539094 kubelet[1706]: E0906 00:17:26.538946 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:26.541087 kubelet[1706]: E0906 00:17:26.541059 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:26.563865 kubelet[1706]: E0906 00:17:26.563828 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:27.559837 kubelet[1706]: E0906 00:17:27.559804 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:27.643275 kubelet[1706]: I0906 00:17:27.643240 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:27.932581 kubelet[1706]: E0906 00:17:27.932454 1706 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-81199f28b8\" not found" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:28.071668 kubelet[1706]: I0906 00:17:28.071615 1706 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:28.071842 kubelet[1706]: E0906 00:17:28.071681 1706 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-81199f28b8\": node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:28.091603 kubelet[1706]: E0906 00:17:28.091557 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:28.192286 kubelet[1706]: E0906 00:17:28.192155 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:28.293237 kubelet[1706]: E0906 00:17:28.293180 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:28.393714 kubelet[1706]: E0906 00:17:28.393678 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:28.494254 kubelet[1706]: E0906 00:17:28.494139 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-81199f28b8\" not found" Sep 6 00:17:29.471784 kubelet[1706]: I0906 00:17:29.471523 1706 apiserver.go:52] "Watching apiserver" Sep 6 00:17:29.484804 kubelet[1706]: I0906 00:17:29.484736 1706 desired_state_of_world_populator.go:155] "Finished populating initial desired state of 
world" Sep 6 00:17:29.827847 systemd[1]: Reloading. Sep 6 00:17:29.829459 kubelet[1706]: W0906 00:17:29.829423 1706 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:17:29.829928 kubelet[1706]: E0906 00:17:29.829908 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:29.905859 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-09-06T00:17:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:17:29.905889 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-09-06T00:17:29Z" level=info msg="torcx already run" Sep 6 00:17:30.034012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:17:30.034041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:17:30.060554 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:17:30.190221 systemd[1]: Stopping kubelet.service... Sep 6 00:17:30.208297 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:17:30.209042 systemd[1]: Stopped kubelet.service. Sep 6 00:17:30.213948 systemd[1]: Starting kubelet.service... Sep 6 00:17:31.177788 systemd[1]: Started kubelet.service. 
Sep 6 00:17:31.253516 kubelet[2065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:17:31.254149 kubelet[2065]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:17:31.254311 kubelet[2065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:17:31.254596 kubelet[2065]: I0906 00:17:31.254545 2065 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:17:31.269426 kubelet[2065]: I0906 00:17:31.269361 2065 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:17:31.269426 kubelet[2065]: I0906 00:17:31.269407 2065 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:17:31.270016 kubelet[2065]: I0906 00:17:31.269993 2065 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:17:31.279557 kubelet[2065]: I0906 00:17:31.279224 2065 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 00:17:31.284576 kubelet[2065]: I0906 00:17:31.284238 2065 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:17:31.296261 sudo[2080]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:17:31.297708 sudo[2080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:17:31.299082 kubelet[2065]: E0906 00:17:31.298978 2065 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:17:31.299082 kubelet[2065]: I0906 00:17:31.299009 2065 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:17:31.302904 kubelet[2065]: I0906 00:17:31.302878 2065 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:17:31.304182 kubelet[2065]: I0906 00:17:31.303980 2065 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:17:31.304291 kubelet[2065]: I0906 00:17:31.304167 2065 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:17:31.304441 kubelet[2065]: I0906 00:17:31.304203 2065 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-81199f28b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyM
anagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:17:31.304441 kubelet[2065]: I0906 00:17:31.304435 2065 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:17:31.304925 kubelet[2065]: I0906 00:17:31.304447 2065 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:17:31.304925 kubelet[2065]: I0906 00:17:31.304478 2065 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:17:31.304925 kubelet[2065]: I0906 00:17:31.304614 2065 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:17:31.304925 kubelet[2065]: I0906 00:17:31.304627 2065 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:17:31.304925 kubelet[2065]: I0906 00:17:31.304657 2065 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:17:31.304925 kubelet[2065]: I0906 00:17:31.304680 2065 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:17:31.312469 kubelet[2065]: I0906 00:17:31.307190 2065 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:17:31.312469 kubelet[2065]: I0906 00:17:31.309134 2065 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:17:31.320288 kubelet[2065]: I0906 00:17:31.320258 2065 server.go:1274] "Started kubelet" Sep 6 00:17:31.337839 kubelet[2065]: I0906 00:17:31.337800 2065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:17:31.339237 kubelet[2065]: I0906 00:17:31.339164 2065 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:17:31.340707 kubelet[2065]: I0906 00:17:31.340680 2065 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:17:31.344414 kubelet[2065]: I0906 00:17:31.343615 2065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:17:31.344414 kubelet[2065]: I0906 00:17:31.343822 2065 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:17:31.369380 kubelet[2065]: I0906 00:17:31.369347 2065 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:17:31.371423 kubelet[2065]: I0906 00:17:31.371222 2065 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:17:31.374017 kubelet[2065]: I0906 00:17:31.373701 2065 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:17:31.374017 kubelet[2065]: I0906 00:17:31.373891 2065 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:17:31.383099 kubelet[2065]: I0906 00:17:31.383066 2065 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:17:31.383254 kubelet[2065]: I0906 00:17:31.383190 2065 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:17:31.383599 kubelet[2065]: E0906 00:17:31.383574 2065 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:17:31.386176 kubelet[2065]: I0906 00:17:31.386143 2065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:17:31.390624 kubelet[2065]: I0906 00:17:31.390581 2065 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:17:31.400421 kubelet[2065]: I0906 00:17:31.400006 2065 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:17:31.400421 kubelet[2065]: I0906 00:17:31.400042 2065 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:17:31.400421 kubelet[2065]: I0906 00:17:31.400065 2065 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:17:31.400421 kubelet[2065]: E0906 00:17:31.400117 2065 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:17:31.468973 kubelet[2065]: I0906 00:17:31.468862 2065 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:17:31.468973 kubelet[2065]: I0906 00:17:31.468906 2065 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:17:31.468973 kubelet[2065]: I0906 00:17:31.468927 2065 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:17:31.469237 kubelet[2065]: I0906 00:17:31.469216 2065 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:17:31.469280 kubelet[2065]: I0906 00:17:31.469235 2065 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:17:31.469280 kubelet[2065]: I0906 00:17:31.469255 2065 policy_none.go:49] "None policy: Start" Sep 6 00:17:31.469847 kubelet[2065]: I0906 00:17:31.469828 2065 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:17:31.469921 kubelet[2065]: I0906 00:17:31.469852 2065 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:17:31.470003 kubelet[2065]: I0906 00:17:31.469991 2065 state_mem.go:75] "Updated machine memory state" Sep 6 00:17:31.471216 kubelet[2065]: I0906 00:17:31.471192 2065 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:17:31.471452 kubelet[2065]: I0906 00:17:31.471436 2065 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:17:31.471521 kubelet[2065]: I0906 00:17:31.471451 2065 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:17:31.474300 kubelet[2065]: I0906 00:17:31.474280 2065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:17:31.512998 kubelet[2065]: W0906 00:17:31.512918 2065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:17:31.513255 kubelet[2065]: W0906 00:17:31.513228 2065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:17:31.513332 kubelet[2065]: E0906 00:17:31.513290 2065 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.516051 kubelet[2065]: W0906 00:17:31.516022 2065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:17:31.583767 kubelet[2065]: I0906 00:17:31.583733 2065 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.595286 kubelet[2065]: I0906 00:17:31.595248 2065 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.595482 kubelet[2065]: I0906 00:17:31.595338 2065 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.674718 kubelet[2065]: I0906 00:17:31.674667 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" 
Sep 6 00:17:31.674718 kubelet[2065]: I0906 00:17:31.674716 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.674960 kubelet[2065]: I0906 00:17:31.674738 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37532bfb0cf529afe053fc6094a4238e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-81199f28b8\" (UID: \"37532bfb0cf529afe053fc6094a4238e\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.674960 kubelet[2065]: I0906 00:17:31.674756 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3abc66df651327801cc809cac424384d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" (UID: \"3abc66df651327801cc809cac424384d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.674960 kubelet[2065]: I0906 00:17:31.674776 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3abc66df651327801cc809cac424384d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" (UID: \"3abc66df651327801cc809cac424384d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.675107 kubelet[2065]: I0906 00:17:31.674968 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.675107 kubelet[2065]: I0906 00:17:31.674993 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.675107 kubelet[2065]: I0906 00:17:31.675019 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3abc66df651327801cc809cac424384d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" (UID: \"3abc66df651327801cc809cac424384d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.675107 kubelet[2065]: I0906 00:17:31.675042 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/049e06043a694990699b393a9a52b1dd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-81199f28b8\" (UID: \"049e06043a694990699b393a9a52b1dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:31.814012 kubelet[2065]: E0906 00:17:31.813903 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:31.815736 kubelet[2065]: E0906 00:17:31.815701 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:31.817403 kubelet[2065]: E0906 00:17:31.817361 2065 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:31.984008 sudo[2080]: pam_unix(sudo:session): session closed for user root Sep 6 00:17:32.306552 kubelet[2065]: I0906 00:17:32.306455 2065 apiserver.go:52] "Watching apiserver" Sep 6 00:17:32.374972 kubelet[2065]: I0906 00:17:32.374910 2065 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:17:32.422850 kubelet[2065]: E0906 00:17:32.422812 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:32.424660 kubelet[2065]: E0906 00:17:32.424628 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:32.439173 kubelet[2065]: W0906 00:17:32.439137 2065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:17:32.439522 kubelet[2065]: E0906 00:17:32.439476 2065 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-81199f28b8\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" Sep 6 00:17:32.439799 kubelet[2065]: E0906 00:17:32.439783 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:32.491446 kubelet[2065]: I0906 00:17:32.491369 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-81199f28b8" podStartSLOduration=1.491340558 podStartE2EDuration="1.491340558s" 
podCreationTimestamp="2025-09-06 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:32.48291799 +0000 UTC m=+1.290302230" watchObservedRunningTime="2025-09-06 00:17:32.491340558 +0000 UTC m=+1.298724791" Sep 6 00:17:32.499766 kubelet[2065]: I0906 00:17:32.499693 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-81199f28b8" podStartSLOduration=3.499671643 podStartE2EDuration="3.499671643s" podCreationTimestamp="2025-09-06 00:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:32.49163861 +0000 UTC m=+1.299022843" watchObservedRunningTime="2025-09-06 00:17:32.499671643 +0000 UTC m=+1.307055878" Sep 6 00:17:33.424653 kubelet[2065]: E0906 00:17:33.424618 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:33.649518 sudo[1430]: pam_unix(sudo:session): session closed for user root Sep 6 00:17:33.654664 sshd[1424]: pam_unix(sshd:session): session closed for user core Sep 6 00:17:33.657938 systemd-logind[1285]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:17:33.658988 systemd[1]: sshd@4-143.198.146.98:22-147.75.109.163:53066.service: Deactivated successfully. Sep 6 00:17:33.659873 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:17:33.660869 systemd-logind[1285]: Removed session 5. 
Sep 6 00:17:34.426330 kubelet[2065]: E0906 00:17:34.426289 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:36.924065 kubelet[2065]: I0906 00:17:36.924024 2065 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:17:36.925031 env[1298]: time="2025-09-06T00:17:36.924983876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:17:36.925842 kubelet[2065]: I0906 00:17:36.925810 2065 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:17:37.726299 kubelet[2065]: I0906 00:17:37.726213 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-81199f28b8" podStartSLOduration=6.726193528 podStartE2EDuration="6.726193528s" podCreationTimestamp="2025-09-06 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:32.50035278 +0000 UTC m=+1.307737022" watchObservedRunningTime="2025-09-06 00:17:37.726193528 +0000 UTC m=+6.533577766" Sep 6 00:17:37.753557 kubelet[2065]: W0906 00:17:37.753524 2065 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.8-n-81199f28b8" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-81199f28b8' and this object Sep 6 00:17:37.753800 kubelet[2065]: W0906 00:17:37.753523 2065 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.8-n-81199f28b8" cannot list resource "secrets" in API group 
"" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-81199f28b8' and this object Sep 6 00:17:37.753859 kubelet[2065]: E0906 00:17:37.753819 2065 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.8-n-81199f28b8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-81199f28b8' and this object" logger="UnhandledError" Sep 6 00:17:37.753930 kubelet[2065]: E0906 00:17:37.753905 2065 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.8-n-81199f28b8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-81199f28b8' and this object" logger="UnhandledError" Sep 6 00:17:37.754664 kubelet[2065]: W0906 00:17:37.754637 2065 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.8-n-81199f28b8" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-81199f28b8' and this object Sep 6 00:17:37.754782 kubelet[2065]: E0906 00:17:37.754682 2065 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.8-n-81199f28b8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-81199f28b8' and this object" logger="UnhandledError" Sep 6 00:17:37.812092 
kubelet[2065]: I0906 00:17:37.812032 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-run\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812092 kubelet[2065]: I0906 00:17:37.812085 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-hostproc\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812092 kubelet[2065]: I0906 00:17:37.812100 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-lib-modules\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812339 kubelet[2065]: I0906 00:17:37.812130 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33b995f6-b813-451b-9a96-0e536b5017b1-lib-modules\") pod \"kube-proxy-8cfcj\" (UID: \"33b995f6-b813-451b-9a96-0e536b5017b1\") " pod="kube-system/kube-proxy-8cfcj" Sep 6 00:17:37.812339 kubelet[2065]: I0906 00:17:37.812149 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-kernel\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812339 kubelet[2065]: I0906 00:17:37.812168 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33b995f6-b813-451b-9a96-0e536b5017b1-xtables-lock\") pod \"kube-proxy-8cfcj\" (UID: \"33b995f6-b813-451b-9a96-0e536b5017b1\") " pod="kube-system/kube-proxy-8cfcj" Sep 6 00:17:37.812339 kubelet[2065]: I0906 00:17:37.812187 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-net\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812339 kubelet[2065]: I0906 00:17:37.812212 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-bpf-maps\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812561 kubelet[2065]: I0906 00:17:37.812228 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5bfl\" (UniqueName: \"kubernetes.io/projected/33b995f6-b813-451b-9a96-0e536b5017b1-kube-api-access-w5bfl\") pod \"kube-proxy-8cfcj\" (UID: \"33b995f6-b813-451b-9a96-0e536b5017b1\") " pod="kube-system/kube-proxy-8cfcj" Sep 6 00:17:37.812561 kubelet[2065]: I0906 00:17:37.812245 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-etc-cni-netd\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812561 kubelet[2065]: I0906 00:17:37.812277 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-config-path\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812561 kubelet[2065]: I0906 00:17:37.812291 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-hubble-tls\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812561 kubelet[2065]: I0906 00:17:37.812309 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-xtables-lock\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812561 kubelet[2065]: I0906 00:17:37.812324 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cni-path\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812740 kubelet[2065]: I0906 00:17:37.812406 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33b995f6-b813-451b-9a96-0e536b5017b1-kube-proxy\") pod \"kube-proxy-8cfcj\" (UID: \"33b995f6-b813-451b-9a96-0e536b5017b1\") " pod="kube-system/kube-proxy-8cfcj" Sep 6 00:17:37.812740 kubelet[2065]: I0906 00:17:37.812425 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-cgroup\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " 
pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812740 kubelet[2065]: I0906 00:17:37.812444 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19ed708a-c9b2-4304-930f-c5241cedba3e-clustermesh-secrets\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.812740 kubelet[2065]: I0906 00:17:37.812619 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xwwr\" (UniqueName: \"kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-kube-api-access-2xwwr\") pod \"cilium-w7bvl\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " pod="kube-system/cilium-w7bvl" Sep 6 00:17:37.922038 kubelet[2065]: I0906 00:17:37.921998 2065 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:17:38.014289 kubelet[2065]: I0906 00:17:38.014174 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpddc\" (UniqueName: \"kubernetes.io/projected/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-kube-api-access-dpddc\") pod \"cilium-operator-5d85765b45-mcrkn\" (UID: \"31a90b57-68ae-4e33-86a7-0bb3993ea9ce\") " pod="kube-system/cilium-operator-5d85765b45-mcrkn" Sep 6 00:17:38.014911 kubelet[2065]: I0906 00:17:38.014879 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-cilium-config-path\") pod \"cilium-operator-5d85765b45-mcrkn\" (UID: \"31a90b57-68ae-4e33-86a7-0bb3993ea9ce\") " pod="kube-system/cilium-operator-5d85765b45-mcrkn" Sep 6 00:17:38.030338 kubelet[2065]: E0906 00:17:38.030297 2065 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:38.031334 env[1298]: time="2025-09-06T00:17:38.031277516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8cfcj,Uid:33b995f6-b813-451b-9a96-0e536b5017b1,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:38.053169 env[1298]: time="2025-09-06T00:17:38.053013369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:38.053169 env[1298]: time="2025-09-06T00:17:38.053091919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:38.053169 env[1298]: time="2025-09-06T00:17:38.053103273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:38.053796 env[1298]: time="2025-09-06T00:17:38.053726511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b09796543040d1ebb3412088529bfaa98220f5bba4cc9b7f2c1cae0d5d021ac pid=2143 runtime=io.containerd.runc.v2 Sep 6 00:17:38.111046 env[1298]: time="2025-09-06T00:17:38.110987140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8cfcj,Uid:33b995f6-b813-451b-9a96-0e536b5017b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b09796543040d1ebb3412088529bfaa98220f5bba4cc9b7f2c1cae0d5d021ac\"" Sep 6 00:17:38.112089 kubelet[2065]: E0906 00:17:38.112063 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:38.119724 env[1298]: time="2025-09-06T00:17:38.117089143Z" level=info msg="CreateContainer within sandbox 
\"1b09796543040d1ebb3412088529bfaa98220f5bba4cc9b7f2c1cae0d5d021ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:17:38.137945 env[1298]: time="2025-09-06T00:17:38.137893258Z" level=info msg="CreateContainer within sandbox \"1b09796543040d1ebb3412088529bfaa98220f5bba4cc9b7f2c1cae0d5d021ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05223c97bed0df5f9e308b6b4af54c98a33cc732aec441e8fbcdbc436d946b5e\"" Sep 6 00:17:38.140709 env[1298]: time="2025-09-06T00:17:38.140669929Z" level=info msg="StartContainer for \"05223c97bed0df5f9e308b6b4af54c98a33cc732aec441e8fbcdbc436d946b5e\"" Sep 6 00:17:38.197780 env[1298]: time="2025-09-06T00:17:38.197736023Z" level=info msg="StartContainer for \"05223c97bed0df5f9e308b6b4af54c98a33cc732aec441e8fbcdbc436d946b5e\" returns successfully" Sep 6 00:17:38.221420 kubelet[2065]: E0906 00:17:38.219810 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:38.434136 kubelet[2065]: E0906 00:17:38.434028 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:38.434977 kubelet[2065]: E0906 00:17:38.434949 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:38.562626 kubelet[2065]: E0906 00:17:38.562586 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:38.577283 kubelet[2065]: I0906 00:17:38.577221 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8cfcj" 
podStartSLOduration=1.577202154 podStartE2EDuration="1.577202154s" podCreationTimestamp="2025-09-06 00:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:38.455465274 +0000 UTC m=+7.262849517" watchObservedRunningTime="2025-09-06 00:17:38.577202154 +0000 UTC m=+7.384586393" Sep 6 00:17:38.915251 kubelet[2065]: E0906 00:17:38.915107 2065 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 6 00:17:38.915251 kubelet[2065]: E0906 00:17:38.915224 2065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19ed708a-c9b2-4304-930f-c5241cedba3e-clustermesh-secrets podName:19ed708a-c9b2-4304-930f-c5241cedba3e nodeName:}" failed. No retries permitted until 2025-09-06 00:17:39.415204254 +0000 UTC m=+8.222588487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/19ed708a-c9b2-4304-930f-c5241cedba3e-clustermesh-secrets") pod "cilium-w7bvl" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:17:38.916221 kubelet[2065]: E0906 00:17:38.916189 2065 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:17:38.916493 kubelet[2065]: E0906 00:17:38.916475 2065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-config-path podName:19ed708a-c9b2-4304-930f-c5241cedba3e nodeName:}" failed. No retries permitted until 2025-09-06 00:17:39.416457486 +0000 UTC m=+8.223841705 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-config-path") pod "cilium-w7bvl" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:17:38.929883 systemd[1]: run-containerd-runc-k8s.io-1b09796543040d1ebb3412088529bfaa98220f5bba4cc9b7f2c1cae0d5d021ac-runc.0wTFMG.mount: Deactivated successfully. Sep 6 00:17:39.116015 kubelet[2065]: E0906 00:17:39.115968 2065 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:17:39.116829 kubelet[2065]: E0906 00:17:39.116805 2065 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-cilium-config-path podName:31a90b57-68ae-4e33-86a7-0bb3993ea9ce nodeName:}" failed. No retries permitted until 2025-09-06 00:17:39.616778643 +0000 UTC m=+8.424162860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-cilium-config-path") pod "cilium-operator-5d85765b45-mcrkn" (UID: "31a90b57-68ae-4e33-86a7-0bb3993ea9ce") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:17:39.436074 kubelet[2065]: E0906 00:17:39.435823 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:39.550622 kubelet[2065]: E0906 00:17:39.550573 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:39.553850 env[1298]: time="2025-09-06T00:17:39.553076850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7bvl,Uid:19ed708a-c9b2-4304-930f-c5241cedba3e,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:39.572591 env[1298]: time="2025-09-06T00:17:39.572290826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:39.572591 env[1298]: time="2025-09-06T00:17:39.572343138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:39.572591 env[1298]: time="2025-09-06T00:17:39.572353736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:39.573258 env[1298]: time="2025-09-06T00:17:39.573170856Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42 pid=2352 runtime=io.containerd.runc.v2 Sep 6 00:17:39.633052 env[1298]: time="2025-09-06T00:17:39.632999043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7bvl,Uid:19ed708a-c9b2-4304-930f-c5241cedba3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\"" Sep 6 00:17:39.633784 kubelet[2065]: E0906 00:17:39.633758 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:39.637541 env[1298]: time="2025-09-06T00:17:39.636543235Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:17:39.810125 kubelet[2065]: E0906 00:17:39.810025 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:39.810789 env[1298]: time="2025-09-06T00:17:39.810749151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mcrkn,Uid:31a90b57-68ae-4e33-86a7-0bb3993ea9ce,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:39.827848 env[1298]: time="2025-09-06T00:17:39.827747535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:39.829380 env[1298]: time="2025-09-06T00:17:39.829189048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:39.829380 env[1298]: time="2025-09-06T00:17:39.829258060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:39.829756 env[1298]: time="2025-09-06T00:17:39.829703298Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988 pid=2392 runtime=io.containerd.runc.v2 Sep 6 00:17:39.890747 env[1298]: time="2025-09-06T00:17:39.890688764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mcrkn,Uid:31a90b57-68ae-4e33-86a7-0bb3993ea9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\"" Sep 6 00:17:39.891664 kubelet[2065]: E0906 00:17:39.891480 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:43.122317 kubelet[2065]: E0906 00:17:43.122262 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:44.217568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378475507.mount: Deactivated successfully. 
Sep 6 00:17:47.195507 env[1298]: time="2025-09-06T00:17:47.195446247Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:47.197182 env[1298]: time="2025-09-06T00:17:47.197134266Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:47.198864 env[1298]: time="2025-09-06T00:17:47.198822937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:47.199644 env[1298]: time="2025-09-06T00:17:47.199606443Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:17:47.205308 env[1298]: time="2025-09-06T00:17:47.204645843Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:17:47.209619 env[1298]: time="2025-09-06T00:17:47.207038068Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:17:47.218744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197174512.mount: Deactivated successfully. Sep 6 00:17:47.230030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448517328.mount: Deactivated successfully. 
Sep 6 00:17:47.234533 env[1298]: time="2025-09-06T00:17:47.234479679Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\"" Sep 6 00:17:47.238279 env[1298]: time="2025-09-06T00:17:47.236802125Z" level=info msg="StartContainer for \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\"" Sep 6 00:17:47.305695 env[1298]: time="2025-09-06T00:17:47.305637582Z" level=info msg="StartContainer for \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\" returns successfully" Sep 6 00:17:47.364690 env[1298]: time="2025-09-06T00:17:47.364628441Z" level=info msg="shim disconnected" id=d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0 Sep 6 00:17:47.364690 env[1298]: time="2025-09-06T00:17:47.364685719Z" level=warning msg="cleaning up after shim disconnected" id=d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0 namespace=k8s.io Sep 6 00:17:47.364690 env[1298]: time="2025-09-06T00:17:47.364696738Z" level=info msg="cleaning up dead shim" Sep 6 00:17:47.374874 env[1298]: time="2025-09-06T00:17:47.374820397Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2480 runtime=io.containerd.runc.v2\n" Sep 6 00:17:47.468512 kubelet[2065]: E0906 00:17:47.468279 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:47.482422 env[1298]: time="2025-09-06T00:17:47.476127722Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:17:47.499334 env[1298]: 
time="2025-09-06T00:17:47.499272895Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\"" Sep 6 00:17:47.501330 env[1298]: time="2025-09-06T00:17:47.501266322Z" level=info msg="StartContainer for \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\"" Sep 6 00:17:47.568486 env[1298]: time="2025-09-06T00:17:47.568431841Z" level=info msg="StartContainer for \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\" returns successfully" Sep 6 00:17:47.576572 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:17:47.576835 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:17:47.576977 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:17:47.580649 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:17:47.598699 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:17:47.614625 env[1298]: time="2025-09-06T00:17:47.614560941Z" level=info msg="shim disconnected" id=2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a Sep 6 00:17:47.614625 env[1298]: time="2025-09-06T00:17:47.614618651Z" level=warning msg="cleaning up after shim disconnected" id=2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a namespace=k8s.io Sep 6 00:17:47.614625 env[1298]: time="2025-09-06T00:17:47.614630361Z" level=info msg="cleaning up dead shim" Sep 6 00:17:47.626517 env[1298]: time="2025-09-06T00:17:47.626455720Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545 runtime=io.containerd.runc.v2\n" Sep 6 00:17:47.915358 update_engine[1288]: I0906 00:17:47.915294 1288 update_attempter.cc:509] Updating boot flags... 
Sep 6 00:17:48.213880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0-rootfs.mount: Deactivated successfully. Sep 6 00:17:48.471902 kubelet[2065]: E0906 00:17:48.471722 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:48.498628 env[1298]: time="2025-09-06T00:17:48.498582780Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:17:48.539513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161686341.mount: Deactivated successfully. Sep 6 00:17:48.564025 env[1298]: time="2025-09-06T00:17:48.563955349Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\"" Sep 6 00:17:48.564844 env[1298]: time="2025-09-06T00:17:48.564812214Z" level=info msg="StartContainer for \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\"" Sep 6 00:17:48.688939 env[1298]: time="2025-09-06T00:17:48.688895948Z" level=info msg="StartContainer for \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\" returns successfully" Sep 6 00:17:48.717012 env[1298]: time="2025-09-06T00:17:48.716958061Z" level=info msg="shim disconnected" id=e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f Sep 6 00:17:48.717012 env[1298]: time="2025-09-06T00:17:48.717006748Z" level=warning msg="cleaning up after shim disconnected" id=e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f namespace=k8s.io Sep 6 00:17:48.717012 env[1298]: time="2025-09-06T00:17:48.717015973Z" level=info 
msg="cleaning up dead shim" Sep 6 00:17:48.737049 env[1298]: time="2025-09-06T00:17:48.736926649Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2618 runtime=io.containerd.runc.v2\n" Sep 6 00:17:49.213484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f-rootfs.mount: Deactivated successfully. Sep 6 00:17:49.321737 env[1298]: time="2025-09-06T00:17:49.321691415Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:49.323411 env[1298]: time="2025-09-06T00:17:49.323365533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:49.324480 env[1298]: time="2025-09-06T00:17:49.324447817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:17:49.325091 env[1298]: time="2025-09-06T00:17:49.325059799Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:17:49.328869 env[1298]: time="2025-09-06T00:17:49.328838744Z" level=info msg="CreateContainer within sandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:17:49.339892 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4259246534.mount: Deactivated successfully. Sep 6 00:17:49.356753 env[1298]: time="2025-09-06T00:17:49.356700826Z" level=info msg="CreateContainer within sandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\"" Sep 6 00:17:49.358701 env[1298]: time="2025-09-06T00:17:49.358664786Z" level=info msg="StartContainer for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\"" Sep 6 00:17:49.430568 env[1298]: time="2025-09-06T00:17:49.430521442Z" level=info msg="StartContainer for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" returns successfully" Sep 6 00:17:49.474808 kubelet[2065]: E0906 00:17:49.474460 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:49.478700 kubelet[2065]: E0906 00:17:49.478514 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:49.483416 env[1298]: time="2025-09-06T00:17:49.482338489Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:17:49.495917 env[1298]: time="2025-09-06T00:17:49.495866294Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\"" Sep 6 00:17:49.496629 env[1298]: time="2025-09-06T00:17:49.496600731Z" level=info msg="StartContainer for 
\"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\"" Sep 6 00:17:49.528967 kubelet[2065]: I0906 00:17:49.526640 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mcrkn" podStartSLOduration=3.093190873 podStartE2EDuration="12.526605939s" podCreationTimestamp="2025-09-06 00:17:37 +0000 UTC" firstStartedPulling="2025-09-06 00:17:39.893099097 +0000 UTC m=+8.700483327" lastFinishedPulling="2025-09-06 00:17:49.32651416 +0000 UTC m=+18.133898393" observedRunningTime="2025-09-06 00:17:49.491480895 +0000 UTC m=+18.298865134" watchObservedRunningTime="2025-09-06 00:17:49.526605939 +0000 UTC m=+18.333990179" Sep 6 00:17:49.616825 env[1298]: time="2025-09-06T00:17:49.616754095Z" level=info msg="StartContainer for \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\" returns successfully" Sep 6 00:17:49.665866 env[1298]: time="2025-09-06T00:17:49.665813468Z" level=info msg="shim disconnected" id=85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e Sep 6 00:17:49.666222 env[1298]: time="2025-09-06T00:17:49.666198605Z" level=warning msg="cleaning up after shim disconnected" id=85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e namespace=k8s.io Sep 6 00:17:49.666337 env[1298]: time="2025-09-06T00:17:49.666315779Z" level=info msg="cleaning up dead shim" Sep 6 00:17:49.677983 env[1298]: time="2025-09-06T00:17:49.677922348Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2709 runtime=io.containerd.runc.v2\n" Sep 6 00:17:50.481887 kubelet[2065]: E0906 00:17:50.481833 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:50.482610 kubelet[2065]: E0906 00:17:50.482445 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:50.485013 env[1298]: time="2025-09-06T00:17:50.484963183Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:17:50.503579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407412570.mount: Deactivated successfully. Sep 6 00:17:50.515906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692911878.mount: Deactivated successfully. Sep 6 00:17:50.524168 env[1298]: time="2025-09-06T00:17:50.524118353Z" level=info msg="CreateContainer within sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\"" Sep 6 00:17:50.525110 env[1298]: time="2025-09-06T00:17:50.525059613Z" level=info msg="StartContainer for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\"" Sep 6 00:17:50.602414 env[1298]: time="2025-09-06T00:17:50.602330370Z" level=info msg="StartContainer for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" returns successfully" Sep 6 00:17:50.747434 kubelet[2065]: I0906 00:17:50.747267 2065 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:17:50.820895 kubelet[2065]: I0906 00:17:50.820852 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fks7h\" (UniqueName: \"kubernetes.io/projected/06f31362-bef0-4c0b-a437-451fac3af25e-kube-api-access-fks7h\") pod \"coredns-7c65d6cfc9-48t7g\" (UID: \"06f31362-bef0-4c0b-a437-451fac3af25e\") " pod="kube-system/coredns-7c65d6cfc9-48t7g" Sep 6 00:17:50.821133 kubelet[2065]: I0906 00:17:50.821115 2065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06f31362-bef0-4c0b-a437-451fac3af25e-config-volume\") pod \"coredns-7c65d6cfc9-48t7g\" (UID: \"06f31362-bef0-4c0b-a437-451fac3af25e\") " pod="kube-system/coredns-7c65d6cfc9-48t7g" Sep 6 00:17:50.821247 kubelet[2065]: I0906 00:17:50.821234 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c951f8a-fcf6-4a00-8c40-bd7b26c66d92-config-volume\") pod \"coredns-7c65d6cfc9-c6bqj\" (UID: \"9c951f8a-fcf6-4a00-8c40-bd7b26c66d92\") " pod="kube-system/coredns-7c65d6cfc9-c6bqj" Sep 6 00:17:50.821377 kubelet[2065]: I0906 00:17:50.821361 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfwj\" (UniqueName: \"kubernetes.io/projected/9c951f8a-fcf6-4a00-8c40-bd7b26c66d92-kube-api-access-4qfwj\") pod \"coredns-7c65d6cfc9-c6bqj\" (UID: \"9c951f8a-fcf6-4a00-8c40-bd7b26c66d92\") " pod="kube-system/coredns-7c65d6cfc9-c6bqj" Sep 6 00:17:51.081561 kubelet[2065]: E0906 00:17:51.081523 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:51.082883 env[1298]: time="2025-09-06T00:17:51.082836308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6bqj,Uid:9c951f8a-fcf6-4a00-8c40-bd7b26c66d92,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:51.089645 kubelet[2065]: E0906 00:17:51.089618 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:51.130132 env[1298]: time="2025-09-06T00:17:51.130082687Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-48t7g,Uid:06f31362-bef0-4c0b-a437-451fac3af25e,Namespace:kube-system,Attempt:0,}" Sep 6 00:17:51.488278 kubelet[2065]: E0906 00:17:51.488122 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:52.490369 kubelet[2065]: E0906 00:17:52.490312 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:52.872455 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:17:52.872614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:17:52.871127 systemd-networkd[1055]: cilium_host: Link UP Sep 6 00:17:52.871310 systemd-networkd[1055]: cilium_net: Link UP Sep 6 00:17:52.872128 systemd-networkd[1055]: cilium_net: Gained carrier Sep 6 00:17:52.872653 systemd-networkd[1055]: cilium_host: Gained carrier Sep 6 00:17:52.887592 systemd-networkd[1055]: cilium_net: Gained IPv6LL Sep 6 00:17:53.004181 systemd-networkd[1055]: cilium_vxlan: Link UP Sep 6 00:17:53.004189 systemd-networkd[1055]: cilium_vxlan: Gained carrier Sep 6 00:17:53.338544 kernel: NET: Registered PF_ALG protocol family Sep 6 00:17:53.492225 kubelet[2065]: E0906 00:17:53.492185 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:53.820625 systemd-networkd[1055]: cilium_host: Gained IPv6LL Sep 6 00:17:54.134119 systemd-networkd[1055]: lxc_health: Link UP Sep 6 00:17:54.145695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:17:54.142845 systemd-networkd[1055]: lxc_health: Gained carrier Sep 6 00:17:54.670637 systemd-networkd[1055]: lxcb1833255d3c1: Link UP Sep 6 
00:17:54.679438 kernel: eth0: renamed from tmpbac86 Sep 6 00:17:54.688046 systemd-networkd[1055]: lxcb1833255d3c1: Gained carrier Sep 6 00:17:54.688504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb1833255d3c1: link becomes ready Sep 6 00:17:54.712884 systemd-networkd[1055]: lxcd0b83b15b4ac: Link UP Sep 6 00:17:54.739413 kernel: eth0: renamed from tmpdb9d9 Sep 6 00:17:54.739573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd0b83b15b4ac: link becomes ready Sep 6 00:17:54.732333 systemd-networkd[1055]: lxcd0b83b15b4ac: Gained carrier Sep 6 00:17:54.843625 systemd-networkd[1055]: cilium_vxlan: Gained IPv6LL Sep 6 00:17:55.227635 systemd-networkd[1055]: lxc_health: Gained IPv6LL Sep 6 00:17:55.552331 kubelet[2065]: E0906 00:17:55.552261 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:55.574556 kubelet[2065]: I0906 00:17:55.574494 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w7bvl" podStartSLOduration=11.0090843 podStartE2EDuration="18.574473651s" podCreationTimestamp="2025-09-06 00:17:37 +0000 UTC" firstStartedPulling="2025-09-06 00:17:39.635526673 +0000 UTC m=+8.442910895" lastFinishedPulling="2025-09-06 00:17:47.200916015 +0000 UTC m=+16.008300246" observedRunningTime="2025-09-06 00:17:51.508847693 +0000 UTC m=+20.316231932" watchObservedRunningTime="2025-09-06 00:17:55.574473651 +0000 UTC m=+24.381857890" Sep 6 00:17:55.932728 systemd-networkd[1055]: lxcb1833255d3c1: Gained IPv6LL Sep 6 00:17:56.379639 systemd-networkd[1055]: lxcd0b83b15b4ac: Gained IPv6LL Sep 6 00:17:56.498863 kubelet[2065]: E0906 00:17:56.498832 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:58.795500 env[1298]: time="2025-09-06T00:17:58.793699144Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:58.795500 env[1298]: time="2025-09-06T00:17:58.793788873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:58.795500 env[1298]: time="2025-09-06T00:17:58.793816846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:58.795500 env[1298]: time="2025-09-06T00:17:58.794111867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bac86d618be494fa982ad47b959a784b5ea22e16b41f49fa265850441555a0b0 pid=3252 runtime=io.containerd.runc.v2 Sep 6 00:17:58.836945 systemd[1]: run-containerd-runc-k8s.io-bac86d618be494fa982ad47b959a784b5ea22e16b41f49fa265850441555a0b0-runc.SjIUcX.mount: Deactivated successfully. Sep 6 00:17:58.841216 env[1298]: time="2025-09-06T00:17:58.816523131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:17:58.841216 env[1298]: time="2025-09-06T00:17:58.816571602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:17:58.841216 env[1298]: time="2025-09-06T00:17:58.816581872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:17:58.841216 env[1298]: time="2025-09-06T00:17:58.816734000Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db9d90253bdf48528ae5071fcfc71bf670c4a25cd9c522beedfbcb0e56e64dee pid=3269 runtime=io.containerd.runc.v2 Sep 6 00:17:58.955569 env[1298]: time="2025-09-06T00:17:58.953663463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c6bqj,Uid:9c951f8a-fcf6-4a00-8c40-bd7b26c66d92,Namespace:kube-system,Attempt:0,} returns sandbox id \"bac86d618be494fa982ad47b959a784b5ea22e16b41f49fa265850441555a0b0\"" Sep 6 00:17:58.955739 kubelet[2065]: E0906 00:17:58.954645 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:58.957716 env[1298]: time="2025-09-06T00:17:58.957665575Z" level=info msg="CreateContainer within sandbox \"bac86d618be494fa982ad47b959a784b5ea22e16b41f49fa265850441555a0b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:17:58.980245 env[1298]: time="2025-09-06T00:17:58.980173758Z" level=info msg="CreateContainer within sandbox \"bac86d618be494fa982ad47b959a784b5ea22e16b41f49fa265850441555a0b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b241bf3b0498794860dc16cb7019e2a7b8c9b14fe9bcda8c4d1b0f45f08e5b3a\"" Sep 6 00:17:58.990698 env[1298]: time="2025-09-06T00:17:58.981891067Z" level=info msg="StartContainer for \"b241bf3b0498794860dc16cb7019e2a7b8c9b14fe9bcda8c4d1b0f45f08e5b3a\"" Sep 6 00:17:58.991063 env[1298]: time="2025-09-06T00:17:58.991011720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-48t7g,Uid:06f31362-bef0-4c0b-a437-451fac3af25e,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9d90253bdf48528ae5071fcfc71bf670c4a25cd9c522beedfbcb0e56e64dee\"" Sep 6 
00:17:58.992990 kubelet[2065]: E0906 00:17:58.992427 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:58.996970 env[1298]: time="2025-09-06T00:17:58.996910896Z" level=info msg="CreateContainer within sandbox \"db9d90253bdf48528ae5071fcfc71bf670c4a25cd9c522beedfbcb0e56e64dee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:17:59.014987 env[1298]: time="2025-09-06T00:17:59.014933599Z" level=info msg="CreateContainer within sandbox \"db9d90253bdf48528ae5071fcfc71bf670c4a25cd9c522beedfbcb0e56e64dee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c740408f6b11bedc9fed3a1b5693f79e83b26284dfa70bfcffd6926e41b7c960\"" Sep 6 00:17:59.016375 env[1298]: time="2025-09-06T00:17:59.016322671Z" level=info msg="StartContainer for \"c740408f6b11bedc9fed3a1b5693f79e83b26284dfa70bfcffd6926e41b7c960\"" Sep 6 00:17:59.092902 env[1298]: time="2025-09-06T00:17:59.092800256Z" level=info msg="StartContainer for \"b241bf3b0498794860dc16cb7019e2a7b8c9b14fe9bcda8c4d1b0f45f08e5b3a\" returns successfully" Sep 6 00:17:59.094042 env[1298]: time="2025-09-06T00:17:59.093993344Z" level=info msg="StartContainer for \"c740408f6b11bedc9fed3a1b5693f79e83b26284dfa70bfcffd6926e41b7c960\" returns successfully" Sep 6 00:17:59.506859 kubelet[2065]: E0906 00:17:59.506718 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:59.510678 kubelet[2065]: E0906 00:17:59.510650 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:17:59.541101 kubelet[2065]: I0906 00:17:59.541033 2065 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/coredns-7c65d6cfc9-c6bqj" podStartSLOduration=22.540994820999998 podStartE2EDuration="22.540994821s" podCreationTimestamp="2025-09-06 00:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:59.539730654 +0000 UTC m=+28.347114895" watchObservedRunningTime="2025-09-06 00:17:59.540994821 +0000 UTC m=+28.348379062" Sep 6 00:17:59.541362 kubelet[2065]: I0906 00:17:59.541193 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-48t7g" podStartSLOduration=22.541183166 podStartE2EDuration="22.541183166s" podCreationTimestamp="2025-09-06 00:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:59.526762028 +0000 UTC m=+28.334146266" watchObservedRunningTime="2025-09-06 00:17:59.541183166 +0000 UTC m=+28.348567427" Sep 6 00:18:00.512784 kubelet[2065]: E0906 00:18:00.512726 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:00.513460 kubelet[2065]: E0906 00:18:00.513416 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:01.515141 kubelet[2065]: E0906 00:18:01.515101 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:01.516095 kubelet[2065]: E0906 00:18:01.516052 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Sep 6 00:18:08.585342 systemd[1]: Started sshd@5-143.198.146.98:22-147.75.109.163:56014.service. Sep 6 00:18:08.652564 sshd[3409]: Accepted publickey for core from 147.75.109.163 port 56014 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:08.654740 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:08.663637 systemd-logind[1285]: New session 6 of user core. Sep 6 00:18:08.663877 systemd[1]: Started session-6.scope. Sep 6 00:18:08.882062 sshd[3409]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:08.886025 systemd-logind[1285]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:18:08.887345 systemd[1]: sshd@5-143.198.146.98:22-147.75.109.163:56014.service: Deactivated successfully. Sep 6 00:18:08.888210 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:18:08.889568 systemd-logind[1285]: Removed session 6. Sep 6 00:18:13.886944 systemd[1]: Started sshd@6-143.198.146.98:22-147.75.109.163:39926.service. Sep 6 00:18:13.934546 sshd[3423]: Accepted publickey for core from 147.75.109.163 port 39926 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:13.936838 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:13.942227 systemd[1]: Started session-7.scope. Sep 6 00:18:13.942701 systemd-logind[1285]: New session 7 of user core. Sep 6 00:18:14.065687 sshd[3423]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:14.069176 systemd[1]: sshd@6-143.198.146.98:22-147.75.109.163:39926.service: Deactivated successfully. Sep 6 00:18:14.070510 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:18:14.070888 systemd-logind[1285]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:18:14.072116 systemd-logind[1285]: Removed session 7. Sep 6 00:18:19.071257 systemd[1]: Started sshd@7-143.198.146.98:22-147.75.109.163:39942.service. 
Sep 6 00:18:19.123577 sshd[3437]: Accepted publickey for core from 147.75.109.163 port 39942 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:19.125264 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:19.131329 systemd-logind[1285]: New session 8 of user core. Sep 6 00:18:19.131560 systemd[1]: Started session-8.scope. Sep 6 00:18:19.263806 sshd[3437]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:19.268020 systemd-logind[1285]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:18:19.268160 systemd[1]: sshd@7-143.198.146.98:22-147.75.109.163:39942.service: Deactivated successfully. Sep 6 00:18:19.269046 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:18:19.270545 systemd-logind[1285]: Removed session 8. Sep 6 00:18:24.270064 systemd[1]: Started sshd@8-143.198.146.98:22-147.75.109.163:37850.service. Sep 6 00:18:24.324293 sshd[3451]: Accepted publickey for core from 147.75.109.163 port 37850 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:24.327000 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:24.334162 systemd[1]: Started session-9.scope. Sep 6 00:18:24.335450 systemd-logind[1285]: New session 9 of user core. Sep 6 00:18:24.486959 sshd[3451]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:24.493918 systemd[1]: Started sshd@9-143.198.146.98:22-147.75.109.163:37852.service. Sep 6 00:18:24.502902 systemd-logind[1285]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:18:24.504181 systemd[1]: sshd@8-143.198.146.98:22-147.75.109.163:37850.service: Deactivated successfully. Sep 6 00:18:24.505116 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:18:24.506845 systemd-logind[1285]: Removed session 9. 
Sep 6 00:18:24.549647 sshd[3463]: Accepted publickey for core from 147.75.109.163 port 37852 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:24.551751 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:24.558682 systemd[1]: Started session-10.scope. Sep 6 00:18:24.558970 systemd-logind[1285]: New session 10 of user core. Sep 6 00:18:24.785151 systemd[1]: Started sshd@10-143.198.146.98:22-147.75.109.163:37854.service. Sep 6 00:18:24.787871 sshd[3463]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:24.795922 systemd[1]: sshd@9-143.198.146.98:22-147.75.109.163:37852.service: Deactivated successfully. Sep 6 00:18:24.796865 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:18:24.797441 systemd-logind[1285]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:18:24.798834 systemd-logind[1285]: Removed session 10. Sep 6 00:18:24.849754 sshd[3473]: Accepted publickey for core from 147.75.109.163 port 37854 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:24.850906 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:24.856671 systemd[1]: Started session-11.scope. Sep 6 00:18:24.857051 systemd-logind[1285]: New session 11 of user core. Sep 6 00:18:25.017792 sshd[3473]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:25.023962 systemd-logind[1285]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:18:25.024638 systemd[1]: sshd@10-143.198.146.98:22-147.75.109.163:37854.service: Deactivated successfully. Sep 6 00:18:25.025447 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:18:25.028038 systemd-logind[1285]: Removed session 11. Sep 6 00:18:30.024513 systemd[1]: Started sshd@11-143.198.146.98:22-147.75.109.163:43738.service. 
Sep 6 00:18:30.085371 sshd[3487]: Accepted publickey for core from 147.75.109.163 port 43738 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:30.087162 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:30.093832 systemd-logind[1285]: New session 12 of user core. Sep 6 00:18:30.094225 systemd[1]: Started session-12.scope. Sep 6 00:18:30.255493 sshd[3487]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:30.260109 systemd[1]: sshd@11-143.198.146.98:22-147.75.109.163:43738.service: Deactivated successfully. Sep 6 00:18:30.262296 systemd-logind[1285]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:18:30.262408 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:18:30.265036 systemd-logind[1285]: Removed session 12. Sep 6 00:18:35.260567 systemd[1]: Started sshd@12-143.198.146.98:22-147.75.109.163:43750.service. Sep 6 00:18:35.314309 sshd[3502]: Accepted publickey for core from 147.75.109.163 port 43750 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:35.316186 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:35.323721 systemd[1]: Started session-13.scope. Sep 6 00:18:35.324072 systemd-logind[1285]: New session 13 of user core. Sep 6 00:18:35.476545 sshd[3502]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:35.479515 systemd[1]: sshd@12-143.198.146.98:22-147.75.109.163:43750.service: Deactivated successfully. Sep 6 00:18:35.480813 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:18:35.482242 systemd-logind[1285]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:18:35.484164 systemd-logind[1285]: Removed session 13. Sep 6 00:18:40.481909 systemd[1]: Started sshd@13-143.198.146.98:22-147.75.109.163:49372.service. 
Sep 6 00:18:40.531225 sshd[3517]: Accepted publickey for core from 147.75.109.163 port 49372 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:40.533657 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:40.539643 systemd-logind[1285]: New session 14 of user core. Sep 6 00:18:40.540238 systemd[1]: Started session-14.scope. Sep 6 00:18:40.683603 sshd[3517]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:40.688763 systemd[1]: Started sshd@14-143.198.146.98:22-147.75.109.163:49378.service. Sep 6 00:18:40.690522 systemd[1]: sshd@13-143.198.146.98:22-147.75.109.163:49372.service: Deactivated successfully. Sep 6 00:18:40.692668 systemd-logind[1285]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:18:40.693677 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:18:40.696376 systemd-logind[1285]: Removed session 14. Sep 6 00:18:40.744374 sshd[3528]: Accepted publickey for core from 147.75.109.163 port 49378 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:40.746529 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:40.753825 systemd[1]: Started session-15.scope. Sep 6 00:18:40.754558 systemd-logind[1285]: New session 15 of user core. Sep 6 00:18:41.117995 sshd[3528]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:41.119465 systemd[1]: Started sshd@15-143.198.146.98:22-147.75.109.163:49392.service. Sep 6 00:18:41.125669 systemd-logind[1285]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:18:41.125746 systemd[1]: sshd@14-143.198.146.98:22-147.75.109.163:49378.service: Deactivated successfully. Sep 6 00:18:41.127033 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:18:41.127655 systemd-logind[1285]: Removed session 15. 
Sep 6 00:18:41.182325 sshd[3539]: Accepted publickey for core from 147.75.109.163 port 49392 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:41.184134 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:41.190472 systemd-logind[1285]: New session 16 of user core. Sep 6 00:18:41.191008 systemd[1]: Started session-16.scope. Sep 6 00:18:42.609177 sshd[3539]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:42.614595 systemd[1]: Started sshd@16-143.198.146.98:22-147.75.109.163:49402.service. Sep 6 00:18:42.621444 systemd-logind[1285]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:18:42.624206 systemd[1]: sshd@15-143.198.146.98:22-147.75.109.163:49392.service: Deactivated successfully. Sep 6 00:18:42.625159 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:18:42.625726 systemd-logind[1285]: Removed session 16. Sep 6 00:18:42.677352 sshd[3555]: Accepted publickey for core from 147.75.109.163 port 49402 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:42.679427 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:42.684407 systemd-logind[1285]: New session 17 of user core. Sep 6 00:18:42.685001 systemd[1]: Started session-17.scope. Sep 6 00:18:43.030051 sshd[3555]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:43.035040 systemd[1]: Started sshd@17-143.198.146.98:22-147.75.109.163:49410.service. Sep 6 00:18:43.042919 systemd[1]: sshd@16-143.198.146.98:22-147.75.109.163:49402.service: Deactivated successfully. Sep 6 00:18:43.044682 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:18:43.044747 systemd-logind[1285]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:18:43.048258 systemd-logind[1285]: Removed session 17. 
Sep 6 00:18:43.098691 sshd[3567]: Accepted publickey for core from 147.75.109.163 port 49410 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:43.100786 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:43.105778 systemd-logind[1285]: New session 18 of user core. Sep 6 00:18:43.106710 systemd[1]: Started session-18.scope. Sep 6 00:18:43.250082 sshd[3567]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:43.254474 systemd-logind[1285]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:18:43.255106 systemd[1]: sshd@17-143.198.146.98:22-147.75.109.163:49410.service: Deactivated successfully. Sep 6 00:18:43.256005 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:18:43.257428 systemd-logind[1285]: Removed session 18. Sep 6 00:18:43.402748 kubelet[2065]: E0906 00:18:43.402135 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:48.256117 systemd[1]: Started sshd@18-143.198.146.98:22-147.75.109.163:49414.service. Sep 6 00:18:48.307459 sshd[3582]: Accepted publickey for core from 147.75.109.163 port 49414 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:48.309714 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:48.316249 systemd[1]: Started session-19.scope. Sep 6 00:18:48.316711 systemd-logind[1285]: New session 19 of user core. Sep 6 00:18:48.454868 sshd[3582]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:48.458008 systemd[1]: sshd@18-143.198.146.98:22-147.75.109.163:49414.service: Deactivated successfully. Sep 6 00:18:48.459279 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:18:48.459722 systemd-logind[1285]: Session 19 logged out. Waiting for processes to exit. 
Sep 6 00:18:48.461265 systemd-logind[1285]: Removed session 19. Sep 6 00:18:53.459081 systemd[1]: Started sshd@19-143.198.146.98:22-147.75.109.163:34674.service. Sep 6 00:18:53.511501 sshd[3598]: Accepted publickey for core from 147.75.109.163 port 34674 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:53.513776 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:53.519079 systemd-logind[1285]: New session 20 of user core. Sep 6 00:18:53.519355 systemd[1]: Started session-20.scope. Sep 6 00:18:53.649667 sshd[3598]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:53.652755 systemd-logind[1285]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:18:53.653032 systemd[1]: sshd@19-143.198.146.98:22-147.75.109.163:34674.service: Deactivated successfully. Sep 6 00:18:53.654008 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:18:53.654493 systemd-logind[1285]: Removed session 20. Sep 6 00:18:56.401667 kubelet[2065]: E0906 00:18:56.401620 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:56.402210 kubelet[2065]: E0906 00:18:56.401707 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:57.401719 kubelet[2065]: E0906 00:18:57.401670 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:58.654999 systemd[1]: Started sshd@20-143.198.146.98:22-147.75.109.163:34682.service. 
Sep 6 00:18:58.707799 sshd[3611]: Accepted publickey for core from 147.75.109.163 port 34682 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:58.710334 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:58.715289 systemd-logind[1285]: New session 21 of user core. Sep 6 00:18:58.716928 systemd[1]: Started session-21.scope. Sep 6 00:18:58.855789 sshd[3611]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:58.858786 systemd-logind[1285]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:18:58.859081 systemd[1]: sshd@20-143.198.146.98:22-147.75.109.163:34682.service: Deactivated successfully. Sep 6 00:18:58.860018 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:18:58.860903 systemd-logind[1285]: Removed session 21. Sep 6 00:19:03.859566 systemd[1]: Started sshd@21-143.198.146.98:22-147.75.109.163:40958.service. Sep 6 00:19:03.909019 sshd[3624]: Accepted publickey for core from 147.75.109.163 port 40958 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:03.911280 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:03.917018 systemd[1]: Started session-22.scope. Sep 6 00:19:03.917425 systemd-logind[1285]: New session 22 of user core. Sep 6 00:19:04.041935 sshd[3624]: pam_unix(sshd:session): session closed for user core Sep 6 00:19:04.045282 systemd[1]: sshd@21-143.198.146.98:22-147.75.109.163:40958.service: Deactivated successfully. Sep 6 00:19:04.046262 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:19:04.047426 systemd-logind[1285]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:19:04.048931 systemd-logind[1285]: Removed session 22. 
Sep 6 00:19:05.403263 kubelet[2065]: E0906 00:19:05.403219 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:06.400816 kubelet[2065]: E0906 00:19:06.400768 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:07.401586 kubelet[2065]: E0906 00:19:07.401541 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:09.046608 systemd[1]: Started sshd@22-143.198.146.98:22-147.75.109.163:40974.service. Sep 6 00:19:09.096151 sshd[3639]: Accepted publickey for core from 147.75.109.163 port 40974 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:09.099288 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:09.105212 systemd-logind[1285]: New session 23 of user core. Sep 6 00:19:09.106292 systemd[1]: Started session-23.scope. Sep 6 00:19:09.254790 sshd[3639]: pam_unix(sshd:session): session closed for user core Sep 6 00:19:09.258748 systemd[1]: sshd@22-143.198.146.98:22-147.75.109.163:40974.service: Deactivated successfully. Sep 6 00:19:09.261589 systemd-logind[1285]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:19:09.263236 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:19:09.265270 systemd-logind[1285]: Removed session 23. 
Sep 6 00:19:10.401267 kubelet[2065]: E0906 00:19:10.401219 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:14.261783 systemd[1]: Started sshd@23-143.198.146.98:22-147.75.109.163:54418.service. Sep 6 00:19:14.317596 sshd[3652]: Accepted publickey for core from 147.75.109.163 port 54418 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:14.320305 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:14.325975 systemd-logind[1285]: New session 24 of user core. Sep 6 00:19:14.326038 systemd[1]: Started session-24.scope. Sep 6 00:19:14.466117 sshd[3652]: pam_unix(sshd:session): session closed for user core Sep 6 00:19:14.472429 systemd[1]: Started sshd@24-143.198.146.98:22-147.75.109.163:54430.service. Sep 6 00:19:14.475501 systemd-logind[1285]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:19:14.475949 systemd[1]: sshd@23-143.198.146.98:22-147.75.109.163:54418.service: Deactivated successfully. Sep 6 00:19:14.477827 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:19:14.478950 systemd-logind[1285]: Removed session 24. Sep 6 00:19:14.529891 sshd[3663]: Accepted publickey for core from 147.75.109.163 port 54430 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:14.531870 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:14.537481 systemd-logind[1285]: New session 25 of user core. Sep 6 00:19:14.537693 systemd[1]: Started session-25.scope. Sep 6 00:19:16.062808 systemd[1]: run-containerd-runc-k8s.io-3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763-runc.k69zbb.mount: Deactivated successfully. 
Sep 6 00:19:16.084858 env[1298]: time="2025-09-06T00:19:16.084800393Z" level=info msg="StopContainer for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" with timeout 30 (s)" Sep 6 00:19:16.085773 env[1298]: time="2025-09-06T00:19:16.085729513Z" level=info msg="Stop container \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" with signal terminated" Sep 6 00:19:16.099752 env[1298]: time="2025-09-06T00:19:16.099678634Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:19:16.104546 env[1298]: time="2025-09-06T00:19:16.104500445Z" level=info msg="StopContainer for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" with timeout 2 (s)" Sep 6 00:19:16.105015 env[1298]: time="2025-09-06T00:19:16.104982729Z" level=info msg="Stop container \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" with signal terminated" Sep 6 00:19:16.119044 systemd-networkd[1055]: lxc_health: Link DOWN Sep 6 00:19:16.119054 systemd-networkd[1055]: lxc_health: Lost carrier Sep 6 00:19:16.146990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd-rootfs.mount: Deactivated successfully. 
Sep 6 00:19:16.153001 env[1298]: time="2025-09-06T00:19:16.152924708Z" level=info msg="shim disconnected" id=f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd Sep 6 00:19:16.153361 env[1298]: time="2025-09-06T00:19:16.153336769Z" level=warning msg="cleaning up after shim disconnected" id=f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd namespace=k8s.io Sep 6 00:19:16.153610 env[1298]: time="2025-09-06T00:19:16.153591364Z" level=info msg="cleaning up dead shim" Sep 6 00:19:16.166627 env[1298]: time="2025-09-06T00:19:16.166570545Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3724 runtime=io.containerd.runc.v2\n" Sep 6 00:19:16.168532 env[1298]: time="2025-09-06T00:19:16.168490302Z" level=info msg="StopContainer for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" returns successfully" Sep 6 00:19:16.169414 env[1298]: time="2025-09-06T00:19:16.169351186Z" level=info msg="StopPodSandbox for \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\"" Sep 6 00:19:16.169696 env[1298]: time="2025-09-06T00:19:16.169665413Z" level=info msg="Container to stop \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:16.172255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988-shm.mount: Deactivated successfully. 
Sep 6 00:19:16.187189 env[1298]: time="2025-09-06T00:19:16.187033744Z" level=info msg="shim disconnected" id=3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763 Sep 6 00:19:16.187605 env[1298]: time="2025-09-06T00:19:16.187570216Z" level=warning msg="cleaning up after shim disconnected" id=3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763 namespace=k8s.io Sep 6 00:19:16.187605 env[1298]: time="2025-09-06T00:19:16.187595904Z" level=info msg="cleaning up dead shim" Sep 6 00:19:16.205215 env[1298]: time="2025-09-06T00:19:16.205170211Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 runtime=io.containerd.runc.v2\n" Sep 6 00:19:16.206773 env[1298]: time="2025-09-06T00:19:16.206737821Z" level=info msg="StopContainer for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" returns successfully" Sep 6 00:19:16.207571 env[1298]: time="2025-09-06T00:19:16.207549141Z" level=info msg="StopPodSandbox for \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\"" Sep 6 00:19:16.207727 env[1298]: time="2025-09-06T00:19:16.207706886Z" level=info msg="Container to stop \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:16.207802 env[1298]: time="2025-09-06T00:19:16.207785868Z" level=info msg="Container to stop \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:16.207867 env[1298]: time="2025-09-06T00:19:16.207851897Z" level=info msg="Container to stop \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:16.207938 env[1298]: time="2025-09-06T00:19:16.207922815Z" level=info msg="Container to stop 
\"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:16.208003 env[1298]: time="2025-09-06T00:19:16.207986312Z" level=info msg="Container to stop \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:16.239698 env[1298]: time="2025-09-06T00:19:16.239356955Z" level=info msg="shim disconnected" id=c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988 Sep 6 00:19:16.239954 env[1298]: time="2025-09-06T00:19:16.239931377Z" level=warning msg="cleaning up after shim disconnected" id=c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988 namespace=k8s.io Sep 6 00:19:16.240025 env[1298]: time="2025-09-06T00:19:16.240010371Z" level=info msg="cleaning up dead shim" Sep 6 00:19:16.256629 env[1298]: time="2025-09-06T00:19:16.256577133Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" Sep 6 00:19:16.257406 env[1298]: time="2025-09-06T00:19:16.257357984Z" level=info msg="TearDown network for sandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" successfully" Sep 6 00:19:16.257598 env[1298]: time="2025-09-06T00:19:16.257579461Z" level=info msg="StopPodSandbox for \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" returns successfully" Sep 6 00:19:16.260522 env[1298]: time="2025-09-06T00:19:16.259751824Z" level=info msg="shim disconnected" id=f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42 Sep 6 00:19:16.260522 env[1298]: time="2025-09-06T00:19:16.259789349Z" level=warning msg="cleaning up after shim disconnected" id=f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42 namespace=k8s.io Sep 6 00:19:16.260522 env[1298]: time="2025-09-06T00:19:16.259798848Z" level=info msg="cleaning 
up dead shim" Sep 6 00:19:16.272560 env[1298]: time="2025-09-06T00:19:16.272512161Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3816 runtime=io.containerd.runc.v2\n" Sep 6 00:19:16.273067 env[1298]: time="2025-09-06T00:19:16.273035833Z" level=info msg="TearDown network for sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" successfully" Sep 6 00:19:16.273199 env[1298]: time="2025-09-06T00:19:16.273179627Z" level=info msg="StopPodSandbox for \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" returns successfully" Sep 6 00:19:16.401654 kubelet[2065]: I0906 00:19:16.401532 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-net\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.401654 kubelet[2065]: I0906 00:19:16.401581 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-hostproc\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.401654 kubelet[2065]: I0906 00:19:16.401599 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-kernel\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.401654 kubelet[2065]: I0906 00:19:16.401624 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-run\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: 
\"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.402840 kubelet[2065]: I0906 00:19:16.402464 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xwwr\" (UniqueName: \"kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-kube-api-access-2xwwr\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.402840 kubelet[2065]: I0906 00:19:16.402495 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19ed708a-c9b2-4304-930f-c5241cedba3e-clustermesh-secrets\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.402840 kubelet[2065]: I0906 00:19:16.402512 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-cilium-config-path\") pod \"31a90b57-68ae-4e33-86a7-0bb3993ea9ce\" (UID: \"31a90b57-68ae-4e33-86a7-0bb3993ea9ce\") " Sep 6 00:19:16.402840 kubelet[2065]: I0906 00:19:16.402530 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-config-path\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.402840 kubelet[2065]: I0906 00:19:16.402583 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-xtables-lock\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.402840 kubelet[2065]: I0906 00:19:16.402606 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cni-path\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.403059 kubelet[2065]: I0906 00:19:16.402624 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpddc\" (UniqueName: \"kubernetes.io/projected/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-kube-api-access-dpddc\") pod \"31a90b57-68ae-4e33-86a7-0bb3993ea9ce\" (UID: \"31a90b57-68ae-4e33-86a7-0bb3993ea9ce\") " Sep 6 00:19:16.403059 kubelet[2065]: I0906 00:19:16.402640 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-bpf-maps\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.403059 kubelet[2065]: I0906 00:19:16.402654 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-etc-cni-netd\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.403059 kubelet[2065]: I0906 00:19:16.402670 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-lib-modules\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.403059 kubelet[2065]: I0906 00:19:16.402690 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-hubble-tls\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.403059 kubelet[2065]: I0906 00:19:16.402707 2065 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-cgroup\") pod \"19ed708a-c9b2-4304-930f-c5241cedba3e\" (UID: \"19ed708a-c9b2-4304-930f-c5241cedba3e\") " Sep 6 00:19:16.411356 kubelet[2065]: I0906 00:19:16.405772 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.411356 kubelet[2065]: I0906 00:19:16.410791 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.411356 kubelet[2065]: I0906 00:19:16.410812 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cni-path" (OuterVolumeSpecName: "cni-path") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.411356 kubelet[2065]: I0906 00:19:16.411090 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:19:16.411356 kubelet[2065]: I0906 00:19:16.411232 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.412076 kubelet[2065]: I0906 00:19:16.411251 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-hostproc" (OuterVolumeSpecName: "hostproc") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.412076 kubelet[2065]: I0906 00:19:16.411266 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.412076 kubelet[2065]: I0906 00:19:16.411283 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.415219 kubelet[2065]: I0906 00:19:16.415176 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-kube-api-access-dpddc" (OuterVolumeSpecName: "kube-api-access-dpddc") pod "31a90b57-68ae-4e33-86a7-0bb3993ea9ce" (UID: "31a90b57-68ae-4e33-86a7-0bb3993ea9ce"). InnerVolumeSpecName "kube-api-access-dpddc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:19:16.415417 kubelet[2065]: I0906 00:19:16.415230 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-kube-api-access-2xwwr" (OuterVolumeSpecName: "kube-api-access-2xwwr") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "kube-api-access-2xwwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:19:16.415527 kubelet[2065]: I0906 00:19:16.415511 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.415611 kubelet[2065]: I0906 00:19:16.415598 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.415690 kubelet[2065]: I0906 00:19:16.415678 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:16.418010 kubelet[2065]: I0906 00:19:16.417969 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ed708a-c9b2-4304-930f-c5241cedba3e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:19:16.418773 kubelet[2065]: I0906 00:19:16.418746 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19ed708a-c9b2-4304-930f-c5241cedba3e" (UID: "19ed708a-c9b2-4304-930f-c5241cedba3e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:19:16.420190 kubelet[2065]: I0906 00:19:16.420133 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31a90b57-68ae-4e33-86a7-0bb3993ea9ce" (UID: "31a90b57-68ae-4e33-86a7-0bb3993ea9ce"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:19:16.503253 kubelet[2065]: I0906 00:19:16.503206 2065 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-lib-modules\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503536 kubelet[2065]: I0906 00:19:16.503519 2065 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-hubble-tls\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503623 kubelet[2065]: I0906 00:19:16.503605 2065 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-bpf-maps\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503702 kubelet[2065]: I0906 00:19:16.503690 2065 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-etc-cni-netd\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503779 kubelet[2065]: I0906 00:19:16.503759 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-cgroup\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503845 kubelet[2065]: I0906 00:19:16.503834 2065 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-hostproc\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503919 kubelet[2065]: I0906 00:19:16.503904 2065 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-kernel\") on node 
\"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.503989 kubelet[2065]: I0906 00:19:16.503973 2065 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-host-proc-sys-net\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504071 kubelet[2065]: I0906 00:19:16.504055 2065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xwwr\" (UniqueName: \"kubernetes.io/projected/19ed708a-c9b2-4304-930f-c5241cedba3e-kube-api-access-2xwwr\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504135 kubelet[2065]: I0906 00:19:16.504124 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-run\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504199 kubelet[2065]: I0906 00:19:16.504187 2065 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19ed708a-c9b2-4304-930f-c5241cedba3e-clustermesh-secrets\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504306 kubelet[2065]: I0906 00:19:16.504292 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-cilium-config-path\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504481 kubelet[2065]: I0906 00:19:16.504457 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19ed708a-c9b2-4304-930f-c5241cedba3e-cilium-config-path\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504624 kubelet[2065]: I0906 00:19:16.504603 2065 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-xtables-lock\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504712 kubelet[2065]: I0906 00:19:16.504697 2065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpddc\" (UniqueName: \"kubernetes.io/projected/31a90b57-68ae-4e33-86a7-0bb3993ea9ce-kube-api-access-dpddc\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.504817 kubelet[2065]: I0906 00:19:16.504796 2065 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19ed708a-c9b2-4304-930f-c5241cedba3e-cni-path\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:16.525876 kubelet[2065]: E0906 00:19:16.523081 2065 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:19:16.691364 kubelet[2065]: I0906 00:19:16.691254 2065 scope.go:117] "RemoveContainer" containerID="f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd" Sep 6 00:19:16.697728 env[1298]: time="2025-09-06T00:19:16.697674895Z" level=info msg="RemoveContainer for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\"" Sep 6 00:19:16.704971 env[1298]: time="2025-09-06T00:19:16.704927242Z" level=info msg="RemoveContainer for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" returns successfully" Sep 6 00:19:16.718535 kubelet[2065]: I0906 00:19:16.718490 2065 scope.go:117] "RemoveContainer" containerID="f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd" Sep 6 00:19:16.721715 env[1298]: time="2025-09-06T00:19:16.720760707Z" level=error msg="ContainerStatus for \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\": not found" Sep 6 00:19:16.724310 kubelet[2065]: E0906 00:19:16.724259 2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\": not found" containerID="f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd" Sep 6 00:19:16.725428 kubelet[2065]: I0906 00:19:16.725293 2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd"} err="failed to get container status \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\": rpc error: code = NotFound desc = an error occurred when try to find container \"f642a977b56ec280f9a91456ff9b4dbe2c5b91f93f33a03c05bbd7d612f87ebd\": not found" Sep 6 00:19:16.725428 kubelet[2065]: I0906 00:19:16.725432 2065 scope.go:117] "RemoveContainer" containerID="3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763" Sep 6 00:19:16.729278 env[1298]: time="2025-09-06T00:19:16.728928204Z" level=info msg="RemoveContainer for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\"" Sep 6 00:19:16.732193 env[1298]: time="2025-09-06T00:19:16.732154403Z" level=info msg="RemoveContainer for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" returns successfully" Sep 6 00:19:16.733544 kubelet[2065]: I0906 00:19:16.733519 2065 scope.go:117] "RemoveContainer" containerID="85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e" Sep 6 00:19:16.734868 env[1298]: time="2025-09-06T00:19:16.734831275Z" level=info msg="RemoveContainer for \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\"" Sep 6 00:19:16.737217 env[1298]: time="2025-09-06T00:19:16.737171124Z" level=info msg="RemoveContainer for 
\"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\" returns successfully" Sep 6 00:19:16.737482 kubelet[2065]: I0906 00:19:16.737463 2065 scope.go:117] "RemoveContainer" containerID="e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f" Sep 6 00:19:16.738728 env[1298]: time="2025-09-06T00:19:16.738697528Z" level=info msg="RemoveContainer for \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\"" Sep 6 00:19:16.740944 env[1298]: time="2025-09-06T00:19:16.740896233Z" level=info msg="RemoveContainer for \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\" returns successfully" Sep 6 00:19:16.741178 kubelet[2065]: I0906 00:19:16.741159 2065 scope.go:117] "RemoveContainer" containerID="2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a" Sep 6 00:19:16.742352 env[1298]: time="2025-09-06T00:19:16.742317129Z" level=info msg="RemoveContainer for \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\"" Sep 6 00:19:16.744574 env[1298]: time="2025-09-06T00:19:16.744484039Z" level=info msg="RemoveContainer for \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\" returns successfully" Sep 6 00:19:16.744792 kubelet[2065]: I0906 00:19:16.744772 2065 scope.go:117] "RemoveContainer" containerID="d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0" Sep 6 00:19:16.745915 env[1298]: time="2025-09-06T00:19:16.745889973Z" level=info msg="RemoveContainer for \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\"" Sep 6 00:19:16.749047 env[1298]: time="2025-09-06T00:19:16.748976347Z" level=info msg="RemoveContainer for \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\" returns successfully" Sep 6 00:19:16.749410 kubelet[2065]: I0906 00:19:16.749357 2065 scope.go:117] "RemoveContainer" containerID="3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763" Sep 6 00:19:16.750101 env[1298]: time="2025-09-06T00:19:16.750013614Z" level=error 
msg="ContainerStatus for \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\": not found" Sep 6 00:19:16.750399 kubelet[2065]: E0906 00:19:16.750349 2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\": not found" containerID="3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763" Sep 6 00:19:16.750585 kubelet[2065]: I0906 00:19:16.750547 2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763"} err="failed to get container status \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763\": not found" Sep 6 00:19:16.750713 kubelet[2065]: I0906 00:19:16.750693 2065 scope.go:117] "RemoveContainer" containerID="85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e" Sep 6 00:19:16.751200 env[1298]: time="2025-09-06T00:19:16.751065296Z" level=error msg="ContainerStatus for \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\": not found" Sep 6 00:19:16.751456 kubelet[2065]: E0906 00:19:16.751426 2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\": not found" 
containerID="85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e" Sep 6 00:19:16.751593 kubelet[2065]: I0906 00:19:16.751563 2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e"} err="failed to get container status \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\": rpc error: code = NotFound desc = an error occurred when try to find container \"85ad73af8792d04ee285d31c0846bf6d3b3049df0ab0c732efdff1f5f697e43e\": not found" Sep 6 00:19:16.751683 kubelet[2065]: I0906 00:19:16.751668 2065 scope.go:117] "RemoveContainer" containerID="e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f" Sep 6 00:19:16.752163 env[1298]: time="2025-09-06T00:19:16.752047609Z" level=error msg="ContainerStatus for \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\": not found" Sep 6 00:19:16.752450 kubelet[2065]: E0906 00:19:16.752426 2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\": not found" containerID="e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f" Sep 6 00:19:16.752599 kubelet[2065]: I0906 00:19:16.752568 2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f"} err="failed to get container status \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e02f7c04061317650b31a5f739fc3e1ff83bceb1b4edcb03c73f69aa7d39c86f\": not found" Sep 6 00:19:16.752702 
kubelet[2065]: I0906 00:19:16.752684 2065 scope.go:117] "RemoveContainer" containerID="2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a" Sep 6 00:19:16.753128 env[1298]: time="2025-09-06T00:19:16.753011716Z" level=error msg="ContainerStatus for \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\": not found" Sep 6 00:19:16.753325 kubelet[2065]: E0906 00:19:16.753299 2065 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\": not found" containerID="2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a" Sep 6 00:19:16.753490 kubelet[2065]: I0906 00:19:16.753460 2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a"} err="failed to get container status \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2901b6746b27ffd8a64ad63a69fa934c48bcd96b79896da7bc8a9ed27ec1db3a\": not found" Sep 6 00:19:16.753666 kubelet[2065]: I0906 00:19:16.753642 2065 scope.go:117] "RemoveContainer" containerID="d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0" Sep 6 00:19:16.754014 env[1298]: time="2025-09-06T00:19:16.753956555Z" level=error msg="ContainerStatus for \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\": not found" Sep 6 00:19:16.754217 kubelet[2065]: E0906 00:19:16.754192 2065 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\": not found" containerID="d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0" Sep 6 00:19:16.754356 kubelet[2065]: I0906 00:19:16.754323 2065 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0"} err="failed to get container status \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d75e5ed1c3be2b253af8b24611d39d8cf1fe3ac90eb1f50cc85e91250f3f84d0\": not found" Sep 6 00:19:17.055185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fbe5114581d533414c2e369cca27e8001b44a82c92489fe6684699eeabd9763-rootfs.mount: Deactivated successfully. Sep 6 00:19:17.055346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988-rootfs.mount: Deactivated successfully. Sep 6 00:19:17.055504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42-rootfs.mount: Deactivated successfully. Sep 6 00:19:17.055642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42-shm.mount: Deactivated successfully. Sep 6 00:19:17.055741 systemd[1]: var-lib-kubelet-pods-19ed708a\x2dc9b2\x2d4304\x2d930f\x2dc5241cedba3e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:19:17.055837 systemd[1]: var-lib-kubelet-pods-19ed708a\x2dc9b2\x2d4304\x2d930f\x2dc5241cedba3e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:19:17.055925 systemd[1]: var-lib-kubelet-pods-31a90b57\x2d68ae\x2d4e33\x2d86a7\x2d0bb3993ea9ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpddc.mount: Deactivated successfully. Sep 6 00:19:17.056022 systemd[1]: var-lib-kubelet-pods-19ed708a\x2dc9b2\x2d4304\x2d930f\x2dc5241cedba3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xwwr.mount: Deactivated successfully. Sep 6 00:19:17.403746 kubelet[2065]: I0906 00:19:17.403642 2065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" path="/var/lib/kubelet/pods/19ed708a-c9b2-4304-930f-c5241cedba3e/volumes" Sep 6 00:19:17.405475 kubelet[2065]: I0906 00:19:17.405440 2065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a90b57-68ae-4e33-86a7-0bb3993ea9ce" path="/var/lib/kubelet/pods/31a90b57-68ae-4e33-86a7-0bb3993ea9ce/volumes" Sep 6 00:19:17.986341 sshd[3663]: pam_unix(sshd:session): session closed for user core Sep 6 00:19:17.990985 systemd[1]: Started sshd@25-143.198.146.98:22-147.75.109.163:54446.service. Sep 6 00:19:17.991635 systemd[1]: sshd@24-143.198.146.98:22-147.75.109.163:54430.service: Deactivated successfully. Sep 6 00:19:17.994335 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:19:17.994663 systemd-logind[1285]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:19:17.996920 systemd-logind[1285]: Removed session 25. Sep 6 00:19:18.049806 sshd[3834]: Accepted publickey for core from 147.75.109.163 port 54446 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:18.051969 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:18.058625 systemd[1]: Started session-26.scope. Sep 6 00:19:18.059505 systemd-logind[1285]: New session 26 of user core. 
Sep 6 00:19:18.985732 sshd[3834]: pam_unix(sshd:session): session closed for user core Sep 6 00:19:18.990744 systemd[1]: Started sshd@26-143.198.146.98:22-147.75.109.163:54456.service. Sep 6 00:19:18.999719 systemd[1]: sshd@25-143.198.146.98:22-147.75.109.163:54446.service: Deactivated successfully. Sep 6 00:19:19.001402 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:19:19.001527 systemd-logind[1285]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:19:19.005485 systemd-logind[1285]: Removed session 26. Sep 6 00:19:19.025655 kubelet[2065]: E0906 00:19:19.023231 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" containerName="mount-cgroup" Sep 6 00:19:19.025655 kubelet[2065]: E0906 00:19:19.023276 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" containerName="clean-cilium-state" Sep 6 00:19:19.025655 kubelet[2065]: E0906 00:19:19.023285 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" containerName="cilium-agent" Sep 6 00:19:19.025655 kubelet[2065]: E0906 00:19:19.023294 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" containerName="apply-sysctl-overwrites" Sep 6 00:19:19.025655 kubelet[2065]: E0906 00:19:19.023302 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" containerName="mount-bpf-fs" Sep 6 00:19:19.025655 kubelet[2065]: E0906 00:19:19.023308 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31a90b57-68ae-4e33-86a7-0bb3993ea9ce" containerName="cilium-operator" Sep 6 00:19:19.025655 kubelet[2065]: I0906 00:19:19.023366 2065 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ed708a-c9b2-4304-930f-c5241cedba3e" containerName="cilium-agent" Sep 6 00:19:19.025655 kubelet[2065]: I0906 
00:19:19.023382 2065 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a90b57-68ae-4e33-86a7-0bb3993ea9ce" containerName="cilium-operator" Sep 6 00:19:19.070483 sshd[3844]: Accepted publickey for core from 147.75.109.163 port 54456 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:19.073674 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:19.098504 systemd-logind[1285]: New session 27 of user core. Sep 6 00:19:19.098642 systemd[1]: Started session-27.scope. Sep 6 00:19:19.130416 kubelet[2065]: I0906 00:19:19.130343 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hostproc\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130648 kubelet[2065]: I0906 00:19:19.130427 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hubble-tls\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130648 kubelet[2065]: I0906 00:19:19.130538 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-428q6\" (UniqueName: \"kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-kube-api-access-428q6\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130648 kubelet[2065]: I0906 00:19:19.130581 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-etc-cni-netd\") pod \"cilium-zdwgz\" (UID: 
\"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130648 kubelet[2065]: I0906 00:19:19.130614 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-run\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130648 kubelet[2065]: I0906 00:19:19.130642 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-xtables-lock\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130910 kubelet[2065]: I0906 00:19:19.130670 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-net\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130910 kubelet[2065]: I0906 00:19:19.130697 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-kernel\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130910 kubelet[2065]: I0906 00:19:19.130721 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-bpf-maps\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130910 kubelet[2065]: I0906 00:19:19.130746 
2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-lib-modules\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130910 kubelet[2065]: I0906 00:19:19.130768 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-clustermesh-secrets\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.130910 kubelet[2065]: I0906 00:19:19.130791 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-config-path\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.131188 kubelet[2065]: I0906 00:19:19.130813 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-cgroup\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.131188 kubelet[2065]: I0906 00:19:19.130833 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cni-path\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.131188 kubelet[2065]: I0906 00:19:19.130965 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-ipsec-secrets\") pod \"cilium-zdwgz\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " pod="kube-system/cilium-zdwgz" Sep 6 00:19:19.342662 sshd[3844]: pam_unix(sshd:session): session closed for user core Sep 6 00:19:19.347838 systemd[1]: Started sshd@27-143.198.146.98:22-147.75.109.163:54466.service. Sep 6 00:19:19.356674 systemd[1]: sshd@26-143.198.146.98:22-147.75.109.163:54456.service: Deactivated successfully. Sep 6 00:19:19.357570 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 00:19:19.372910 systemd-logind[1285]: Session 27 logged out. Waiting for processes to exit. Sep 6 00:19:19.375603 kubelet[2065]: E0906 00:19:19.374368 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:19.376968 env[1298]: time="2025-09-06T00:19:19.375137368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdwgz,Uid:0b5484d0-00e4-4ce7-a5ef-0abaefa8655c,Namespace:kube-system,Attempt:0,}" Sep 6 00:19:19.377686 systemd-logind[1285]: Removed session 27. Sep 6 00:19:19.398877 env[1298]: time="2025-09-06T00:19:19.398310400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:19:19.398877 env[1298]: time="2025-09-06T00:19:19.398419069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:19:19.398877 env[1298]: time="2025-09-06T00:19:19.398445438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:19:19.399075 env[1298]: time="2025-09-06T00:19:19.398935355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da pid=3873 runtime=io.containerd.runc.v2 Sep 6 00:19:19.438336 sshd[3861]: Accepted publickey for core from 147.75.109.163 port 54466 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:19:19.442139 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:19:19.452783 systemd[1]: Started session-28.scope. Sep 6 00:19:19.454110 systemd-logind[1285]: New session 28 of user core. Sep 6 00:19:19.487069 env[1298]: time="2025-09-06T00:19:19.487018573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdwgz,Uid:0b5484d0-00e4-4ce7-a5ef-0abaefa8655c,Namespace:kube-system,Attempt:0,} returns sandbox id \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\"" Sep 6 00:19:19.490321 kubelet[2065]: E0906 00:19:19.489982 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:19.493422 env[1298]: time="2025-09-06T00:19:19.493359703Z" level=info msg="CreateContainer within sandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:19:19.503072 env[1298]: time="2025-09-06T00:19:19.503005199Z" level=info msg="CreateContainer within sandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9\"" Sep 6 00:19:19.504330 env[1298]: time="2025-09-06T00:19:19.504236620Z" level=info msg="StartContainer for 
\"60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9\"" Sep 6 00:19:19.584421 env[1298]: time="2025-09-06T00:19:19.580633652Z" level=info msg="StartContainer for \"60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9\" returns successfully" Sep 6 00:19:19.641055 env[1298]: time="2025-09-06T00:19:19.640186165Z" level=info msg="shim disconnected" id=60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9 Sep 6 00:19:19.641055 env[1298]: time="2025-09-06T00:19:19.640305627Z" level=warning msg="cleaning up after shim disconnected" id=60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9 namespace=k8s.io Sep 6 00:19:19.641055 env[1298]: time="2025-09-06T00:19:19.640323739Z" level=info msg="cleaning up dead shim" Sep 6 00:19:19.655276 env[1298]: time="2025-09-06T00:19:19.655220742Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3966 runtime=io.containerd.runc.v2\n" Sep 6 00:19:19.717527 env[1298]: time="2025-09-06T00:19:19.717465012Z" level=info msg="StopPodSandbox for \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\"" Sep 6 00:19:19.717852 env[1298]: time="2025-09-06T00:19:19.717824179Z" level=info msg="Container to stop \"60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:19:19.761094 env[1298]: time="2025-09-06T00:19:19.761023910Z" level=info msg="shim disconnected" id=99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da Sep 6 00:19:19.762039 env[1298]: time="2025-09-06T00:19:19.761885959Z" level=warning msg="cleaning up after shim disconnected" id=99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da namespace=k8s.io Sep 6 00:19:19.762182 env[1298]: time="2025-09-06T00:19:19.762161865Z" level=info msg="cleaning up dead shim" Sep 6 00:19:19.774301 env[1298]: time="2025-09-06T00:19:19.774248079Z" 
level=warning msg="cleanup warnings time=\"2025-09-06T00:19:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3999 runtime=io.containerd.runc.v2\n" Sep 6 00:19:19.775096 env[1298]: time="2025-09-06T00:19:19.775045966Z" level=info msg="TearDown network for sandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" successfully" Sep 6 00:19:19.775254 env[1298]: time="2025-09-06T00:19:19.775230527Z" level=info msg="StopPodSandbox for \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" returns successfully" Sep 6 00:19:19.938699 kubelet[2065]: I0906 00:19:19.938580 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-bpf-maps\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.938974 kubelet[2065]: I0906 00:19:19.938946 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cni-path\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.939138 kubelet[2065]: I0906 00:19:19.939111 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-cgroup\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.939279 kubelet[2065]: I0906 00:19:19.939263 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-net\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.939488 kubelet[2065]: I0906 00:19:19.939475 
2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-run\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.939644 kubelet[2065]: I0906 00:19:19.939632 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-ipsec-secrets\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.940061 kubelet[2065]: I0906 00:19:19.940042 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-kernel\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.940196 kubelet[2065]: I0906 00:19:19.940180 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hostproc\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.940328 kubelet[2065]: I0906 00:19:19.940312 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-config-path\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.940934 kubelet[2065]: I0906 00:19:19.940916 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-xtables-lock\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" 
(UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.941173 kubelet[2065]: I0906 00:19:19.941156 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-lib-modules\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.941261 kubelet[2065]: I0906 00:19:19.941248 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hubble-tls\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.941356 kubelet[2065]: I0906 00:19:19.941342 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-428q6\" (UniqueName: \"kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-kube-api-access-428q6\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.943307 kubelet[2065]: I0906 00:19:19.943268 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-etc-cni-netd\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.943421 kubelet[2065]: I0906 00:19:19.943323 2065 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-clustermesh-secrets\") pod \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\" (UID: \"0b5484d0-00e4-4ce7-a5ef-0abaefa8655c\") " Sep 6 00:19:19.944213 kubelet[2065]: I0906 00:19:19.938741 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945228 kubelet[2065]: I0906 00:19:19.939063 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945325 kubelet[2065]: I0906 00:19:19.939226 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945434 kubelet[2065]: I0906 00:19:19.939434 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945506 kubelet[2065]: I0906 00:19:19.939589 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945609 kubelet[2065]: I0906 00:19:19.941103 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945683 kubelet[2065]: I0906 00:19:19.941119 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945763 kubelet[2065]: I0906 00:19:19.941131 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.945841 kubelet[2065]: I0906 00:19:19.943208 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:19:19.945940 kubelet[2065]: I0906 00:19:19.944173 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.946023 kubelet[2065]: I0906 00:19:19.944193 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:19:19.946205 kubelet[2065]: I0906 00:19:19.946169 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:19:19.948091 kubelet[2065]: I0906 00:19:19.948051 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:19:19.950660 kubelet[2065]: I0906 00:19:19.950629 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-kube-api-access-428q6" (OuterVolumeSpecName: "kube-api-access-428q6") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "kube-api-access-428q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:19:19.952420 kubelet[2065]: I0906 00:19:19.952363 2065 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" (UID: "0b5484d0-00e4-4ce7-a5ef-0abaefa8655c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043823 2065 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-lib-modules\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043864 2065 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hubble-tls\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043876 2065 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-xtables-lock\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043885 2065 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-clustermesh-secrets\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043898 2065 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-428q6\" (UniqueName: \"kubernetes.io/projected/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-kube-api-access-428q6\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043919 2065 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-etc-cni-netd\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043930 2065 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-bpf-maps\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.043907 kubelet[2065]: I0906 00:19:20.043939 2065 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cni-path\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.043947 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-cgroup\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.043958 2065 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-net\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.043967 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-run\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.043975 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.043983 2065 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.043991 2065 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-hostproc\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.044923 kubelet[2065]: I0906 00:19:20.044003 2065 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c-cilium-config-path\") on node \"ci-3510.3.8-n-81199f28b8\" DevicePath \"\"" Sep 6 00:19:20.246857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da-rootfs.mount: Deactivated successfully. Sep 6 00:19:20.247781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da-shm.mount: Deactivated successfully. Sep 6 00:19:20.248189 systemd[1]: var-lib-kubelet-pods-0b5484d0\x2d00e4\x2d4ce7\x2da5ef\x2d0abaefa8655c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:19:20.248418 systemd[1]: var-lib-kubelet-pods-0b5484d0\x2d00e4\x2d4ce7\x2da5ef\x2d0abaefa8655c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d428q6.mount: Deactivated successfully. Sep 6 00:19:20.248567 systemd[1]: var-lib-kubelet-pods-0b5484d0\x2d00e4\x2d4ce7\x2da5ef\x2d0abaefa8655c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:19:20.248728 systemd[1]: var-lib-kubelet-pods-0b5484d0\x2d00e4\x2d4ce7\x2da5ef\x2d0abaefa8655c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:19:20.719080 kubelet[2065]: I0906 00:19:20.719034 2065 scope.go:117] "RemoveContainer" containerID="60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9" Sep 6 00:19:20.723105 env[1298]: time="2025-09-06T00:19:20.723055397Z" level=info msg="RemoveContainer for \"60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9\"" Sep 6 00:19:20.727057 env[1298]: time="2025-09-06T00:19:20.726261842Z" level=info msg="RemoveContainer for \"60d8e6f84bde4dfe606e457a46fdfb765b7a4d602fb07137bb3909dc9ac0f2d9\" returns successfully" Sep 6 00:19:20.765872 kubelet[2065]: E0906 00:19:20.765825 2065 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" containerName="mount-cgroup" Sep 6 00:19:20.765872 kubelet[2065]: I0906 00:19:20.765869 2065 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" containerName="mount-cgroup" Sep 6 00:19:20.849558 kubelet[2065]: I0906 00:19:20.849515 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-hostproc\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.849909 kubelet[2065]: I0906 00:19:20.849880 2065 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-cni-path\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850090 kubelet[2065]: I0906 00:19:20.850071 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50713626-597c-4c86-a707-057fe09f8f66-clustermesh-secrets\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850254 kubelet[2065]: I0906 00:19:20.850233 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-xtables-lock\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850400 kubelet[2065]: I0906 00:19:20.850365 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-bpf-maps\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850557 kubelet[2065]: I0906 00:19:20.850537 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-host-proc-sys-net\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850728 kubelet[2065]: I0906 00:19:20.850663 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-host-proc-sys-kernel\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850874 kubelet[2065]: I0906 00:19:20.850852 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50713626-597c-4c86-a707-057fe09f8f66-cilium-config-path\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.850999 kubelet[2065]: I0906 00:19:20.850979 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-cilium-cgroup\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.851116 kubelet[2065]: I0906 00:19:20.851097 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kznw\" (UniqueName: \"kubernetes.io/projected/50713626-597c-4c86-a707-057fe09f8f66-kube-api-access-8kznw\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.851243 kubelet[2065]: I0906 00:19:20.851209 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-cilium-run\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.851442 kubelet[2065]: I0906 00:19:20.851421 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-etc-cni-netd\") pod \"cilium-9lptr\" (UID: 
\"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.851577 kubelet[2065]: I0906 00:19:20.851557 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50713626-597c-4c86-a707-057fe09f8f66-lib-modules\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.851694 kubelet[2065]: I0906 00:19:20.851674 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50713626-597c-4c86-a707-057fe09f8f66-hubble-tls\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:20.851823 kubelet[2065]: I0906 00:19:20.851790 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50713626-597c-4c86-a707-057fe09f8f66-cilium-ipsec-secrets\") pod \"cilium-9lptr\" (UID: \"50713626-597c-4c86-a707-057fe09f8f66\") " pod="kube-system/cilium-9lptr" Sep 6 00:19:21.068873 kubelet[2065]: E0906 00:19:21.068807 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:19:21.071176 env[1298]: time="2025-09-06T00:19:21.071133416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lptr,Uid:50713626-597c-4c86-a707-057fe09f8f66,Namespace:kube-system,Attempt:0,}" Sep 6 00:19:21.091924 env[1298]: time="2025-09-06T00:19:21.090091242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:19:21.091924 env[1298]: time="2025-09-06T00:19:21.091250690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:19:21.091924 env[1298]: time="2025-09-06T00:19:21.091285558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:19:21.091924 env[1298]: time="2025-09-06T00:19:21.091505350Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea pid=4028 runtime=io.containerd.runc.v2
Sep 6 00:19:21.140956 env[1298]: time="2025-09-06T00:19:21.140905365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lptr,Uid:50713626-597c-4c86-a707-057fe09f8f66,Namespace:kube-system,Attempt:0,} returns sandbox id \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\""
Sep 6 00:19:21.143199 kubelet[2065]: E0906 00:19:21.141897 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:21.148643 env[1298]: time="2025-09-06T00:19:21.148574458Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:19:21.155816 env[1298]: time="2025-09-06T00:19:21.155767783Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa879cb9345e27e83b5034cbbe37f7b397b4ba8da529421c74e51861e22fa60f\""
Sep 6 00:19:21.158637 env[1298]: time="2025-09-06T00:19:21.158561509Z" level=info msg="StartContainer for \"fa879cb9345e27e83b5034cbbe37f7b397b4ba8da529421c74e51861e22fa60f\""
Sep 6 00:19:21.221813 env[1298]: time="2025-09-06T00:19:21.221764552Z" level=info msg="StartContainer for \"fa879cb9345e27e83b5034cbbe37f7b397b4ba8da529421c74e51861e22fa60f\" returns successfully"
Sep 6 00:19:21.255799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa879cb9345e27e83b5034cbbe37f7b397b4ba8da529421c74e51861e22fa60f-rootfs.mount: Deactivated successfully.
Sep 6 00:19:21.263919 env[1298]: time="2025-09-06T00:19:21.263861558Z" level=info msg="shim disconnected" id=fa879cb9345e27e83b5034cbbe37f7b397b4ba8da529421c74e51861e22fa60f
Sep 6 00:19:21.264293 env[1298]: time="2025-09-06T00:19:21.264215871Z" level=warning msg="cleaning up after shim disconnected" id=fa879cb9345e27e83b5034cbbe37f7b397b4ba8da529421c74e51861e22fa60f namespace=k8s.io
Sep 6 00:19:21.264447 env[1298]: time="2025-09-06T00:19:21.264425020Z" level=info msg="cleaning up dead shim"
Sep 6 00:19:21.274137 env[1298]: time="2025-09-06T00:19:21.274084894Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4109 runtime=io.containerd.runc.v2\n"
Sep 6 00:19:21.403656 kubelet[2065]: I0906 00:19:21.403074 2065 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5484d0-00e4-4ce7-a5ef-0abaefa8655c" path="/var/lib/kubelet/pods/0b5484d0-00e4-4ce7-a5ef-0abaefa8655c/volumes"
Sep 6 00:19:21.527599 kubelet[2065]: E0906 00:19:21.527530 2065 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:19:21.723614 kubelet[2065]: E0906 00:19:21.723131 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:21.727375 env[1298]: time="2025-09-06T00:19:21.727325621Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:19:21.745716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903191992.mount: Deactivated successfully.
Sep 6 00:19:21.757293 env[1298]: time="2025-09-06T00:19:21.757018615Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"484debe1371f76dad2554accf166a28dc1d4b31bf6c8b30213b5cc4fb081e0c4\""
Sep 6 00:19:21.758805 env[1298]: time="2025-09-06T00:19:21.758756532Z" level=info msg="StartContainer for \"484debe1371f76dad2554accf166a28dc1d4b31bf6c8b30213b5cc4fb081e0c4\""
Sep 6 00:19:21.868795 env[1298]: time="2025-09-06T00:19:21.868738702Z" level=info msg="StartContainer for \"484debe1371f76dad2554accf166a28dc1d4b31bf6c8b30213b5cc4fb081e0c4\" returns successfully"
Sep 6 00:19:21.925253 env[1298]: time="2025-09-06T00:19:21.923662346Z" level=info msg="shim disconnected" id=484debe1371f76dad2554accf166a28dc1d4b31bf6c8b30213b5cc4fb081e0c4
Sep 6 00:19:21.925253 env[1298]: time="2025-09-06T00:19:21.923718493Z" level=warning msg="cleaning up after shim disconnected" id=484debe1371f76dad2554accf166a28dc1d4b31bf6c8b30213b5cc4fb081e0c4 namespace=k8s.io
Sep 6 00:19:21.925253 env[1298]: time="2025-09-06T00:19:21.923728982Z" level=info msg="cleaning up dead shim"
Sep 6 00:19:21.933815 env[1298]: time="2025-09-06T00:19:21.933752995Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4171 runtime=io.containerd.runc.v2\n"
Sep 6 00:19:22.729262 kubelet[2065]: E0906 00:19:22.729208 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:22.731482 env[1298]: time="2025-09-06T00:19:22.731430843Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:19:22.746672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301785943.mount: Deactivated successfully.
Sep 6 00:19:22.754611 env[1298]: time="2025-09-06T00:19:22.754564284Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b8e282da6513f2168f036208f9c608684a6e56ba89fefb183bd737715328aaef\""
Sep 6 00:19:22.755397 env[1298]: time="2025-09-06T00:19:22.755359534Z" level=info msg="StartContainer for \"b8e282da6513f2168f036208f9c608684a6e56ba89fefb183bd737715328aaef\""
Sep 6 00:19:22.825110 env[1298]: time="2025-09-06T00:19:22.825060000Z" level=info msg="StartContainer for \"b8e282da6513f2168f036208f9c608684a6e56ba89fefb183bd737715328aaef\" returns successfully"
Sep 6 00:19:22.861536 env[1298]: time="2025-09-06T00:19:22.861460784Z" level=info msg="shim disconnected" id=b8e282da6513f2168f036208f9c608684a6e56ba89fefb183bd737715328aaef
Sep 6 00:19:22.861536 env[1298]: time="2025-09-06T00:19:22.861525710Z" level=warning msg="cleaning up after shim disconnected" id=b8e282da6513f2168f036208f9c608684a6e56ba89fefb183bd737715328aaef namespace=k8s.io
Sep 6 00:19:22.861536 env[1298]: time="2025-09-06T00:19:22.861539888Z" level=info msg="cleaning up dead shim"
Sep 6 00:19:22.873090 env[1298]: time="2025-09-06T00:19:22.873014660Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4230 runtime=io.containerd.runc.v2\n"
Sep 6 00:19:23.246952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8e282da6513f2168f036208f9c608684a6e56ba89fefb183bd737715328aaef-rootfs.mount: Deactivated successfully.
Sep 6 00:19:23.733760 kubelet[2065]: E0906 00:19:23.733709 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:23.745734 env[1298]: time="2025-09-06T00:19:23.745690572Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:19:23.767298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170407942.mount: Deactivated successfully.
Sep 6 00:19:23.779986 env[1298]: time="2025-09-06T00:19:23.779916047Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42123a5fde28e8b405fb8a495002298d810968e6fec690b87f6fb73a9be37e93\""
Sep 6 00:19:23.782719 env[1298]: time="2025-09-06T00:19:23.782682588Z" level=info msg="StartContainer for \"42123a5fde28e8b405fb8a495002298d810968e6fec690b87f6fb73a9be37e93\""
Sep 6 00:19:23.860211 env[1298]: time="2025-09-06T00:19:23.860164143Z" level=info msg="StartContainer for \"42123a5fde28e8b405fb8a495002298d810968e6fec690b87f6fb73a9be37e93\" returns successfully"
Sep 6 00:19:23.886841 env[1298]: time="2025-09-06T00:19:23.886713516Z" level=info msg="shim disconnected" id=42123a5fde28e8b405fb8a495002298d810968e6fec690b87f6fb73a9be37e93
Sep 6 00:19:23.887115 env[1298]: time="2025-09-06T00:19:23.887092053Z" level=warning msg="cleaning up after shim disconnected" id=42123a5fde28e8b405fb8a495002298d810968e6fec690b87f6fb73a9be37e93 namespace=k8s.io
Sep 6 00:19:23.887237 env[1298]: time="2025-09-06T00:19:23.887221089Z" level=info msg="cleaning up dead shim"
Sep 6 00:19:23.896647 env[1298]: time="2025-09-06T00:19:23.896597938Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:19:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4287 runtime=io.containerd.runc.v2\n"
Sep 6 00:19:24.103684 kubelet[2065]: I0906 00:19:24.103625 2065 setters.go:600] "Node became not ready" node="ci-3510.3.8-n-81199f28b8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:19:24Z","lastTransitionTime":"2025-09-06T00:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 00:19:24.247133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42123a5fde28e8b405fb8a495002298d810968e6fec690b87f6fb73a9be37e93-rootfs.mount: Deactivated successfully.
Sep 6 00:19:24.738057 kubelet[2065]: E0906 00:19:24.738013 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:24.740221 env[1298]: time="2025-09-06T00:19:24.740178122Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:19:24.764149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777950730.mount: Deactivated successfully.
Sep 6 00:19:24.773454 env[1298]: time="2025-09-06T00:19:24.773380125Z" level=info msg="CreateContainer within sandbox \"e332c48e2cc49726d3db0d1017c1be7c2fad93586f5efcf2ebf0163b1cbbdbea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc\""
Sep 6 00:19:24.775443 env[1298]: time="2025-09-06T00:19:24.775385056Z" level=info msg="StartContainer for \"d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc\""
Sep 6 00:19:24.845817 env[1298]: time="2025-09-06T00:19:24.845305579Z" level=info msg="StartContainer for \"d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc\" returns successfully"
Sep 6 00:19:25.320422 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:19:25.744585 kubelet[2065]: E0906 00:19:25.744446 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:25.773356 kubelet[2065]: I0906 00:19:25.773264 2065 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9lptr" podStartSLOduration=5.772060304 podStartE2EDuration="5.772060304s" podCreationTimestamp="2025-09-06 00:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:19:25.771024245 +0000 UTC m=+114.578408481" watchObservedRunningTime="2025-09-06 00:19:25.772060304 +0000 UTC m=+114.579444543"
Sep 6 00:19:25.879425 systemd[1]: run-containerd-runc-k8s.io-d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc-runc.38XjLQ.mount: Deactivated successfully.
Sep 6 00:19:27.070357 kubelet[2065]: E0906 00:19:27.070310 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:28.043373 systemd[1]: run-containerd-runc-k8s.io-d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc-runc.GjEXzN.mount: Deactivated successfully.
Sep 6 00:19:28.468087 systemd-networkd[1055]: lxc_health: Link UP
Sep 6 00:19:28.472630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:19:28.473049 systemd-networkd[1055]: lxc_health: Gained carrier
Sep 6 00:19:29.071547 kubelet[2065]: E0906 00:19:29.071510 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:29.751710 kubelet[2065]: E0906 00:19:29.751677 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:30.214860 systemd-networkd[1055]: lxc_health: Gained IPv6LL
Sep 6 00:19:30.238076 systemd[1]: run-containerd-runc-k8s.io-d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc-runc.dsg7Qq.mount: Deactivated successfully.
Sep 6 00:19:30.754251 kubelet[2065]: E0906 00:19:30.754061 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:31.388875 env[1298]: time="2025-09-06T00:19:31.388592458Z" level=info msg="StopPodSandbox for \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\""
Sep 6 00:19:31.388875 env[1298]: time="2025-09-06T00:19:31.388728475Z" level=info msg="TearDown network for sandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" successfully"
Sep 6 00:19:31.388875 env[1298]: time="2025-09-06T00:19:31.388776885Z" level=info msg="StopPodSandbox for \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" returns successfully"
Sep 6 00:19:31.390190 env[1298]: time="2025-09-06T00:19:31.389815972Z" level=info msg="RemovePodSandbox for \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\""
Sep 6 00:19:31.390190 env[1298]: time="2025-09-06T00:19:31.389862860Z" level=info msg="Forcibly stopping sandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\""
Sep 6 00:19:31.390190 env[1298]: time="2025-09-06T00:19:31.389967113Z" level=info msg="TearDown network for sandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" successfully"
Sep 6 00:19:31.393312 env[1298]: time="2025-09-06T00:19:31.393265853Z" level=info msg="RemovePodSandbox \"c4cd671d39834a6ff3d86c1501d2e99329d1278f0daca276c52ca2d04d50c988\" returns successfully"
Sep 6 00:19:31.394102 env[1298]: time="2025-09-06T00:19:31.393820709Z" level=info msg="StopPodSandbox for \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\""
Sep 6 00:19:31.394102 env[1298]: time="2025-09-06T00:19:31.393926635Z" level=info msg="TearDown network for sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" successfully"
Sep 6 00:19:31.394102 env[1298]: time="2025-09-06T00:19:31.393974529Z" level=info msg="StopPodSandbox for \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" returns successfully"
Sep 6 00:19:31.394977 env[1298]: time="2025-09-06T00:19:31.394466794Z" level=info msg="RemovePodSandbox for \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\""
Sep 6 00:19:31.394977 env[1298]: time="2025-09-06T00:19:31.394501351Z" level=info msg="Forcibly stopping sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\""
Sep 6 00:19:31.394977 env[1298]: time="2025-09-06T00:19:31.394598047Z" level=info msg="TearDown network for sandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" successfully"
Sep 6 00:19:31.397214 env[1298]: time="2025-09-06T00:19:31.397163199Z" level=info msg="RemovePodSandbox \"f852b4b51b5a28ed7f1425854f96152a1d9c7a52cb5370f50ac5adac6cef2d42\" returns successfully"
Sep 6 00:19:31.397667 env[1298]: time="2025-09-06T00:19:31.397633164Z" level=info msg="StopPodSandbox for \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\""
Sep 6 00:19:31.397957 env[1298]: time="2025-09-06T00:19:31.397868863Z" level=info msg="TearDown network for sandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" successfully"
Sep 6 00:19:31.398093 env[1298]: time="2025-09-06T00:19:31.398064613Z" level=info msg="StopPodSandbox for \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" returns successfully"
Sep 6 00:19:31.398633 env[1298]: time="2025-09-06T00:19:31.398607296Z" level=info msg="RemovePodSandbox for \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\""
Sep 6 00:19:31.398711 env[1298]: time="2025-09-06T00:19:31.398638025Z" level=info msg="Forcibly stopping sandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\""
Sep 6 00:19:31.398760 env[1298]: time="2025-09-06T00:19:31.398730344Z" level=info msg="TearDown network for sandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" successfully"
Sep 6 00:19:31.405635 env[1298]: time="2025-09-06T00:19:31.405596390Z" level=info msg="RemovePodSandbox \"99961d5cafb3226969d7e7be8f46a095768bbc3b6336ecea7f990e588d4e18da\" returns successfully"
Sep 6 00:19:32.421651 systemd[1]: run-containerd-runc-k8s.io-d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc-runc.qM0IMQ.mount: Deactivated successfully.
Sep 6 00:19:34.596162 systemd[1]: run-containerd-runc-k8s.io-d64ad8819a9f4390f754b461e6351059ab6826568a1d3c9f2e0e68843fa04ecc-runc.6eSZlN.mount: Deactivated successfully.
Sep 6 00:19:34.690112 sshd[3861]: pam_unix(sshd:session): session closed for user core
Sep 6 00:19:34.693244 systemd[1]: sshd@27-143.198.146.98:22-147.75.109.163:54466.service: Deactivated successfully.
Sep 6 00:19:34.694199 systemd[1]: session-28.scope: Deactivated successfully.
Sep 6 00:19:34.695639 systemd-logind[1285]: Session 28 logged out. Waiting for processes to exit.
Sep 6 00:19:34.696769 systemd-logind[1285]: Removed session 28.