May 17 00:43:24.027879 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:43:24.027916 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:43:24.027931 kernel: BIOS-provided physical RAM map: May 17 00:43:24.027938 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 00:43:24.027944 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 00:43:24.027950 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:43:24.027958 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 17 00:43:24.027965 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 17 00:43:24.027974 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:43:24.027981 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:43:24.027987 kernel: NX (Execute Disable) protection: active May 17 00:43:24.027994 kernel: SMBIOS 2.8 present. 
May 17 00:43:24.028000 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 17 00:43:24.028007 kernel: Hypervisor detected: KVM May 17 00:43:24.028015 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:43:24.028026 kernel: kvm-clock: cpu 0, msr 5719a001, primary cpu clock May 17 00:43:24.028033 kernel: kvm-clock: using sched offset of 3676695276 cycles May 17 00:43:24.028041 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:43:24.028051 kernel: tsc: Detected 2494.140 MHz processor May 17 00:43:24.028058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:43:24.028066 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:43:24.028073 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 17 00:43:24.028081 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:43:24.028092 kernel: ACPI: Early table checksum verification disabled May 17 00:43:24.028099 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 17 00:43:24.028106 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028114 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028121 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028128 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 17 00:43:24.028135 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028143 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028150 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028161 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:43:24.028168 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] May 17 00:43:24.028175 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 17 00:43:24.028182 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 17 00:43:24.028190 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 17 00:43:24.028197 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 17 00:43:24.028204 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 17 00:43:24.028212 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 17 00:43:24.028226 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:43:24.028234 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:43:24.028242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 17 00:43:24.028250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 17 00:43:24.028258 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] May 17 00:43:24.028266 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] May 17 00:43:24.028276 kernel: Zone ranges: May 17 00:43:24.028284 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:43:24.028292 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 17 00:43:24.028299 kernel: Normal empty May 17 00:43:24.028307 kernel: Movable zone start for each node May 17 00:43:24.028315 kernel: Early memory node ranges May 17 00:43:24.028323 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:43:24.028330 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 17 00:43:24.028338 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 17 00:43:24.028349 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:43:24.028360 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:43:24.028378 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 
17 00:43:24.028387 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:43:24.028394 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:43:24.028421 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:43:24.028429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:43:24.028437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:43:24.028445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:43:24.028457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:43:24.028469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:43:24.028486 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:43:24.028497 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:43:24.028508 kernel: TSC deadline timer available May 17 00:43:24.028519 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:43:24.028531 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 17 00:43:24.028541 kernel: Booting paravirtualized kernel on KVM May 17 00:43:24.028551 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:43:24.028569 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 17 00:43:24.028580 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 17 00:43:24.028591 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 17 00:43:24.028604 kernel: pcpu-alloc: [0] 0 1 May 17 00:43:24.028615 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 May 17 00:43:24.028628 kernel: kvm-guest: PV spinlocks disabled, no host support May 17 00:43:24.028639 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515803 May 17 00:43:24.028649 kernel: Policy zone: DMA32 May 17 00:43:24.028662 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:43:24.028680 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:43:24.028692 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:43:24.028703 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:43:24.028714 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:43:24.028726 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved) May 17 00:43:24.028738 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:43:24.028751 kernel: Kernel/User page tables isolation: enabled May 17 00:43:24.028764 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:43:24.028782 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:43:24.028796 kernel: rcu: Hierarchical RCU implementation. May 17 00:43:24.028810 kernel: rcu: RCU event tracing is enabled. May 17 00:43:24.028823 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:43:24.028837 kernel: Rude variant of Tasks RCU enabled. May 17 00:43:24.028850 kernel: Tracing variant of Tasks RCU enabled. May 17 00:43:24.028863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:43:24.028875 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:43:24.028887 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:43:24.028903 kernel: random: crng init done May 17 00:43:24.028916 kernel: Console: colour VGA+ 80x25 May 17 00:43:24.028928 kernel: printk: console [tty0] enabled May 17 00:43:24.028940 kernel: printk: console [ttyS0] enabled May 17 00:43:24.028953 kernel: ACPI: Core revision 20210730 May 17 00:43:24.028966 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:43:24.028978 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:43:24.028990 kernel: x2apic enabled May 17 00:43:24.029003 kernel: Switched APIC routing to physical x2apic. May 17 00:43:24.029016 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:43:24.029032 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 17 00:43:24.029045 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) May 17 00:43:24.029066 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 17 00:43:24.029079 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 17 00:43:24.029093 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:43:24.029106 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:43:24.029120 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:43:24.029133 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 17 00:43:24.029153 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:43:24.029181 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 00:43:24.029196 kernel: MDS: Mitigation: Clear CPU buffers May 17 00:43:24.029213 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:43:24.029228 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:43:24.029242 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:43:24.029257 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:43:24.029272 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:43:24.029286 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 17 00:43:24.029301 kernel: Freeing SMP alternatives memory: 32K May 17 00:43:24.029320 kernel: pid_max: default: 32768 minimum: 301 May 17 00:43:24.029335 kernel: LSM: Security Framework initializing May 17 00:43:24.029349 kernel: SELinux: Initializing. 
May 17 00:43:24.029361 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:43:24.029393 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:43:24.029408 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 17 00:43:24.029420 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. May 17 00:43:24.029438 kernel: signal: max sigframe size: 1776 May 17 00:43:24.029451 kernel: rcu: Hierarchical SRCU implementation. May 17 00:43:24.029464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:43:24.029476 kernel: smp: Bringing up secondary CPUs ... May 17 00:43:24.029488 kernel: x86: Booting SMP configuration: May 17 00:43:24.029501 kernel: .... node #0, CPUs: #1 May 17 00:43:24.029515 kernel: kvm-clock: cpu 1, msr 5719a041, secondary cpu clock May 17 00:43:24.029528 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 May 17 00:43:24.029542 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:43:24.029560 kernel: smpboot: Max logical packages: 1 May 17 00:43:24.029572 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) May 17 00:43:24.029584 kernel: devtmpfs: initialized May 17 00:43:24.029597 kernel: x86/mm: Memory block size: 128MB May 17 00:43:24.029610 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:43:24.029622 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:43:24.029634 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:43:24.029646 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:43:24.029660 kernel: audit: initializing netlink subsys (disabled) May 17 00:43:24.029679 kernel: audit: type=2000 audit(1747442603.260:1): state=initialized audit_enabled=0 res=1 May 17 00:43:24.029690 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:43:24.029699 kernel: 
thermal_sys: Registered thermal governor 'user_space' May 17 00:43:24.029707 kernel: cpuidle: using governor menu May 17 00:43:24.029716 kernel: ACPI: bus type PCI registered May 17 00:43:24.029725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:43:24.029739 kernel: dca service started, version 1.12.1 May 17 00:43:24.029748 kernel: PCI: Using configuration type 1 for base access May 17 00:43:24.029756 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 00:43:24.029768 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:43:24.029777 kernel: ACPI: Added _OSI(Module Device) May 17 00:43:24.029785 kernel: ACPI: Added _OSI(Processor Device) May 17 00:43:24.029794 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:43:24.029803 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:43:24.029811 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:43:24.029820 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:43:24.029828 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:43:24.029837 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:43:24.029849 kernel: ACPI: Interpreter enabled May 17 00:43:24.029858 kernel: ACPI: PM: (supports S0 S5) May 17 00:43:24.029867 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:43:24.029875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:43:24.029884 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:43:24.029893 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:43:24.030184 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:43:24.030285 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 17 00:43:24.030303 kernel: acpiphp: Slot [3] registered May 17 00:43:24.030311 kernel: acpiphp: Slot [4] registered May 17 00:43:24.030320 kernel: acpiphp: Slot [5] registered May 17 00:43:24.030329 kernel: acpiphp: Slot [6] registered May 17 00:43:24.030338 kernel: acpiphp: Slot [7] registered May 17 00:43:24.030346 kernel: acpiphp: Slot [8] registered May 17 00:43:24.030355 kernel: acpiphp: Slot [9] registered May 17 00:43:24.030364 kernel: acpiphp: Slot [10] registered May 17 00:43:24.030390 kernel: acpiphp: Slot [11] registered May 17 00:43:24.030403 kernel: acpiphp: Slot [12] registered May 17 00:43:24.030427 kernel: acpiphp: Slot [13] registered May 17 00:43:24.030439 kernel: acpiphp: Slot [14] registered May 17 00:43:24.030451 kernel: acpiphp: Slot [15] registered May 17 00:43:24.030463 kernel: acpiphp: Slot [16] registered May 17 00:43:24.030475 kernel: acpiphp: Slot [17] registered May 17 00:43:24.030485 kernel: acpiphp: Slot [18] registered May 17 00:43:24.030494 kernel: acpiphp: Slot [19] registered May 17 00:43:24.030505 kernel: acpiphp: Slot [20] registered May 17 00:43:24.030519 kernel: acpiphp: Slot [21] registered May 17 00:43:24.030527 kernel: acpiphp: Slot [22] registered May 17 00:43:24.030536 kernel: acpiphp: Slot [23] registered May 17 00:43:24.030544 kernel: acpiphp: Slot [24] registered May 17 00:43:24.030553 kernel: acpiphp: Slot [25] registered May 17 00:43:24.030562 kernel: acpiphp: Slot [26] registered May 17 00:43:24.030571 kernel: acpiphp: Slot [27] registered May 17 00:43:24.030580 kernel: acpiphp: Slot [28] registered May 17 00:43:24.030588 kernel: acpiphp: Slot [29] registered May 17 00:43:24.030597 kernel: acpiphp: Slot [30] registered May 17 00:43:24.030610 kernel: acpiphp: Slot [31] registered May 17 00:43:24.030618 kernel: PCI host bridge to bus 0000:00 May 17 00:43:24.030730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:43:24.030815 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] May 17 00:43:24.030893 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:43:24.030972 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:43:24.031049 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 17 00:43:24.031131 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:43:24.031253 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:43:24.034459 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:43:24.034735 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 17 00:43:24.034832 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] May 17 00:43:24.034919 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 17 00:43:24.035016 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 17 00:43:24.035101 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 17 00:43:24.035187 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 17 00:43:24.035285 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 May 17 00:43:24.035387 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] May 17 00:43:24.035486 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 17 00:43:24.035572 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 17 00:43:24.035662 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 17 00:43:24.035766 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 17 00:43:24.035856 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 17 00:43:24.035948 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] May 17 00:43:24.036036 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] May 17 00:43:24.036133 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] May 17 
00:43:24.036219 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:43:24.036326 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 00:43:24.036427 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] May 17 00:43:24.036517 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] May 17 00:43:24.036609 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] May 17 00:43:24.036705 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 17 00:43:24.036823 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] May 17 00:43:24.036962 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] May 17 00:43:24.037050 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] May 17 00:43:24.037156 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 May 17 00:43:24.037243 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] May 17 00:43:24.037328 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] May 17 00:43:24.037429 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] May 17 00:43:24.037554 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 May 17 00:43:24.037651 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] May 17 00:43:24.037737 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] May 17 00:43:24.037832 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] May 17 00:43:24.038010 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 May 17 00:43:24.038146 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] May 17 00:43:24.038257 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] May 17 00:43:24.038345 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] May 17 00:43:24.043698 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 May 17 00:43:24.043903 kernel: pci 0000:00:08.0: reg 0x10: [io 
0xc140-0xc17f] May 17 00:43:24.044042 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] May 17 00:43:24.044064 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:43:24.044079 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:43:24.044092 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:43:24.044106 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:43:24.044132 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:43:24.044147 kernel: iommu: Default domain type: Translated May 17 00:43:24.044161 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:43:24.044291 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 17 00:43:24.044486 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:43:24.044611 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 17 00:43:24.044624 kernel: vgaarb: loaded May 17 00:43:24.044634 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:43:24.044643 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:43:24.044669 kernel: PTP clock support registered May 17 00:43:24.044678 kernel: PCI: Using ACPI for IRQ routing May 17 00:43:24.044687 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:43:24.044696 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 00:43:24.044704 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 17 00:43:24.044713 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:43:24.044721 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:43:24.044730 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:43:24.044739 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:43:24.044752 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:43:24.044761 kernel: pnp: PnP ACPI init May 17 00:43:24.044771 kernel: pnp: PnP ACPI: found 4 devices May 17 00:43:24.044779 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:43:24.044788 kernel: NET: Registered PF_INET protocol family May 17 00:43:24.044796 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:43:24.044805 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:43:24.044814 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:43:24.044827 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:43:24.044836 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:43:24.044844 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:43:24.044853 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:43:24.044862 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:43:24.044870 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:43:24.044879 
kernel: NET: Registered PF_XDP protocol family May 17 00:43:24.044975 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:43:24.045056 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:43:24.045135 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:43:24.045213 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:43:24.045289 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 17 00:43:24.048526 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 17 00:43:24.048760 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:43:24.048868 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 17 00:43:24.048881 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 17 00:43:24.048977 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 30113 usecs May 17 00:43:24.048998 kernel: PCI: CLS 0 bytes, default 64 May 17 00:43:24.049007 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:43:24.049017 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 17 00:43:24.049026 kernel: Initialise system trusted keyrings May 17 00:43:24.049035 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:43:24.049044 kernel: Key type asymmetric registered May 17 00:43:24.049052 kernel: Asymmetric key parser 'x509' registered May 17 00:43:24.049061 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:43:24.049070 kernel: io scheduler mq-deadline registered May 17 00:43:24.049081 kernel: io scheduler kyber registered May 17 00:43:24.049090 kernel: io scheduler bfq registered May 17 00:43:24.049099 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:43:24.049108 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 17 00:43:24.049117 kernel: ACPI: \_SB_.LNKC: Enabled 
at IRQ 11 May 17 00:43:24.049125 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 17 00:43:24.049134 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:43:24.049143 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:43:24.049152 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:43:24.049163 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:43:24.049172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:43:24.049181 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:43:24.049299 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:43:24.049477 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:43:24.049598 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:43:23 UTC (1747442603) May 17 00:43:24.049679 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 17 00:43:24.049698 kernel: intel_pstate: CPU model not supported May 17 00:43:24.049708 kernel: NET: Registered PF_INET6 protocol family May 17 00:43:24.049720 kernel: Segment Routing with IPv6 May 17 00:43:24.049731 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:43:24.049740 kernel: NET: Registered PF_PACKET protocol family May 17 00:43:24.049749 kernel: Key type dns_resolver registered May 17 00:43:24.049758 kernel: IPI shorthand broadcast: enabled May 17 00:43:24.049767 kernel: sched_clock: Marking stable (600002589, 83870934)->(778530865, -94657342) May 17 00:43:24.049776 kernel: registered taskstats version 1 May 17 00:43:24.049786 kernel: Loading compiled-in X.509 certificates May 17 00:43:24.049798 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:43:24.049806 kernel: Key type .fscrypt registered May 17 00:43:24.049814 kernel: Key type fscrypt-provisioning registered May 17 00:43:24.049823 kernel: ima: No TPM chip found, 
activating TPM-bypass!
May 17 00:43:24.049832 kernel: ima: Allocated hash algorithm: sha1
May 17 00:43:24.049841 kernel: ima: No architecture policies found
May 17 00:43:24.049849 kernel: clk: Disabling unused clocks
May 17 00:43:24.049858 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:43:24.049869 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:43:24.049878 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:43:24.049887 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:43:24.049896 kernel: Run /init as init process
May 17 00:43:24.049904 kernel: with arguments:
May 17 00:43:24.049914 kernel: /init
May 17 00:43:24.049945 kernel: with environment:
May 17 00:43:24.049960 kernel: HOME=/
May 17 00:43:24.049969 kernel: TERM=linux
May 17 00:43:24.049978 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:43:24.050059 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:43:24.050077 systemd[1]: Detected virtualization kvm.
May 17 00:43:24.050090 systemd[1]: Detected architecture x86-64.
May 17 00:43:24.050102 systemd[1]: Running in initrd.
May 17 00:43:24.050115 systemd[1]: No hostname configured, using default hostname.
May 17 00:43:24.050128 systemd[1]: Hostname set to .
May 17 00:43:24.050147 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:43:24.050157 systemd[1]: Queued start job for default target initrd.target.
May 17 00:43:24.050167 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:43:24.050176 systemd[1]: Reached target cryptsetup.target.
May 17 00:43:24.050185 systemd[1]: Reached target paths.target.
May 17 00:43:24.050194 systemd[1]: Reached target slices.target.
May 17 00:43:24.050203 systemd[1]: Reached target swap.target.
May 17 00:43:24.050213 systemd[1]: Reached target timers.target.
May 17 00:43:24.050226 systemd[1]: Listening on iscsid.socket.
May 17 00:43:24.050235 systemd[1]: Listening on iscsiuio.socket.
May 17 00:43:24.050245 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:43:24.050254 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:43:24.050263 systemd[1]: Listening on systemd-journald.socket.
May 17 00:43:24.050273 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:43:24.050282 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:43:24.050292 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:43:24.050301 systemd[1]: Reached target sockets.target.
May 17 00:43:24.050314 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:43:24.050324 systemd[1]: Finished network-cleanup.service.
May 17 00:43:24.050336 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:43:24.050346 systemd[1]: Starting systemd-journald.service...
May 17 00:43:24.050358 systemd[1]: Starting systemd-modules-load.service...
May 17 00:43:24.050383 systemd[1]: Starting systemd-resolved.service...
May 17 00:43:24.050411 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:43:24.050435 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:43:24.050444 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:43:24.050462 systemd-journald[183]: Journal started
May 17 00:43:24.050543 systemd-journald[183]: Runtime Journal (/run/log/journal/d0cc2c25d9694a24ab7f9151d24783b5) is 4.9M, max 39.5M, 34.5M free.
May 17 00:43:24.047476 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:43:24.054411 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:43:24.054424 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:43:24.080781 systemd[1]: Started systemd-journald.service.
May 17 00:43:24.080814 kernel: audit: type=1130 audit(1747442604.073:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.080832 kernel: audit: type=1130 audit(1747442604.078:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.054458 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:43:24.086631 kernel: audit: type=1130 audit(1747442604.080:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.057751 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:43:24.078695 systemd[1]: Started systemd-resolved.service.
May 17 00:43:24.084459 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:43:24.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.093658 kernel: audit: type=1130 audit(1747442604.089:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.093211 systemd[1]: Reached target nss-lookup.target.
May 17 00:43:24.095918 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:43:24.098516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:43:24.106413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:43:24.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.106806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:43:24.111471 kernel: audit: type=1130 audit(1747442604.106:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.121020 kernel: Bridge firewalling registered
May 17 00:43:24.117600 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:43:24.129264 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:43:24.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.138416 kernel: audit: type=1130 audit(1747442604.129:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.134626 systemd[1]: Starting dracut-cmdline.service...
May 17 00:43:24.147416 kernel: SCSI subsystem initialized
May 17 00:43:24.152273 dracut-cmdline[202]: dracut-dracut-053
May 17 00:43:24.157228 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:43:24.161756 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:43:24.161831 kernel: device-mapper: uevent: version 1.0.3
May 17 00:43:24.161849 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:43:24.171642 systemd-modules-load[184]: Inserted module 'dm_multipath'
May 17 00:43:24.172518 systemd[1]: Finished systemd-modules-load.service.
May 17 00:43:24.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.176441 kernel: audit: type=1130 audit(1747442604.172:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.173929 systemd[1]: Starting systemd-sysctl.service...
May 17 00:43:24.187798 systemd[1]: Finished systemd-sysctl.service.
May 17 00:43:24.191421 kernel: audit: type=1130 audit(1747442604.187:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.260438 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:43:24.279419 kernel: iscsi: registered transport (tcp)
May 17 00:43:24.308425 kernel: iscsi: registered transport (qla4xxx)
May 17 00:43:24.308552 kernel: QLogic iSCSI HBA Driver
May 17 00:43:24.356149 systemd[1]: Finished dracut-cmdline.service.
May 17 00:43:24.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.357829 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:43:24.363410 kernel: audit: type=1130 audit(1747442604.356:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.417466 kernel: raid6: avx2x4 gen() 22209 MB/s
May 17 00:43:24.434440 kernel: raid6: avx2x4 xor() 6346 MB/s
May 17 00:43:24.451489 kernel: raid6: avx2x2 gen() 23629 MB/s
May 17 00:43:24.468444 kernel: raid6: avx2x2 xor() 20751 MB/s
May 17 00:43:24.485445 kernel: raid6: avx2x1 gen() 20912 MB/s
May 17 00:43:24.502448 kernel: raid6: avx2x1 xor() 17819 MB/s
May 17 00:43:24.519449 kernel: raid6: sse2x4 gen() 11232 MB/s
May 17 00:43:24.536446 kernel: raid6: sse2x4 xor() 6687 MB/s
May 17 00:43:24.553444 kernel: raid6: sse2x2 gen() 12294 MB/s
May 17 00:43:24.570441 kernel: raid6: sse2x2 xor() 8553 MB/s
May 17 00:43:24.587444 kernel: raid6: sse2x1 gen() 10850 MB/s
May 17 00:43:24.604607 kernel: raid6: sse2x1 xor() 6109 MB/s
May 17 00:43:24.604702 kernel: raid6: using algorithm avx2x2 gen() 23629 MB/s
May 17 00:43:24.604717 kernel: raid6: .... xor() 20751 MB/s, rmw enabled
May 17 00:43:24.605752 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:43:24.622433 kernel: xor: automatically using best checksumming function avx
May 17 00:43:24.733656 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:43:24.746421 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:43:24.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.747000 audit: BPF prog-id=7 op=LOAD
May 17 00:43:24.747000 audit: BPF prog-id=8 op=LOAD
May 17 00:43:24.748333 systemd[1]: Starting systemd-udevd.service...
May 17 00:43:24.765454 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 17 00:43:24.774241 systemd[1]: Started systemd-udevd.service.
May 17 00:43:24.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.778834 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:43:24.797620 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation
May 17 00:43:24.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.851664 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:43:24.854188 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:43:24.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:24.908239 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:43:24.990415 kernel: scsi host0: Virtio SCSI HBA
May 17 00:43:24.990591 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:43:25.046968 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:43:25.046999 kernel: GPT:9289727 != 125829119
May 17 00:43:25.047016 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:43:25.047033 kernel: GPT:9289727 != 125829119
May 17 00:43:25.047047 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:43:25.047062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:43:25.047077 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:43:25.049426 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 17 00:43:25.071210 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:43:25.071236 kernel: AES CTR mode by8 optimization enabled
May 17 00:43:25.096404 kernel: ACPI: bus type USB registered
May 17 00:43:25.096477 kernel: usbcore: registered new interface driver usbfs
May 17 00:43:25.096492 kernel: usbcore: registered new interface driver hub
May 17 00:43:25.096503 kernel: usbcore: registered new device driver usb
May 17 00:43:25.106543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:43:25.169553 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (444)
May 17 00:43:25.169591 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
May 17 00:43:25.169609 kernel: libata version 3.00 loaded.
May 17 00:43:25.169628 kernel: ehci-pci: EHCI PCI platform driver
May 17 00:43:25.169645 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:43:25.169891 kernel: scsi host1: ata_piix
May 17 00:43:25.170203 kernel: uhci_hcd: USB Universal Host Controller Interface driver
May 17 00:43:25.170219 kernel: scsi host2: ata_piix
May 17 00:43:25.170419 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:43:25.170438 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:43:25.174189 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:43:25.175215 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:43:25.182552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:43:25.190231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:43:25.193714 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:43:25.199142 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:43:25.199277 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:43:25.199419 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
May 17 00:43:25.199516 kernel: hub 1-0:1.0: USB hub found
May 17 00:43:25.199642 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:43:25.195270 systemd[1]: Starting disk-uuid.service...
May 17 00:43:25.204136 disk-uuid[499]: Primary Header is updated.
May 17 00:43:25.204136 disk-uuid[499]: Secondary Entries is updated.
May 17 00:43:25.204136 disk-uuid[499]: Secondary Header is updated.
May 17 00:43:25.215418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:43:25.221412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:43:26.227581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:43:26.229205 disk-uuid[504]: The operation has completed successfully.
May 17 00:43:26.235396 kernel: block device autoloading is deprecated. It will be removed in Linux 5.19
May 17 00:43:26.284620 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:43:26.285736 systemd[1]: Finished disk-uuid.service.
May 17 00:43:26.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.288524 systemd[1]: Starting verity-setup.service...
May 17 00:43:26.312401 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:43:26.371715 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:43:26.373676 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:43:26.374967 systemd[1]: Finished verity-setup.service.
May 17 00:43:26.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.467405 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:43:26.468859 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:43:26.470137 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:43:26.471877 systemd[1]: Starting ignition-setup.service...
May 17 00:43:26.473971 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:43:26.496800 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:43:26.496888 kernel: BTRFS info (device vda6): using free space tree
May 17 00:43:26.496906 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:43:26.518299 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:43:26.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.525224 systemd[1]: Finished ignition-setup.service.
May 17 00:43:26.526915 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:43:26.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.636000 audit: BPF prog-id=9 op=LOAD
May 17 00:43:26.635450 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:43:26.638355 systemd[1]: Starting systemd-networkd.service...
May 17 00:43:26.688638 systemd-networkd[693]: lo: Link UP
May 17 00:43:26.688737 ignition[619]: Ignition 2.14.0
May 17 00:43:26.688652 systemd-networkd[693]: lo: Gained carrier
May 17 00:43:26.688748 ignition[619]: Stage: fetch-offline
May 17 00:43:26.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.689999 systemd-networkd[693]: Enumeration completed
May 17 00:43:26.688847 ignition[619]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:43:26.690545 systemd-networkd[693]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:43:26.688889 ignition[619]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:43:26.691729 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:43:26.692628 systemd[1]: Started systemd-networkd.service.
May 17 00:43:26.693279 systemd-networkd[693]: eth1: Link UP
May 17 00:43:26.693284 systemd-networkd[693]: eth1: Gained carrier
May 17 00:43:26.694754 systemd[1]: Reached target network.target.
May 17 00:43:26.696651 systemd[1]: Starting iscsiuio.service...
May 17 00:43:26.700793 systemd-networkd[693]: eth0: Link UP
May 17 00:43:26.700803 systemd-networkd[693]: eth0: Gained carrier
May 17 00:43:26.710292 ignition[619]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:43:26.710496 ignition[619]: parsed url from cmdline: ""
May 17 00:43:26.710502 ignition[619]: no config URL provided
May 17 00:43:26.710510 ignition[619]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:43:26.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.710523 ignition[619]: no config at "/usr/lib/ignition/user.ign"
May 17 00:43:26.712216 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:43:26.710533 ignition[619]: failed to fetch config: resource requires networking
May 17 00:43:26.714655 systemd[1]: Starting ignition-fetch.service...
May 17 00:43:26.710890 ignition[619]: Ignition finished successfully
May 17 00:43:26.727733 systemd-networkd[693]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253
May 17 00:43:26.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.734490 systemd[1]: Started iscsiuio.service.
May 17 00:43:26.736247 systemd[1]: Starting iscsid.service...
May 17 00:43:26.740263 ignition[697]: Ignition 2.14.0
May 17 00:43:26.740552 systemd-networkd[693]: eth0: DHCPv4 address 137.184.126.228/20, gateway 137.184.112.1 acquired from 169.254.169.253
May 17 00:43:26.741818 ignition[697]: Stage: fetch
May 17 00:43:26.742465 ignition[697]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:43:26.743213 ignition[697]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:43:26.745028 iscsid[704]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:43:26.745028 iscsid[704]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:43:26.745028 iscsid[704]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:43:26.745028 iscsid[704]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:43:26.745028 iscsid[704]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:43:26.749319 systemd[1]: Started iscsid.service.
May 17 00:43:26.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.750081 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:43:26.752405 iscsid[704]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:43:26.752333 systemd[1]: Starting dracut-initqueue.service...
May 17 00:43:26.750215 ignition[697]: parsed url from cmdline: ""
May 17 00:43:26.750223 ignition[697]: no config URL provided
May 17 00:43:26.750231 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:43:26.750246 ignition[697]: no config at "/usr/lib/ignition/user.ign"
May 17 00:43:26.750294 ignition[697]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:43:26.777748 systemd[1]: Finished dracut-initqueue.service.
May 17 00:43:26.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.778614 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:43:26.779390 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:43:26.780112 systemd[1]: Reached target remote-fs.target.
May 17 00:43:26.782341 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:43:26.789013 ignition[697]: GET result: OK
May 17 00:43:26.789155 ignition[697]: parsing config with SHA512: 48bd4f90190a31d18643cc33f5101d8c0f87a87423b3126ea53bb15f5b4e003729142d9564a26df33d34ce684c2415b513bd8fe116aecadf23cb7c429a8ae6a0
May 17 00:43:26.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.806219 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:43:26.808412 unknown[697]: fetched base config from "system"
May 17 00:43:26.808426 unknown[697]: fetched base config from "system"
May 17 00:43:26.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.809117 ignition[697]: fetch: fetch complete
May 17 00:43:26.808439 unknown[697]: fetched user config from "digitalocean"
May 17 00:43:26.809126 ignition[697]: fetch: fetch passed
May 17 00:43:26.811191 systemd[1]: Finished ignition-fetch.service.
May 17 00:43:26.809192 ignition[697]: Ignition finished successfully
May 17 00:43:26.813493 systemd[1]: Starting ignition-kargs.service...
May 17 00:43:26.834574 ignition[718]: Ignition 2.14.0
May 17 00:43:26.834596 ignition[718]: Stage: kargs
May 17 00:43:26.834808 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:43:26.834846 ignition[718]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:43:26.837782 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:43:26.840185 ignition[718]: kargs: kargs passed
May 17 00:43:26.840296 ignition[718]: Ignition finished successfully
May 17 00:43:26.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.841796 systemd[1]: Finished ignition-kargs.service.
May 17 00:43:26.844044 systemd[1]: Starting ignition-disks.service...
May 17 00:43:26.856715 ignition[723]: Ignition 2.14.0
May 17 00:43:26.856731 ignition[723]: Stage: disks
May 17 00:43:26.856938 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:43:26.856966 ignition[723]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:43:26.859596 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:43:26.861594 ignition[723]: disks: disks passed
May 17 00:43:26.861677 ignition[723]: Ignition finished successfully
May 17 00:43:26.862999 systemd[1]: Finished ignition-disks.service.
May 17 00:43:26.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.864265 systemd[1]: Reached target initrd-root-device.target.
May 17 00:43:26.864985 systemd[1]: Reached target local-fs-pre.target.
May 17 00:43:26.865604 systemd[1]: Reached target local-fs.target.
May 17 00:43:26.866457 systemd[1]: Reached target sysinit.target.
May 17 00:43:26.867230 systemd[1]: Reached target basic.target.
May 17 00:43:26.869907 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:43:26.892139 systemd-fsck[731]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:43:26.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:26.895958 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:43:26.898677 systemd[1]: Mounting sysroot.mount...
May 17 00:43:26.912555 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:43:26.914230 systemd[1]: Mounted sysroot.mount.
May 17 00:43:26.914857 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:43:26.917452 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:43:26.919615 systemd[1]: Starting flatcar-digitalocean-network.service...
May 17 00:43:26.922511 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 00:43:26.923158 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:43:26.923210 systemd[1]: Reached target ignition-diskful.target.
May 17 00:43:26.928775 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:43:26.934355 systemd[1]: Starting initrd-setup-root.service...
May 17 00:43:26.952425 initrd-setup-root[743]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:43:26.967012 initrd-setup-root[751]: cut: /sysroot/etc/group: No such file or directory
May 17 00:43:26.980026 initrd-setup-root[761]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:43:26.994944 initrd-setup-root[771]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:43:27.087283 systemd[1]: Finished initrd-setup-root.service.
May 17 00:43:27.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:27.089352 systemd[1]: Starting ignition-mount.service...
May 17 00:43:27.091296 systemd[1]: Starting sysroot-boot.service...
May 17 00:43:27.102951 coreos-metadata[738]: May 17 00:43:27.102 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:43:27.108448 bash[788]: umount: /sysroot/usr/share/oem: not mounted.
May 17 00:43:27.119561 coreos-metadata[738]: May 17 00:43:27.119 INFO Fetch successful
May 17 00:43:27.126026 coreos-metadata[738]: May 17 00:43:27.125 INFO wrote hostname ci-3510.3.7-n-d30b09a4ce to /sysroot/etc/hostname
May 17 00:43:27.126282 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 00:43:27.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:27.136237 systemd[1]: Finished sysroot-boot.service.
May 17 00:43:27.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:27.137403 coreos-metadata[737]: May 17 00:43:27.136 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:43:27.142582 ignition[790]: INFO : Ignition 2.14.0
May 17 00:43:27.143280 ignition[790]: INFO : Stage: mount
May 17 00:43:27.143889 ignition[790]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:43:27.144514 ignition[790]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:43:27.147708 ignition[790]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:43:27.150238 ignition[790]: INFO : mount: mount passed
May 17 00:43:27.150952 ignition[790]: INFO : Ignition finished successfully
May 17 00:43:27.151834 coreos-metadata[737]: May 17 00:43:27.151 INFO Fetch successful
May 17 00:43:27.153746 systemd[1]: Finished ignition-mount.service.
May 17 00:43:27.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:27.160025 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:43:27.160159 systemd[1]: Finished flatcar-digitalocean-network.service.
May 17 00:43:27.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:27.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:27.393907 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:43:27.413423 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (797)
May 17 00:43:27.417124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:43:27.417207 kernel: BTRFS info (device vda6): using free space tree
May 17 00:43:27.417221 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:43:27.423332 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:43:27.434522 systemd[1]: Starting ignition-files.service...
May 17 00:43:27.457944 ignition[817]: INFO : Ignition 2.14.0
May 17 00:43:27.457944 ignition[817]: INFO : Stage: files
May 17 00:43:27.459568 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:43:27.459568 ignition[817]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:43:27.461164 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:43:27.462701 ignition[817]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:43:27.463552 ignition[817]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:43:27.463552 ignition[817]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:43:27.467018 ignition[817]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:43:27.467880 ignition[817]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:43:27.469952 ignition[817]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:43:27.468700 unknown[817]: wrote ssh authorized keys file for user: core
May 17 00:43:27.476526 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:43:27.476526 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:43:27.476526 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:43:27.476526 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:43:27.524732 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:43:27.653100 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:43:27.654614 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:43:27.655589 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:43:28.099621 systemd-networkd[693]: eth1: Gained IPv6LL
May 17 00:43:28.116789 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 17 00:43:28.190072 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:43:28.191020 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:43:28.191020
ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:43:28.196168 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:43:28.196168 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:43:28.196168 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:43:28.196168 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:43:28.196168 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:43:28.196168 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:43:28.611615 systemd-networkd[693]: eth0: Gained IPv6LL May 17 00:43:28.675931 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK May 17 00:43:28.970116 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:43:28.970116 ignition[817]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" May 17 00:43:28.970116 ignition[817]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" May 17 00:43:28.970116 ignition[817]: INFO : files: op(e): [started] processing unit "containerd.service" May 17 00:43:28.973421 
ignition[817]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:43:28.973421 ignition[817]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:43:28.973421 ignition[817]: INFO : files: op(e): [finished] processing unit "containerd.service" May 17 00:43:28.973421 ignition[817]: INFO : files: op(10): [started] processing unit "prepare-helm.service" May 17 00:43:28.973421 ignition[817]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:43:28.973421 ignition[817]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:43:28.973421 ignition[817]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" May 17 00:43:28.973421 ignition[817]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:43:28.981261 ignition[817]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:43:28.981261 ignition[817]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" May 17 00:43:28.981261 ignition[817]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:43:28.983829 ignition[817]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:43:28.983829 ignition[817]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:43:28.985428 ignition[817]: INFO : files: files passed May 17 00:43:28.985428 ignition[817]: INFO : Ignition finished 
successfully May 17 00:43:28.987088 systemd[1]: Finished ignition-files.service. May 17 00:43:28.995356 kernel: kauditd_printk_skb: 28 callbacks suppressed May 17 00:43:28.995406 kernel: audit: type=1130 audit(1747442608.987:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:28.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:28.988878 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:43:28.992107 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:43:28.994660 systemd[1]: Starting ignition-quench.service... May 17 00:43:29.005949 kernel: audit: type=1130 audit(1747442608.999:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.006141 kernel: audit: type=1131 audit(1747442608.999:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:28.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:28.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:29.006274 initrd-setup-root-after-ignition[842]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:43:28.998886 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:43:28.999014 systemd[1]: Finished ignition-quench.service. May 17 00:43:29.008991 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:43:29.013109 kernel: audit: type=1130 audit(1747442609.009:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.009601 systemd[1]: Reached target ignition-complete.target. May 17 00:43:29.014698 systemd[1]: Starting initrd-parse-etc.service... May 17 00:43:29.037528 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:43:29.038596 systemd[1]: Finished initrd-parse-etc.service. May 17 00:43:29.042406 kernel: audit: type=1130 audit(1747442609.039:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.042717 systemd[1]: Reached target initrd-fs.target. 
May 17 00:43:29.046301 kernel: audit: type=1131 audit(1747442609.042:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.045253 systemd[1]: Reached target initrd.target. May 17 00:43:29.045835 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:43:29.047674 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:43:29.064825 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:43:29.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.066794 systemd[1]: Starting initrd-cleanup.service... May 17 00:43:29.075997 kernel: audit: type=1130 audit(1747442609.064:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.083263 systemd[1]: Stopped target nss-lookup.target. May 17 00:43:29.083898 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:43:29.084741 systemd[1]: Stopped target timers.target. May 17 00:43:29.085463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:43:29.085626 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:43:29.091283 kernel: audit: type=1131 audit(1747442609.085:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.086516 systemd[1]: Stopped target initrd.target. 
May 17 00:43:29.090838 systemd[1]: Stopped target basic.target. May 17 00:43:29.091765 systemd[1]: Stopped target ignition-complete.target. May 17 00:43:29.092511 systemd[1]: Stopped target ignition-diskful.target. May 17 00:43:29.093219 systemd[1]: Stopped target initrd-root-device.target. May 17 00:43:29.094102 systemd[1]: Stopped target remote-fs.target. May 17 00:43:29.095005 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:43:29.095950 systemd[1]: Stopped target sysinit.target. May 17 00:43:29.096731 systemd[1]: Stopped target local-fs.target. May 17 00:43:29.097539 systemd[1]: Stopped target local-fs-pre.target. May 17 00:43:29.098541 systemd[1]: Stopped target swap.target. May 17 00:43:29.103644 kernel: audit: type=1131 audit(1747442609.099:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.099288 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:43:29.099483 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:43:29.109000 kernel: audit: type=1131 audit(1747442609.104:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.100280 systemd[1]: Stopped target cryptsetup.target. 
May 17 00:43:29.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.104107 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:43:29.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.104327 systemd[1]: Stopped dracut-initqueue.service. May 17 00:43:29.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.105485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:43:29.105677 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:43:29.109665 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:43:29.109825 systemd[1]: Stopped ignition-files.service. May 17 00:43:29.110591 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:43:29.110757 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:43:29.112650 systemd[1]: Stopping ignition-mount.service... May 17 00:43:29.113848 systemd[1]: Stopping iscsiuio.service... May 17 00:43:29.123155 systemd[1]: Stopping sysroot-boot.service... May 17 00:43:29.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.126787 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:43:29.127176 systemd[1]: Stopped systemd-udev-trigger.service. 
May 17 00:43:29.137298 ignition[855]: INFO : Ignition 2.14.0 May 17 00:43:29.137298 ignition[855]: INFO : Stage: umount May 17 00:43:29.137298 ignition[855]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:43:29.137298 ignition[855]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c May 17 00:43:29.137298 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 17 00:43:29.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.127977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:43:29.141921 ignition[855]: INFO : umount: umount passed May 17 00:43:29.141921 ignition[855]: INFO : Ignition finished successfully May 17 00:43:29.128142 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:43:29.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.142873 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:43:29.143067 systemd[1]: Stopped iscsiuio.service. May 17 00:43:29.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.147252 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 17 00:43:29.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.147454 systemd[1]: Stopped ignition-mount.service. May 17 00:43:29.148876 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:43:29.149013 systemd[1]: Stopped ignition-disks.service. May 17 00:43:29.149571 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:43:29.149629 systemd[1]: Stopped ignition-kargs.service. May 17 00:43:29.150419 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:43:29.150485 systemd[1]: Stopped ignition-fetch.service. May 17 00:43:29.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.155160 systemd[1]: Stopped target network.target. May 17 00:43:29.156271 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:43:29.157199 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:43:29.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.158937 systemd[1]: Stopped target paths.target. May 17 00:43:29.160282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:43:29.164793 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:43:29.166098 systemd[1]: Stopped target slices.target. May 17 00:43:29.166670 systemd[1]: Stopped target sockets.target. May 17 00:43:29.167109 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:43:29.167165 systemd[1]: Closed iscsid.socket. 
May 17 00:43:29.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.167643 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:43:29.167709 systemd[1]: Closed iscsiuio.socket. May 17 00:43:29.168327 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:43:29.168472 systemd[1]: Stopped ignition-setup.service. May 17 00:43:29.169696 systemd[1]: Stopping systemd-networkd.service... May 17 00:43:29.170533 systemd[1]: Stopping systemd-resolved.service... May 17 00:43:29.173457 systemd-networkd[693]: eth0: DHCPv6 lease lost May 17 00:43:29.173718 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:43:29.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.174757 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:43:29.174885 systemd[1]: Finished initrd-cleanup.service. May 17 00:43:29.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.176123 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:43:29.176248 systemd[1]: Stopped sysroot-boot.service. May 17 00:43:29.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:29.177498 systemd-networkd[693]: eth1: DHCPv6 lease lost May 17 00:43:29.179028 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:43:29.179165 systemd[1]: Stopped systemd-networkd.service. May 17 00:43:29.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.180856 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:43:29.180998 systemd[1]: Stopped systemd-resolved.service. May 17 00:43:29.183000 audit: BPF prog-id=9 op=UNLOAD May 17 00:43:29.183000 audit: BPF prog-id=6 op=UNLOAD May 17 00:43:29.184055 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:43:29.184119 systemd[1]: Closed systemd-networkd.socket. May 17 00:43:29.185006 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:43:29.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.185083 systemd[1]: Stopped initrd-setup-root.service. May 17 00:43:29.187016 systemd[1]: Stopping network-cleanup.service... May 17 00:43:29.187601 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:43:29.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.187697 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:43:29.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:29.188522 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:43:29.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.188607 systemd[1]: Stopped systemd-sysctl.service. May 17 00:43:29.189547 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:43:29.189613 systemd[1]: Stopped systemd-modules-load.service. May 17 00:43:29.196735 systemd[1]: Stopping systemd-udevd.service... May 17 00:43:29.199666 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:43:29.202297 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:43:29.202608 systemd[1]: Stopped systemd-udevd.service. May 17 00:43:29.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.203947 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:43:29.204016 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:43:29.206637 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:43:29.206691 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:43:29.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.207278 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:43:29.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.207351 systemd[1]: Stopped dracut-pre-udev.service. 
May 17 00:43:29.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.214208 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:43:29.214294 systemd[1]: Stopped dracut-cmdline.service. May 17 00:43:29.214889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:43:29.214956 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:43:29.216890 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:43:29.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.217564 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:43:29.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.217655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:43:29.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.218606 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:43:29.218678 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:43:29.219196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:43:29.219268 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:43:29.221895 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:43:29.222805 systemd[1]: network-cleanup.service: Deactivated successfully. 
May 17 00:43:29.222959 systemd[1]: Stopped network-cleanup.service. May 17 00:43:29.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.230818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:43:29.231565 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:43:29.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:29.232246 systemd[1]: Reached target initrd-switch-root.target. May 17 00:43:29.234470 systemd[1]: Starting initrd-switch-root.service... May 17 00:43:29.245314 systemd[1]: Switching root. May 17 00:43:29.248000 audit: BPF prog-id=8 op=UNLOAD May 17 00:43:29.248000 audit: BPF prog-id=7 op=UNLOAD May 17 00:43:29.250000 audit: BPF prog-id=5 op=UNLOAD May 17 00:43:29.250000 audit: BPF prog-id=4 op=UNLOAD May 17 00:43:29.250000 audit: BPF prog-id=3 op=UNLOAD May 17 00:43:29.271716 iscsid[704]: iscsid shutting down. May 17 00:43:29.272281 systemd-journald[183]: Journal stopped May 17 00:43:33.112647 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). May 17 00:43:33.112752 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:43:33.112775 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:43:33.112790 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:43:33.112806 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:43:33.112817 kernel: SELinux: policy capability open_perms=1 May 17 00:43:33.112829 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:43:33.112840 kernel: SELinux: policy capability always_check_network=0 May 17 00:43:33.112852 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:43:33.112867 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:43:33.112879 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:43:33.112893 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:43:33.112916 systemd[1]: Successfully loaded SELinux policy in 52.059ms. May 17 00:43:33.112951 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.350ms. May 17 00:43:33.112972 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:43:33.112990 systemd[1]: Detected virtualization kvm. May 17 00:43:33.113006 systemd[1]: Detected architecture x86-64. May 17 00:43:33.113025 systemd[1]: Detected first boot. May 17 00:43:33.113042 systemd[1]: Hostname set to . May 17 00:43:33.113065 systemd[1]: Initializing machine ID from VM UUID. May 17 00:43:33.113084 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:43:33.113103 systemd[1]: Populated /etc with preset unit settings. May 17 00:43:33.113122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 17 00:43:33.113141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:43:33.113161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:43:33.113189 systemd[1]: Queued start job for default target multi-user.target. May 17 00:43:33.113213 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 17 00:43:33.113232 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:43:33.113251 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:43:33.113276 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 00:43:33.113296 systemd[1]: Created slice system-getty.slice. May 17 00:43:33.113317 systemd[1]: Created slice system-modprobe.slice. May 17 00:43:33.113336 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:43:33.113353 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:43:33.113387 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:43:33.113417 systemd[1]: Created slice user.slice. May 17 00:43:33.113437 systemd[1]: Started systemd-ask-password-console.path. May 17 00:43:33.113456 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:43:33.113474 systemd[1]: Set up automount boot.automount. May 17 00:43:33.113494 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:43:33.113516 systemd[1]: Reached target integritysetup.target. May 17 00:43:33.113540 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:43:33.113560 systemd[1]: Reached target remote-fs.target. May 17 00:43:33.113578 systemd[1]: Reached target slices.target. May 17 00:43:33.113595 systemd[1]: Reached target swap.target. May 17 00:43:33.113613 systemd[1]: Reached target torcx.target. 
May 17 00:43:33.113631 systemd[1]: Reached target veritysetup.target.
May 17 00:43:33.113650 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:43:33.113668 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:43:33.113687 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:43:33.113711 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:43:33.113735 systemd[1]: Listening on systemd-journald.socket.
May 17 00:43:33.113754 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:43:33.113774 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:43:33.113794 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:43:33.113812 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:43:33.113838 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:43:33.113855 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:43:33.113873 systemd[1]: Mounting media.mount...
May 17 00:43:33.113892 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:43:33.113923 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:43:33.113944 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:43:33.113978 systemd[1]: Mounting tmp.mount...
May 17 00:43:33.113993 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:43:33.114014 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:43:33.114035 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:43:33.114053 systemd[1]: Starting modprobe@configfs.service...
May 17 00:43:33.114065 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:43:33.114077 systemd[1]: Starting modprobe@drm.service...
May 17 00:43:33.114095 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:43:33.114121 systemd[1]: Starting modprobe@fuse.service...
May 17 00:43:33.114144 systemd[1]: Starting modprobe@loop.service...
May 17 00:43:33.114164 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:43:33.114181 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:43:33.114194 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:43:33.114208 systemd[1]: Starting systemd-journald.service...
May 17 00:43:33.114226 systemd[1]: Starting systemd-modules-load.service...
May 17 00:43:33.114239 systemd[1]: Starting systemd-network-generator.service...
May 17 00:43:33.114260 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:43:33.114272 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:43:33.114286 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:43:33.114299 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:43:33.114311 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:43:33.114322 systemd[1]: Mounted media.mount.
May 17 00:43:33.114336 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:43:33.114354 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:43:33.121027 systemd[1]: Mounted tmp.mount.
May 17 00:43:33.121103 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:43:33.121123 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:43:33.121140 systemd[1]: Finished modprobe@configfs.service.
May 17 00:43:33.121156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:43:33.121175 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:43:33.121192 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:43:33.121210 systemd[1]: Finished modprobe@drm.service.
May 17 00:43:33.121225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:43:33.121243 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:43:33.121266 systemd[1]: Finished systemd-modules-load.service.
May 17 00:43:33.121284 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:43:33.121302 systemd[1]: Starting systemd-sysctl.service...
May 17 00:43:33.121319 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:43:33.121335 kernel: loop: module loaded
May 17 00:43:33.121354 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:43:33.121395 systemd[1]: Finished modprobe@loop.service.
May 17 00:43:33.121413 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:43:33.121441 systemd-journald[994]: Journal started
May 17 00:43:33.121537 systemd-journald[994]: Runtime Journal (/run/log/journal/d0cc2c25d9694a24ab7f9151d24783b5) is 4.9M, max 39.5M, 34.5M free.
May 17 00:43:33.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.103000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:43:33.103000 audit[994]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff3489c4f0 a2=4000 a3=7fff3489c58c items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:33.103000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:43:33.130349 kernel: fuse: init (API version 7.34)
May 17 00:43:33.130436 systemd[1]: Started systemd-journald.service.
May 17 00:43:33.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.125167 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:43:33.125609 systemd[1]: Finished modprobe@fuse.service.
May 17 00:43:33.129845 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:43:33.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.135541 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:43:33.137454 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:43:33.138465 systemd[1]: Finished systemd-network-generator.service.
May 17 00:43:33.139075 systemd[1]: Reached target network-pre.target.
May 17 00:43:33.139531 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:43:33.142131 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:43:33.144990 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:43:33.154764 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:43:33.164865 systemd[1]: Starting systemd-random-seed.service...
May 17 00:43:33.172993 systemd[1]: Finished systemd-sysctl.service.
May 17 00:43:33.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.195327 systemd-journald[994]: Time spent on flushing to /var/log/journal/d0cc2c25d9694a24ab7f9151d24783b5 is 56.674ms for 1089 entries.
May 17 00:43:33.195327 systemd-journald[994]: System Journal (/var/log/journal/d0cc2c25d9694a24ab7f9151d24783b5) is 8.0M, max 195.6M, 187.6M free.
May 17 00:43:33.256702 systemd-journald[994]: Received client request to flush runtime journal.
May 17 00:43:33.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.204785 systemd[1]: Finished systemd-random-seed.service.
May 17 00:43:33.205483 systemd[1]: Reached target first-boot-complete.target.
May 17 00:43:33.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.258130 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:43:33.274713 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:43:33.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.277484 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:43:33.283053 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:43:33.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.285796 systemd[1]: Starting systemd-sysusers.service...
May 17 00:43:33.303758 udevadm[1046]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:43:33.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:33.336906 systemd[1]: Finished systemd-sysusers.service.
May 17 00:43:33.339628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:43:33.388101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:43:33.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.002597 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:43:34.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.007591 kernel: kauditd_printk_skb: 78 callbacks suppressed
May 17 00:43:34.007716 kernel: audit: type=1130 audit(1747442614.002:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.007789 systemd[1]: Starting systemd-udevd.service...
May 17 00:43:34.031951 systemd-udevd[1055]: Using default interface naming scheme 'v252'.
May 17 00:43:34.063578 systemd[1]: Started systemd-udevd.service.
May 17 00:43:34.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.067397 kernel: audit: type=1130 audit(1747442614.063:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.070365 systemd[1]: Starting systemd-networkd.service...
May 17 00:43:34.082248 systemd[1]: Starting systemd-userdbd.service...
May 17 00:43:34.156659 systemd[1]: Started systemd-userdbd.service.
May 17 00:43:34.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.161441 kernel: audit: type=1130 audit(1747442614.156:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.162506 systemd[1]: Found device dev-ttyS0.device.
May 17 00:43:34.182775 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:43:34.182998 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:43:34.186692 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:43:34.188616 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:43:34.190561 systemd[1]: Starting modprobe@loop.service...
May 17 00:43:34.191855 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:43:34.192068 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:43:34.192209 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:43:34.192771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:43:34.192971 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:43:34.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.200234 kernel: audit: type=1130 audit(1747442614.194:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.200365 kernel: audit: type=1131 audit(1747442614.196:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.203533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:43:34.203763 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:43:34.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.208410 kernel: audit: type=1130 audit(1747442614.204:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.212544 kernel: audit: type=1131 audit(1747442614.207:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.208645 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:43:34.241867 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:43:34.242207 systemd[1]: Finished modprobe@loop.service.
May 17 00:43:34.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.252410 kernel: audit: type=1130 audit(1747442614.242:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.252551 kernel: audit: type=1131 audit(1747442614.245:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.250139 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:43:34.259930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:43:34.328860 systemd-networkd[1071]: lo: Link UP
May 17 00:43:34.329297 systemd-networkd[1071]: lo: Gained carrier
May 17 00:43:34.330188 systemd-networkd[1071]: Enumeration completed
May 17 00:43:34.330479 systemd-networkd[1071]: eth1: Configuring with /run/systemd/network/10-8a:92:45:ac:bf:43.network.
May 17 00:43:34.330577 systemd[1]: Started systemd-networkd.service.
May 17 00:43:34.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.332457 systemd-networkd[1071]: eth0: Configuring with /run/systemd/network/10-b2:8a:10:25:0d:e0.network.
May 17 00:43:34.335436 kernel: audit: type=1130 audit(1747442614.330:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:34.335739 systemd-networkd[1071]: eth1: Link UP
May 17 00:43:34.335880 systemd-networkd[1071]: eth1: Gained carrier
May 17 00:43:34.341770 systemd-networkd[1071]: eth0: Link UP
May 17 00:43:34.341782 systemd-networkd[1071]: eth0: Gained carrier
May 17 00:43:34.356451 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:43:34.363415 kernel: ACPI: button: Power Button [PWRF]
May 17 00:43:34.365000 audit[1066]: AVC avc: denied { confidentiality } for pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:43:34.365000 audit[1066]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560f8e319650 a1=338ac a2=7f7cfe999bc5 a3=5 items=110 ppid=1055 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:34.365000 audit: CWD cwd="/"
May 17 00:43:34.365000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=1 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=2 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=3 name=(null) inode=13110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=4 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=5 name=(null) inode=13111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=6 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=7 name=(null) inode=13112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=8 name=(null) inode=13112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=9 name=(null) inode=13113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=10 name=(null) inode=13112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=11 name=(null) inode=13114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=12 name=(null) inode=13112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=13 name=(null) inode=13115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=14 name=(null) inode=13112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=15 name=(null) inode=13116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=16 name=(null) inode=13112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=17 name=(null) inode=13117 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=18 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=19 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=20 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=21 name=(null) inode=13119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=22 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=23 name=(null) inode=13120 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=24 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=25 name=(null) inode=13121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=26 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=27 name=(null) inode=13122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=28 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=29 name=(null) inode=13123 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=30 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=31 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=32 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=33 name=(null) inode=13125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=34 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=35 name=(null) inode=13126 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=36 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=37 name=(null) inode=13127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=38 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=39 name=(null) inode=13128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=40 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=41 name=(null) inode=13129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=42 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=43 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=44 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=45 name=(null) inode=13131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=46 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=47 name=(null) inode=13132 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=48 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=49 name=(null) inode=13133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=50 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=51 name=(null) inode=13134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=52 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=53 name=(null) inode=13135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=55 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=56 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=57 name=(null) inode=13137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=58 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=59 name=(null) inode=13138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=60 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=61 name=(null) inode=13139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=62 name=(null) inode=13139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=63 name=(null) inode=13140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=64 name=(null) inode=13139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=65 name=(null) inode=13141 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:43:34.365000 audit: PATH item=66 name=(null) inode=13139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=67 name=(null) inode=13142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=68 name=(null) inode=13139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=69 name=(null) inode=13143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=70 name=(null) inode=13139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=71 name=(null) inode=13144 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=72 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=73 name=(null) inode=13145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=74 name=(null) inode=13145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=75 name=(null) inode=13146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=76 name=(null) inode=13145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=77 name=(null) inode=13147 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=78 name=(null) inode=13145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=79 name=(null) inode=13148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=80 name=(null) inode=13145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=81 name=(null) inode=13149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=82 name=(null) inode=13145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=83 name=(null) inode=13150 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=84 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:43:34.365000 audit: PATH item=85 name=(null) inode=13151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=86 name=(null) inode=13151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=87 name=(null) inode=13152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=88 name=(null) inode=13151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=89 name=(null) inode=13153 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=90 name=(null) inode=13151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=91 name=(null) inode=13154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=92 name=(null) inode=13151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=93 name=(null) inode=13155 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=94 
name=(null) inode=13151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=95 name=(null) inode=13156 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=96 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=97 name=(null) inode=13157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=98 name=(null) inode=13157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=99 name=(null) inode=13158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=100 name=(null) inode=13157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=101 name=(null) inode=13159 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=102 name=(null) inode=13157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=103 name=(null) inode=13160 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=104 name=(null) inode=13157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=105 name=(null) inode=13161 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=106 name=(null) inode=13157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=107 name=(null) inode=13162 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PATH item=109 name=(null) inode=13164 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:34.365000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:43:34.415405 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 17 00:43:34.435435 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:43:34.444405 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:43:34.569417 kernel: EDAC MC: Ver: 3.0.0 May 17 00:43:34.593241 systemd[1]: Finished systemd-udev-settle.service. 
May 17 00:43:34.596189 systemd[1]: Starting lvm2-activation-early.service... May 17 00:43:34.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.620667 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:43:34.651331 systemd[1]: Finished lvm2-activation-early.service. May 17 00:43:34.652106 systemd[1]: Reached target cryptsetup.target. May 17 00:43:34.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.654706 systemd[1]: Starting lvm2-activation.service... May 17 00:43:34.663331 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:43:34.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.697063 systemd[1]: Finished lvm2-activation.service. May 17 00:43:34.697728 systemd[1]: Reached target local-fs-pre.target. May 17 00:43:34.701257 systemd[1]: Mounting media-configdrive.mount... May 17 00:43:34.702211 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:43:34.702529 systemd[1]: Reached target machines.target. May 17 00:43:34.705011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:43:34.722910 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
May 17 00:43:34.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.726414 kernel: ISO 9660 Extensions: RRIP_1991A May 17 00:43:34.730067 systemd[1]: Mounted media-configdrive.mount. May 17 00:43:34.731035 systemd[1]: Reached target local-fs.target. May 17 00:43:34.734558 systemd[1]: Starting ldconfig.service... May 17 00:43:34.737788 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:43:34.738123 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:34.742192 systemd[1]: Starting systemd-boot-update.service... May 17 00:43:34.750904 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:43:34.755738 systemd[1]: Starting systemd-sysext.service... May 17 00:43:34.767582 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) May 17 00:43:34.772222 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:43:34.776843 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:43:34.792283 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:43:34.792668 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:43:34.817787 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:43:34.819901 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:43:34.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:34.821857 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:43:34.853673 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:43:34.878419 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:43:34.896795 (sd-sysext)[1123]: Using extensions 'kubernetes'. May 17 00:43:34.899282 (sd-sysext)[1123]: Merged extensions into '/usr'. May 17 00:43:34.907240 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) May 17 00:43:34.907240 systemd-fsck[1120]: /dev/vda1: 790 files, 120726/258078 clusters May 17 00:43:34.919705 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:43:34.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.928958 systemd[1]: Mounting boot.mount... May 17 00:43:34.946717 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:34.949471 systemd[1]: Mounting usr-share-oem.mount... May 17 00:43:34.950671 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:43:34.953908 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:43:34.956585 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:43:34.961971 systemd[1]: Starting modprobe@loop.service... May 17 00:43:34.962637 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:43:34.962866 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:34.963112 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:43:34.974570 systemd[1]: Mounted boot.mount. May 17 00:43:34.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.977183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:43:34.977623 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:43:34.981634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:43:34.981963 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:43:34.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:34.984396 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:43:34.993119 systemd[1]: Mounted usr-share-oem.mount. May 17 00:43:35.002713 systemd[1]: Finished systemd-sysext.service. May 17 00:43:35.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.012892 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:43:35.013718 systemd[1]: Finished modprobe@loop.service. May 17 00:43:35.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.017783 systemd[1]: Starting ensure-sysext.service... May 17 00:43:35.018568 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:43:35.026839 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:43:35.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.040736 systemd[1]: Finished systemd-boot-update.service. May 17 00:43:35.051153 systemd[1]: Reloading. May 17 00:43:35.061154 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:43:35.062399 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:43:35.065436 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:43:35.226130 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-05-17T00:43:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:43:35.226160 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-05-17T00:43:35Z" level=info msg="torcx already run" May 17 00:43:35.282808 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:43:35.448684 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:43:35.449237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:43:35.484574 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:43:35.523725 systemd-networkd[1071]: eth0: Gained IPv6LL May 17 00:43:35.573836 systemd[1]: Finished ldconfig.service. May 17 00:43:35.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.577517 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:43:35.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.582843 systemd[1]: Starting audit-rules.service... 
May 17 00:43:35.586789 systemd[1]: Starting clean-ca-certificates.service... May 17 00:43:35.593959 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:43:35.597592 systemd[1]: Starting systemd-resolved.service... May 17 00:43:35.603817 systemd[1]: Starting systemd-timesyncd.service... May 17 00:43:35.610596 systemd[1]: Starting systemd-update-utmp.service... May 17 00:43:35.612631 systemd[1]: Finished clean-ca-certificates.service. May 17 00:43:35.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.638000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:43:35.629770 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:35.630378 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:43:35.632806 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:43:35.638989 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:43:35.642946 systemd[1]: Starting modprobe@loop.service... May 17 00:43:35.643678 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:43:35.643922 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:35.644139 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 17 00:43:35.644276 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:35.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.649653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:43:35.649969 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:43:35.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.660706 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:43:35.660995 systemd[1]: Finished modprobe@loop.service. May 17 00:43:35.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.663878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 17 00:43:35.664248 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:43:35.665549 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:35.666055 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:43:35.674795 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:43:35.677727 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:43:35.678002 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:35.678181 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:43:35.678335 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:43:35.680152 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:35.686138 systemd[1]: Finished systemd-update-utmp.service. May 17 00:43:35.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.692921 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:35.693258 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:43:35.695751 systemd[1]: Starting modprobe@drm.service... May 17 00:43:35.698546 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:43:35.701417 systemd[1]: Starting modprobe@loop.service... 
May 17 00:43:35.705719 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:43:35.706168 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:35.710797 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:43:35.711745 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:43:35.711975 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:43:35.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.716407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:43:35.716691 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:43:35.717961 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:43:35.718239 systemd[1]: Finished modprobe@drm.service. May 17 00:43:35.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:35.720199 systemd-networkd[1071]: eth1: Gained IPv6LL May 17 00:43:35.723799 systemd[1]: Finished ensure-sysext.service. May 17 00:43:35.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.736593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:43:35.736856 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:43:35.737568 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:43:35.755121 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:43:35.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:35.760776 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:43:35.761072 systemd[1]: Finished modprobe@loop.service. May 17 00:43:35.761844 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:43:35.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.766175 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:43:35.769518 systemd[1]: Starting systemd-update-done.service... May 17 00:43:35.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:35.786487 systemd[1]: Finished systemd-update-done.service. May 17 00:43:35.838000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:43:35.838000 audit[1261]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeeca115a0 a2=420 a3=0 items=0 ppid=1217 pid=1261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:35.838000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:43:35.840143 augenrules[1261]: No rules May 17 00:43:35.840876 systemd[1]: Finished audit-rules.service. May 17 00:43:35.861891 systemd-resolved[1221]: Positive Trust Anchors: May 17 00:43:35.862526 systemd-resolved[1221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:43:35.862714 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:43:35.875732 systemd-resolved[1221]: Using system hostname 'ci-3510.3.7-n-d30b09a4ce'. May 17 00:43:35.879332 systemd[1]: Started systemd-resolved.service. May 17 00:43:35.879939 systemd[1]: Reached target network.target. May 17 00:43:35.880340 systemd[1]: Reached target network-online.target. May 17 00:43:35.880795 systemd[1]: Reached target nss-lookup.target. May 17 00:43:35.882842 systemd[1]: Started systemd-timesyncd.service. May 17 00:43:35.883719 systemd[1]: Reached target sysinit.target. May 17 00:43:35.884404 systemd[1]: Started motdgen.path. May 17 00:43:35.884907 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:43:35.885481 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:43:35.886059 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:43:35.886157 systemd[1]: Reached target paths.target. May 17 00:43:35.886612 systemd[1]: Reached target time-set.target. May 17 00:43:35.887332 systemd[1]: Started logrotate.timer. May 17 00:43:35.887962 systemd[1]: Started mdadm.timer. May 17 00:43:35.888470 systemd[1]: Reached target timers.target. May 17 00:43:35.889513 systemd[1]: Listening on dbus.socket. May 17 00:43:35.892956 systemd[1]: Starting docker.socket... May 17 00:43:35.897669 systemd[1]: Listening on sshd.socket. 
May 17 00:43:35.898498 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:35.899413 systemd[1]: Listening on docker.socket. May 17 00:43:35.899985 systemd[1]: Reached target sockets.target. May 17 00:43:35.900537 systemd[1]: Reached target basic.target. May 17 00:43:35.901268 systemd[1]: System is tainted: cgroupsv1 May 17 00:43:35.901384 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:43:35.901424 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:43:35.903536 systemd[1]: Starting containerd.service... May 17 00:43:35.905653 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 00:43:35.911700 systemd[1]: Starting dbus.service... May 17 00:43:35.917787 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:43:35.922774 systemd[1]: Starting extend-filesystems.service... May 17 00:43:35.923429 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:43:35.926916 systemd[1]: Starting kubelet.service... May 17 00:43:35.929798 systemd[1]: Starting motdgen.service... May 17 00:43:35.934181 systemd[1]: Starting prepare-helm.service... May 17 00:43:35.945586 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:43:36.398301 jq[1275]: false May 17 00:43:36.386615 systemd-timesyncd[1222]: Contacted time server 23.150.41.122:123 (0.flatcar.pool.ntp.org). May 17 00:43:36.386714 systemd-timesyncd[1222]: Initial clock synchronization to Sat 2025-05-17 00:43:36.386420 UTC. May 17 00:43:36.387678 systemd[1]: Starting sshd-keygen.service... May 17 00:43:36.399632 systemd-resolved[1221]: Clock change detected. Flushing caches. 
May 17 00:43:36.399805 systemd[1]: Starting systemd-logind.service... May 17 00:43:36.400744 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:43:36.401237 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:43:36.403924 systemd[1]: Starting update-engine.service... May 17 00:43:36.410422 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:43:36.424598 jq[1291]: true May 17 00:43:36.427842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:43:36.428388 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:43:36.434875 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:43:36.439158 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:43:36.489432 tar[1297]: linux-amd64/helm May 17 00:43:36.504910 jq[1299]: true May 17 00:43:36.509904 dbus-daemon[1273]: [system] SELinux support is enabled May 17 00:43:36.511361 systemd[1]: Started dbus.service. May 17 00:43:36.516935 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:43:36.517031 systemd[1]: Reached target system-config.target. May 17 00:43:36.519582 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:43:36.519625 systemd[1]: Reached target user-config.target. 
May 17 00:43:36.521698 extend-filesystems[1276]: Found loop1 May 17 00:43:36.523001 extend-filesystems[1276]: Found vda May 17 00:43:36.542709 extend-filesystems[1276]: Found vda1 May 17 00:43:36.542709 extend-filesystems[1276]: Found vda2 May 17 00:43:36.542709 extend-filesystems[1276]: Found vda3 May 17 00:43:36.542709 extend-filesystems[1276]: Found usr May 17 00:43:36.542709 extend-filesystems[1276]: Found vda4 May 17 00:43:36.542709 extend-filesystems[1276]: Found vda6 May 17 00:43:36.566536 extend-filesystems[1276]: Found vda7 May 17 00:43:36.566536 extend-filesystems[1276]: Found vda9 May 17 00:43:36.566536 extend-filesystems[1276]: Checking size of /dev/vda9 May 17 00:43:36.618625 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:43:36.619010 systemd[1]: Finished motdgen.service. May 17 00:43:36.626075 extend-filesystems[1276]: Resized partition /dev/vda9 May 17 00:43:36.650205 extend-filesystems[1331]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:43:36.666427 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 17 00:43:36.670350 update_engine[1289]: I0517 00:43:36.669667 1289 main.cc:92] Flatcar Update Engine starting May 17 00:43:36.676620 systemd[1]: Started update-engine.service. May 17 00:43:36.678049 update_engine[1289]: I0517 00:43:36.677627 1289 update_check_scheduler.cc:74] Next update check in 6m36s May 17 00:43:36.680237 systemd[1]: Started locksmithd.service. May 17 00:43:36.760363 bash[1335]: Updated "/home/core/.ssh/authorized_keys" May 17 00:43:36.762439 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:43:36.815421 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 17 00:43:36.855256 extend-filesystems[1331]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:43:36.855256 extend-filesystems[1331]: old_desc_blocks = 1, new_desc_blocks = 8 May 17 00:43:36.855256 extend-filesystems[1331]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
May 17 00:43:36.854344 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:43:36.860955 extend-filesystems[1276]: Resized filesystem in /dev/vda9 May 17 00:43:36.860955 extend-filesystems[1276]: Found vdb May 17 00:43:36.854739 systemd[1]: Finished extend-filesystems.service. May 17 00:43:36.877684 env[1303]: time="2025-05-17T00:43:36.864199833Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:43:36.892168 systemd-logind[1288]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:43:36.892658 systemd-logind[1288]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:43:36.897606 systemd-logind[1288]: New seat seat0. May 17 00:43:36.907540 coreos-metadata[1271]: May 17 00:43:36.895 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:43:36.913416 systemd[1]: Started systemd-logind.service. May 17 00:43:36.934523 coreos-metadata[1271]: May 17 00:43:36.934 INFO Fetch successful May 17 00:43:36.960915 env[1303]: time="2025-05-17T00:43:36.960789705Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:43:36.961123 env[1303]: time="2025-05-17T00:43:36.961094546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:43:36.965431 unknown[1271]: wrote ssh authorized keys file for user: core May 17 00:43:36.965964 env[1303]: time="2025-05-17T00:43:36.965799298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:43:36.965964 env[1303]: time="2025-05-17T00:43:36.965848035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.978925301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.978998664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.979035977Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.979068991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.979265669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.979731808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.980124401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.980166036Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.980263446Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:43:36.980683 env[1303]: time="2025-05-17T00:43:36.980282263Z" level=info msg="metadata content store policy set" policy=shared May 17 00:43:36.992768 update-ssh-keys[1345]: Updated "/home/core/.ssh/authorized_keys" May 17 00:43:36.994119 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:43:36.996173 env[1303]: time="2025-05-17T00:43:36.996039498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:43:36.996173 env[1303]: time="2025-05-17T00:43:36.996107005Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997171986Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997278651Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997413412Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997439372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997459829Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997482926Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997501428Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997523807Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997546425Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997570652Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.997762401Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:36.999375999Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:37.000131472Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:43:37.002167 env[1303]: time="2025-05-17T00:43:37.000184214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000206302Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000274678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000292857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000310301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000345285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000366840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000383564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000568852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000623424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000658281Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000856419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000880591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000902832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:43:37.006013 env[1303]: time="2025-05-17T00:43:37.000921095Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 17 00:43:37.008998 env[1303]: time="2025-05-17T00:43:37.000945571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:43:37.008998 env[1303]: time="2025-05-17T00:43:37.000963217Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:43:37.008998 env[1303]: time="2025-05-17T00:43:37.000989390Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:43:37.010259 env[1303]: time="2025-05-17T00:43:37.001036330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:43:37.010901 env[1303]: time="2025-05-17T00:43:37.010798291Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:43:37.012949 env[1303]: time="2025-05-17T00:43:37.011293596Z" level=info msg="Connect containerd service" May 17 00:43:37.012949 env[1303]: time="2025-05-17T00:43:37.011407879Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:43:37.013881 env[1303]: time="2025-05-17T00:43:37.013834920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:43:37.015707 env[1303]: time="2025-05-17T00:43:37.015669191Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:43:37.015898 env[1303]: time="2025-05-17T00:43:37.015881282Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 00:43:37.016180 env[1303]: time="2025-05-17T00:43:37.016160412Z" level=info msg="containerd successfully booted in 0.215149s" May 17 00:43:37.016404 systemd[1]: Started containerd.service. May 17 00:43:37.043966 env[1303]: time="2025-05-17T00:43:37.043872602Z" level=info msg="Start subscribing containerd event" May 17 00:43:37.043966 env[1303]: time="2025-05-17T00:43:37.043990803Z" level=info msg="Start recovering state" May 17 00:43:37.044177 env[1303]: time="2025-05-17T00:43:37.044110907Z" level=info msg="Start event monitor" May 17 00:43:37.044177 env[1303]: time="2025-05-17T00:43:37.044134929Z" level=info msg="Start snapshots syncer" May 17 00:43:37.044177 env[1303]: time="2025-05-17T00:43:37.044156870Z" level=info msg="Start cni network conf syncer for default" May 17 00:43:37.044177 env[1303]: time="2025-05-17T00:43:37.044169057Z" level=info msg="Start streaming server" May 17 00:43:37.372828 systemd[1]: Created slice system-sshd.slice. May 17 00:43:37.826559 tar[1297]: linux-amd64/LICENSE May 17 00:43:37.826559 tar[1297]: linux-amd64/README.md May 17 00:43:37.833307 systemd[1]: Finished prepare-helm.service. May 17 00:43:37.904129 locksmithd[1336]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:43:38.309418 sshd_keygen[1301]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:43:38.348170 systemd[1]: Finished sshd-keygen.service. May 17 00:43:38.351637 systemd[1]: Starting issuegen.service... May 17 00:43:38.354923 systemd[1]: Started sshd@0-137.184.126.228:22-147.75.109.163:48600.service. May 17 00:43:38.375172 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:43:38.375597 systemd[1]: Finished issuegen.service. May 17 00:43:38.379342 systemd[1]: Starting systemd-user-sessions.service... May 17 00:43:38.395958 systemd[1]: Finished systemd-user-sessions.service. May 17 00:43:38.399192 systemd[1]: Started getty@tty1.service. 
May 17 00:43:38.403709 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:43:38.405393 systemd[1]: Reached target getty.target. May 17 00:43:38.467922 sshd[1366]: Accepted publickey for core from 147.75.109.163 port 48600 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:38.472154 sshd[1366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:38.493871 systemd[1]: Created slice user-500.slice. May 17 00:43:38.497086 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:43:38.505836 systemd-logind[1288]: New session 1 of user core. May 17 00:43:38.521801 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:43:38.525703 systemd[1]: Starting user@500.service... May 17 00:43:38.542474 (systemd)[1379]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:38.568467 systemd[1]: Started kubelet.service. May 17 00:43:38.570412 systemd[1]: Reached target multi-user.target. May 17 00:43:38.574288 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:43:38.598456 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:43:38.598953 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:43:38.701378 systemd[1379]: Queued start job for default target default.target. May 17 00:43:38.701756 systemd[1379]: Reached target paths.target. May 17 00:43:38.701780 systemd[1379]: Reached target sockets.target. May 17 00:43:38.701797 systemd[1379]: Reached target timers.target. May 17 00:43:38.701814 systemd[1379]: Reached target basic.target. May 17 00:43:38.702028 systemd[1]: Started user@500.service. May 17 00:43:38.703986 systemd[1]: Started session-1.scope. May 17 00:43:38.707030 systemd[1]: Startup finished in 6.520s (kernel) + 8.843s (userspace) = 15.363s. May 17 00:43:38.708186 systemd[1379]: Reached target default.target. May 17 00:43:38.708635 systemd[1379]: Startup finished in 153ms. 
May 17 00:43:38.781139 systemd[1]: Started sshd@1-137.184.126.228:22-147.75.109.163:55598.service. May 17 00:43:38.856275 sshd[1400]: Accepted publickey for core from 147.75.109.163 port 55598 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:38.859241 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:38.872859 systemd[1]: Started session-2.scope. May 17 00:43:38.873960 systemd-logind[1288]: New session 2 of user core. May 17 00:43:38.950033 sshd[1400]: pam_unix(sshd:session): session closed for user core May 17 00:43:38.958617 systemd[1]: Started sshd@2-137.184.126.228:22-147.75.109.163:55612.service. May 17 00:43:38.962111 systemd[1]: sshd@1-137.184.126.228:22-147.75.109.163:55598.service: Deactivated successfully. May 17 00:43:38.963475 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:43:38.971638 systemd-logind[1288]: Session 2 logged out. Waiting for processes to exit. May 17 00:43:38.977832 systemd-logind[1288]: Removed session 2. May 17 00:43:39.016625 sshd[1405]: Accepted publickey for core from 147.75.109.163 port 55612 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:39.019429 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:39.027675 systemd-logind[1288]: New session 3 of user core. May 17 00:43:39.028377 systemd[1]: Started session-3.scope. May 17 00:43:39.100889 sshd[1405]: pam_unix(sshd:session): session closed for user core May 17 00:43:39.105438 systemd[1]: Started sshd@3-137.184.126.228:22-147.75.109.163:55624.service. May 17 00:43:39.109062 systemd[1]: sshd@2-137.184.126.228:22-147.75.109.163:55612.service: Deactivated successfully. May 17 00:43:39.110274 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:43:39.115307 systemd-logind[1288]: Session 3 logged out. Waiting for processes to exit. May 17 00:43:39.117727 systemd-logind[1288]: Removed session 3. 
May 17 00:43:39.168934 sshd[1412]: Accepted publickey for core from 147.75.109.163 port 55624 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:39.170351 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:39.178526 systemd[1]: Started session-4.scope. May 17 00:43:39.180464 systemd-logind[1288]: New session 4 of user core. May 17 00:43:39.253165 sshd[1412]: pam_unix(sshd:session): session closed for user core May 17 00:43:39.257744 systemd[1]: Started sshd@4-137.184.126.228:22-147.75.109.163:55640.service. May 17 00:43:39.261813 systemd[1]: sshd@3-137.184.126.228:22-147.75.109.163:55624.service: Deactivated successfully. May 17 00:43:39.266726 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:43:39.267906 systemd-logind[1288]: Session 4 logged out. Waiting for processes to exit. May 17 00:43:39.271093 systemd-logind[1288]: Removed session 4. May 17 00:43:39.316511 sshd[1419]: Accepted publickey for core from 147.75.109.163 port 55640 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:39.319268 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:39.326554 systemd[1]: Started session-5.scope. May 17 00:43:39.326936 systemd-logind[1288]: New session 5 of user core. May 17 00:43:39.415618 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:43:39.418705 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:43:39.470203 systemd[1]: Starting docker.service... 
May 17 00:43:39.545277 env[1435]: time="2025-05-17T00:43:39.545187177Z" level=info msg="Starting up"
May 17 00:43:39.553083 env[1435]: time="2025-05-17T00:43:39.552779442Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:43:39.553083 env[1435]: time="2025-05-17T00:43:39.552844777Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:43:39.553083 env[1435]: time="2025-05-17T00:43:39.552909096Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:43:39.553083 env[1435]: time="2025-05-17T00:43:39.552930811Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:43:39.558378 env[1435]: time="2025-05-17T00:43:39.556373222Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:43:39.558378 env[1435]: time="2025-05-17T00:43:39.556402126Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:43:39.558378 env[1435]: time="2025-05-17T00:43:39.556425555Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:43:39.558378 env[1435]: time="2025-05-17T00:43:39.556435142Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:43:39.565041 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2282782459-merged.mount: Deactivated successfully.
May 17 00:43:39.585575 kubelet[1387]: E0517 00:43:39.585356 1387 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:43:39.588926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:43:39.589185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:43:39.649377 env[1435]: time="2025-05-17T00:43:39.649287207Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 17 00:43:39.649377 env[1435]: time="2025-05-17T00:43:39.649353494Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 17 00:43:39.649690 env[1435]: time="2025-05-17T00:43:39.649622066Z" level=info msg="Loading containers: start."
May 17 00:43:39.831404 kernel: Initializing XFRM netlink socket
May 17 00:43:39.877578 env[1435]: time="2025-05-17T00:43:39.877526296Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:43:39.992748 systemd-networkd[1071]: docker0: Link UP
May 17 00:43:40.013848 env[1435]: time="2025-05-17T00:43:40.013795970Z" level=info msg="Loading containers: done."
May 17 00:43:40.028920 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck816067193-merged.mount: Deactivated successfully.
May 17 00:43:40.033022 env[1435]: time="2025-05-17T00:43:40.032941498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:43:40.033597 env[1435]: time="2025-05-17T00:43:40.033565594Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:43:40.033906 env[1435]: time="2025-05-17T00:43:40.033883582Z" level=info msg="Daemon has completed initialization"
May 17 00:43:40.050267 systemd[1]: Started docker.service.
May 17 00:43:40.061418 env[1435]: time="2025-05-17T00:43:40.061309186Z" level=info msg="API listen on /run/docker.sock"
May 17 00:43:40.090026 systemd[1]: Starting coreos-metadata.service...
May 17 00:43:40.169737 coreos-metadata[1553]: May 17 00:43:40.169 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:43:40.182075 coreos-metadata[1553]: May 17 00:43:40.182 INFO Fetch successful
May 17 00:43:40.199684 systemd[1]: Finished coreos-metadata.service.
May 17 00:43:41.155972 env[1303]: time="2025-05-17T00:43:41.155914722Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 17 00:43:41.743639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248894341.mount: Deactivated successfully.
May 17 00:43:43.368610 env[1303]: time="2025-05-17T00:43:43.368526819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:43.372372 env[1303]: time="2025-05-17T00:43:43.371227151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:43.373524 env[1303]: time="2025-05-17T00:43:43.373451102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:43.376243 env[1303]: time="2025-05-17T00:43:43.376191860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:43.377512 env[1303]: time="2025-05-17T00:43:43.377461986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 17 00:43:43.378574 env[1303]: time="2025-05-17T00:43:43.378537735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 17 00:43:45.112376 env[1303]: time="2025-05-17T00:43:45.112255330Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:45.114384 env[1303]: time="2025-05-17T00:43:45.114292314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:45.116762 env[1303]: time="2025-05-17T00:43:45.116712758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:45.118835 env[1303]: time="2025-05-17T00:43:45.118788845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:45.119715 env[1303]: time="2025-05-17T00:43:45.119669299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 17 00:43:45.120923 env[1303]: time="2025-05-17T00:43:45.120888043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 17 00:43:46.545077 env[1303]: time="2025-05-17T00:43:46.545013062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:46.546857 env[1303]: time="2025-05-17T00:43:46.546800656Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:46.549076 env[1303]: time="2025-05-17T00:43:46.549034503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:46.550214 env[1303]: time="2025-05-17T00:43:46.550171515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 17 00:43:46.550823 env[1303]: time="2025-05-17T00:43:46.550786593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 17 00:43:46.551802 env[1303]: time="2025-05-17T00:43:46.551763941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:47.719010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478869156.mount: Deactivated successfully.
May 17 00:43:48.566260 env[1303]: time="2025-05-17T00:43:48.566165876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:48.569616 env[1303]: time="2025-05-17T00:43:48.569244608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:48.570716 env[1303]: time="2025-05-17T00:43:48.570674974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:48.572088 env[1303]: time="2025-05-17T00:43:48.572050036Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:48.572730 env[1303]: time="2025-05-17T00:43:48.572695214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 17 00:43:48.573606 env[1303]: time="2025-05-17T00:43:48.573576203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:43:49.074456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809694469.mount: Deactivated successfully.
May 17 00:43:49.821459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:43:49.821749 systemd[1]: Stopped kubelet.service.
May 17 00:43:49.824952 systemd[1]: Starting kubelet.service...
May 17 00:43:49.968563 systemd[1]: Started kubelet.service.
May 17 00:43:50.059873 kubelet[1582]: E0517 00:43:50.059820 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:43:50.063989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:43:50.064276 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:43:50.326651 env[1303]: time="2025-05-17T00:43:50.326556749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.328878 env[1303]: time="2025-05-17T00:43:50.328827974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.331331 env[1303]: time="2025-05-17T00:43:50.331266729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.334383 env[1303]: time="2025-05-17T00:43:50.334332225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.335496 env[1303]: time="2025-05-17T00:43:50.335450749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:43:50.336034 env[1303]: time="2025-05-17T00:43:50.336006125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:43:50.836985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139274217.mount: Deactivated successfully.
May 17 00:43:50.846097 env[1303]: time="2025-05-17T00:43:50.846010805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.848640 env[1303]: time="2025-05-17T00:43:50.848574095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.850414 env[1303]: time="2025-05-17T00:43:50.850309344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.852633 env[1303]: time="2025-05-17T00:43:50.852562213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:50.853262 env[1303]: time="2025-05-17T00:43:50.853226100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:43:50.853886 env[1303]: time="2025-05-17T00:43:50.853862014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 17 00:43:51.356099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734627819.mount: Deactivated successfully.
May 17 00:43:53.958160 env[1303]: time="2025-05-17T00:43:53.958035910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:53.960779 env[1303]: time="2025-05-17T00:43:53.960719174Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:53.963689 env[1303]: time="2025-05-17T00:43:53.963618936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:53.965977 env[1303]: time="2025-05-17T00:43:53.965932737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:43:53.968506 env[1303]: time="2025-05-17T00:43:53.968443721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 17 00:43:56.845105 systemd[1]: Stopped kubelet.service.
May 17 00:43:56.849255 systemd[1]: Starting kubelet.service...
May 17 00:43:56.897039 systemd[1]: Reloading.
May 17 00:43:57.012636 /usr/lib/systemd/system-generators/torcx-generator[1638]: time="2025-05-17T00:43:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:43:57.013162 /usr/lib/systemd/system-generators/torcx-generator[1638]: time="2025-05-17T00:43:57Z" level=info msg="torcx already run"
May 17 00:43:57.177820 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:43:57.178097 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:43:57.209667 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:43:57.348462 systemd[1]: Started kubelet.service.
May 17 00:43:57.355245 systemd[1]: Stopping kubelet.service...
May 17 00:43:57.357300 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:43:57.357695 systemd[1]: Stopped kubelet.service.
May 17 00:43:57.360660 systemd[1]: Starting kubelet.service...
May 17 00:43:57.537565 systemd[1]: Started kubelet.service.
May 17 00:43:57.628306 kubelet[1706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:43:57.628794 kubelet[1706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:43:57.628863 kubelet[1706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:43:57.629014 kubelet[1706]: I0517 00:43:57.628983 1706 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:43:57.877204 kubelet[1706]: I0517 00:43:57.877150 1706 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:43:57.877471 kubelet[1706]: I0517 00:43:57.877449 1706 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:43:57.877844 kubelet[1706]: I0517 00:43:57.877827 1706 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:43:57.923586 kubelet[1706]: E0517 00:43:57.923530 1706 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.126.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError"
May 17 00:43:57.939814 kubelet[1706]: I0517 00:43:57.939748 1706 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:43:57.950075 kubelet[1706]: E0517 00:43:57.950015 1706 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:43:57.950387 kubelet[1706]: I0517 00:43:57.950358 1706 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:43:57.958345 kubelet[1706]: I0517 00:43:57.958264 1706 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:43:57.960000 kubelet[1706]: I0517 00:43:57.959961 1706 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 00:43:57.960486 kubelet[1706]: I0517 00:43:57.960431 1706 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:43:57.960864 kubelet[1706]: I0517 00:43:57.960629 1706 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-d30b09a4ce","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 17 00:43:57.961035 kubelet[1706]: I0517 00:43:57.961019 1706 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:43:57.961102 kubelet[1706]: I0517 00:43:57.961092 1706 container_manager_linux.go:300] "Creating device plugin manager"
May 17 00:43:57.961271 kubelet[1706]: I0517 00:43:57.961259 1706 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:43:57.967211 kubelet[1706]: I0517 00:43:57.967137 1706 kubelet.go:408] "Attempting to sync node with API server"
May 17 00:43:57.967566 kubelet[1706]: I0517 00:43:57.967521 1706 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:43:57.967782 kubelet[1706]: I0517 00:43:57.967758 1706 kubelet.go:314] "Adding apiserver pod source"
May 17 00:43:57.967961 kubelet[1706]: I0517 00:43:57.967938 1706 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:43:57.975116 kubelet[1706]: I0517 00:43:57.975077 1706 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:43:57.975649 kubelet[1706]: I0517 00:43:57.975620 1706 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:43:57.975756 kubelet[1706]: W0517 00:43:57.975700 1706 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:43:57.986678 kubelet[1706]: I0517 00:43:57.986619 1706 server.go:1274] "Started kubelet"
May 17 00:43:57.987411 kubelet[1706]: W0517 00:43:57.987072 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.126.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-d30b09a4ce&limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused
May 17 00:43:57.987411 kubelet[1706]: E0517 00:43:57.987184 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.126.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-d30b09a4ce&limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError"
May 17 00:43:57.990084 kubelet[1706]: W0517 00:43:57.990013 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.126.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused
May 17 00:43:57.990346 kubelet[1706]: E0517 00:43:57.990289 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.126.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError"
May 17 00:43:57.992491 kubelet[1706]: I0517 00:43:57.992439 1706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:43:57.994967 kubelet[1706]: I0517 00:43:57.994933 1706 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:43:58.001509 kubelet[1706]: I0517 00:43:58.001430 1706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:43:58.002153 kubelet[1706]: I0517 00:43:58.002119 1706 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:43:58.011509 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 17 00:43:58.011671 kubelet[1706]: E0517 00:43:58.009216 1706 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.126.228:6443/api/v1/namespaces/default/events\": dial tcp 137.184.126.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-d30b09a4ce.184029d2268f6427 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-d30b09a4ce,UID:ci-3510.3.7-n-d30b09a4ce,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-d30b09a4ce,},FirstTimestamp:2025-05-17 00:43:57.986554919 +0000 UTC m=+0.431507347,LastTimestamp:2025-05-17 00:43:57.986554919 +0000 UTC m=+0.431507347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-d30b09a4ce,}"
May 17 00:43:58.012290 kubelet[1706]: I0517 00:43:58.012268 1706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:43:58.013112 kubelet[1706]: I0517 00:43:58.013085 1706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:43:58.015869 kubelet[1706]: E0517 00:43:58.015827 1706 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:43:58.017485 kubelet[1706]: I0517 00:43:58.017455 1706 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:43:58.017703 kubelet[1706]: I0517 00:43:58.017621 1706 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:43:58.017703 kubelet[1706]: I0517 00:43:58.017675 1706 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:43:58.018483 kubelet[1706]: I0517 00:43:58.018456 1706 factory.go:221] Registration of the systemd container factory successfully
May 17 00:43:58.018589 kubelet[1706]: I0517 00:43:58.018542 1706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:43:58.019238 kubelet[1706]: W0517 00:43:58.019177 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.126.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused
May 17 00:43:58.019390 kubelet[1706]: E0517 00:43:58.019250 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.126.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError"
May 17 00:43:58.019488 kubelet[1706]: E0517 00:43:58.019201 1706 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-d30b09a4ce\" not found"
May 17 00:43:58.020093 kubelet[1706]: E0517 00:43:58.020054 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.126.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-d30b09a4ce?timeout=10s\": dial tcp 137.184.126.228:6443: connect: connection refused" interval="200ms"
May 17 00:43:58.020996 kubelet[1706]: I0517 00:43:58.020971 1706 factory.go:221] Registration of the containerd container factory successfully
May 17 00:43:58.067047 kubelet[1706]: I0517 00:43:58.067003 1706 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 00:43:58.067047 kubelet[1706]: I0517 00:43:58.067030 1706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 00:43:58.067047 kubelet[1706]: I0517 00:43:58.067057 1706 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:43:58.069656 kubelet[1706]: I0517 00:43:58.069593 1706 policy_none.go:49] "None policy: Start"
May 17 00:43:58.070409 kubelet[1706]: I0517 00:43:58.070344 1706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:43:58.071773 kubelet[1706]: I0517 00:43:58.071432 1706 memory_manager.go:170] "Starting memorymanager" policy="None"
May 17 00:43:58.072200 kubelet[1706]: I0517 00:43:58.072177 1706 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:43:58.075764 kubelet[1706]: I0517 00:43:58.075730 1706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:43:58.075979 kubelet[1706]: I0517 00:43:58.075962 1706 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 00:43:58.076083 kubelet[1706]: I0517 00:43:58.076070 1706 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 00:43:58.076270 kubelet[1706]: E0517 00:43:58.076235 1706 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:43:58.077071 kubelet[1706]: W0517 00:43:58.076963 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.126.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused
May 17 00:43:58.077071 kubelet[1706]: E0517 00:43:58.077026 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.126.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError"
May 17 00:43:58.093298 kubelet[1706]: I0517 00:43:58.093253 1706 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:43:58.093722 kubelet[1706]: I0517 00:43:58.093694 1706 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:43:58.093919 kubelet[1706]: I0517 00:43:58.093868 1706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:43:58.095869 kubelet[1706]: E0517 00:43:58.095606 1706 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-d30b09a4ce\" not found"
May 17 00:43:58.096622 kubelet[1706]: I0517 00:43:58.096157 1706 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:43:58.197176 kubelet[1706]: I0517 00:43:58.197038 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.198921 kubelet[1706]: E0517 00:43:58.198758 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.126.228:6443/api/v1/nodes\": dial tcp 137.184.126.228:6443: connect: connection refused" node="ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.219054 kubelet[1706]: I0517 00:43:58.218970 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/177b2839abcb51cc89a0f11f18c4040e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" (UID: \"177b2839abcb51cc89a0f11f18c4040e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.219432 kubelet[1706]: I0517 00:43:58.219395 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.219648 kubelet[1706]: I0517 00:43:58.219617 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.219810 kubelet[1706]: I0517 00:43:58.219791 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.219948 kubelet[1706]: I0517 00:43:58.219929 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/731037837f9165e6e875d8c0c3545e5a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-d30b09a4ce\" (UID: \"731037837f9165e6e875d8c0c3545e5a\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.220103 kubelet[1706]: I0517 00:43:58.220076 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/177b2839abcb51cc89a0f11f18c4040e-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" (UID: \"177b2839abcb51cc89a0f11f18c4040e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.220595 kubelet[1706]: I0517 00:43:58.220561 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.220919 kubelet[1706]: I0517 00:43:58.220898 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:43:58.221089 kubelet[1706]: I0517
00:43:58.221067 1706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/177b2839abcb51cc89a0f11f18c4040e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" (UID: \"177b2839abcb51cc89a0f11f18c4040e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce" May 17 00:43:58.221206 kubelet[1706]: E0517 00:43:58.220833 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.126.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-d30b09a4ce?timeout=10s\": dial tcp 137.184.126.228:6443: connect: connection refused" interval="400ms" May 17 00:43:58.401685 kubelet[1706]: I0517 00:43:58.401625 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:43:58.402229 kubelet[1706]: E0517 00:43:58.402190 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.126.228:6443/api/v1/nodes\": dial tcp 137.184.126.228:6443: connect: connection refused" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:43:58.483607 kubelet[1706]: E0517 00:43:58.483460 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:43:58.485332 env[1303]: time="2025-05-17T00:43:58.484899900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-d30b09a4ce,Uid:731037837f9165e6e875d8c0c3545e5a,Namespace:kube-system,Attempt:0,}" May 17 00:43:58.486372 kubelet[1706]: E0517 00:43:58.486341 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:43:58.488623 env[1303]: time="2025-05-17T00:43:58.488252572Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-d30b09a4ce,Uid:177b2839abcb51cc89a0f11f18c4040e,Namespace:kube-system,Attempt:0,}" May 17 00:43:58.489272 kubelet[1706]: E0517 00:43:58.489082 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:43:58.490116 env[1303]: time="2025-05-17T00:43:58.490063394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-d30b09a4ce,Uid:2f5d6749271d5988d19d07c7541e19e7,Namespace:kube-system,Attempt:0,}" May 17 00:43:58.622540 kubelet[1706]: E0517 00:43:58.622433 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.126.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-d30b09a4ce?timeout=10s\": dial tcp 137.184.126.228:6443: connect: connection refused" interval="800ms" May 17 00:43:58.804239 kubelet[1706]: I0517 00:43:58.804099 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:43:58.805194 kubelet[1706]: E0517 00:43:58.805139 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.126.228:6443/api/v1/nodes\": dial tcp 137.184.126.228:6443: connect: connection refused" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:43:59.042408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867021229.mount: Deactivated successfully. 
May 17 00:43:59.051682 env[1303]: time="2025-05-17T00:43:59.051612202Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.052816 env[1303]: time="2025-05-17T00:43:59.052748399Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.058998 env[1303]: time="2025-05-17T00:43:59.058704561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.064733 env[1303]: time="2025-05-17T00:43:59.064254082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.068406 env[1303]: time="2025-05-17T00:43:59.068326115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.071294 env[1303]: time="2025-05-17T00:43:59.071229871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.074555 env[1303]: time="2025-05-17T00:43:59.074499016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.076261 env[1303]: time="2025-05-17T00:43:59.076201809Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.077744 env[1303]: time="2025-05-17T00:43:59.077697230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.079833 env[1303]: time="2025-05-17T00:43:59.079782819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.082108 env[1303]: time="2025-05-17T00:43:59.081939197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.088435 env[1303]: time="2025-05-17T00:43:59.088361247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:59.098686 kubelet[1706]: W0517 00:43:59.098233 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.126.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused May 17 00:43:59.098686 kubelet[1706]: E0517 00:43:59.098307 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.126.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:59.123619 
env[1303]: time="2025-05-17T00:43:59.123503619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:59.123619 env[1303]: time="2025-05-17T00:43:59.123563527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:59.123927 env[1303]: time="2025-05-17T00:43:59.123578775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:59.123927 env[1303]: time="2025-05-17T00:43:59.123786188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdca6eda188d4ec36aac24fbb9cfe3e6ef3c3a368006ccd3010a20fbc10802a7 pid=1754 runtime=io.containerd.runc.v2 May 17 00:43:59.136913 env[1303]: time="2025-05-17T00:43:59.136776019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:59.137228 env[1303]: time="2025-05-17T00:43:59.136853079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:59.137228 env[1303]: time="2025-05-17T00:43:59.136887704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:59.137609 env[1303]: time="2025-05-17T00:43:59.137542279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/232ba0185bd75136656292255d34175c18b6a4a958266ba67bcc3a2dbef1a0ff pid=1755 runtime=io.containerd.runc.v2 May 17 00:43:59.206688 env[1303]: time="2025-05-17T00:43:59.201919274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:59.206688 env[1303]: time="2025-05-17T00:43:59.201983334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:59.206688 env[1303]: time="2025-05-17T00:43:59.202002740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:59.206688 env[1303]: time="2025-05-17T00:43:59.202248807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef4dc45d36873e409564fc2084e312e6523e3b7da02102e04b13110fe381ccd1 pid=1791 runtime=io.containerd.runc.v2 May 17 00:43:59.291254 env[1303]: time="2025-05-17T00:43:59.291140845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-d30b09a4ce,Uid:731037837f9165e6e875d8c0c3545e5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdca6eda188d4ec36aac24fbb9cfe3e6ef3c3a368006ccd3010a20fbc10802a7\"" May 17 00:43:59.293869 kubelet[1706]: E0517 00:43:59.293499 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:43:59.297207 env[1303]: time="2025-05-17T00:43:59.297135433Z" level=info msg="CreateContainer within sandbox \"cdca6eda188d4ec36aac24fbb9cfe3e6ef3c3a368006ccd3010a20fbc10802a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:43:59.329787 env[1303]: time="2025-05-17T00:43:59.329709026Z" level=info msg="CreateContainer within sandbox \"cdca6eda188d4ec36aac24fbb9cfe3e6ef3c3a368006ccd3010a20fbc10802a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8686e34af67e0cec34f5472bca73980cabcb113c942d4252537701557340b1dd\"" May 17 00:43:59.331847 env[1303]: time="2025-05-17T00:43:59.331772810Z" 
level=info msg="StartContainer for \"8686e34af67e0cec34f5472bca73980cabcb113c942d4252537701557340b1dd\"" May 17 00:43:59.350136 env[1303]: time="2025-05-17T00:43:59.350075571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-d30b09a4ce,Uid:2f5d6749271d5988d19d07c7541e19e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"232ba0185bd75136656292255d34175c18b6a4a958266ba67bcc3a2dbef1a0ff\"" May 17 00:43:59.352356 kubelet[1706]: E0517 00:43:59.351982 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:43:59.356371 env[1303]: time="2025-05-17T00:43:59.356290950Z" level=info msg="CreateContainer within sandbox \"232ba0185bd75136656292255d34175c18b6a4a958266ba67bcc3a2dbef1a0ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:43:59.365466 env[1303]: time="2025-05-17T00:43:59.365405318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-d30b09a4ce,Uid:177b2839abcb51cc89a0f11f18c4040e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef4dc45d36873e409564fc2084e312e6523e3b7da02102e04b13110fe381ccd1\"" May 17 00:43:59.373209 kubelet[1706]: E0517 00:43:59.372908 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:43:59.375609 env[1303]: time="2025-05-17T00:43:59.375563675Z" level=info msg="CreateContainer within sandbox \"232ba0185bd75136656292255d34175c18b6a4a958266ba67bcc3a2dbef1a0ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e074aefc6e34fea029ea534102d4a43be900b802d05e7ebbf4ef913555f7cc5\"" May 17 00:43:59.376589 env[1303]: time="2025-05-17T00:43:59.376549847Z" level=info msg="StartContainer for 
\"6e074aefc6e34fea029ea534102d4a43be900b802d05e7ebbf4ef913555f7cc5\"" May 17 00:43:59.378596 env[1303]: time="2025-05-17T00:43:59.378540294Z" level=info msg="CreateContainer within sandbox \"ef4dc45d36873e409564fc2084e312e6523e3b7da02102e04b13110fe381ccd1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:43:59.393646 kubelet[1706]: W0517 00:43:59.393267 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.126.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-d30b09a4ce&limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused May 17 00:43:59.393646 kubelet[1706]: E0517 00:43:59.393411 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.126.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-d30b09a4ce&limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:59.398220 kubelet[1706]: W0517 00:43:59.395552 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.126.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused May 17 00:43:59.398220 kubelet[1706]: E0517 00:43:59.395635 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.126.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:59.431344 kubelet[1706]: E0517 00:43:59.426593 1706 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://137.184.126.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-d30b09a4ce?timeout=10s\": dial tcp 137.184.126.228:6443: connect: connection refused" interval="1.6s" May 17 00:43:59.456830 env[1303]: time="2025-05-17T00:43:59.454742031Z" level=info msg="CreateContainer within sandbox \"ef4dc45d36873e409564fc2084e312e6523e3b7da02102e04b13110fe381ccd1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f58e5e05fdfd54a80b601e5f5fb4e7dd3ef87442bc7577c6a2166575a3d1f6b1\"" May 17 00:43:59.458442 env[1303]: time="2025-05-17T00:43:59.458292604Z" level=info msg="StartContainer for \"f58e5e05fdfd54a80b601e5f5fb4e7dd3ef87442bc7577c6a2166575a3d1f6b1\"" May 17 00:43:59.499696 env[1303]: time="2025-05-17T00:43:59.499614256Z" level=info msg="StartContainer for \"8686e34af67e0cec34f5472bca73980cabcb113c942d4252537701557340b1dd\" returns successfully" May 17 00:43:59.521249 kubelet[1706]: W0517 00:43:59.521081 1706 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.126.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.126.228:6443: connect: connection refused May 17 00:43:59.521249 kubelet[1706]: E0517 00:43:59.521191 1706 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.126.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:43:59.608719 kubelet[1706]: I0517 00:43:59.607981 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:43:59.608719 kubelet[1706]: E0517 00:43:59.608557 1706 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.126.228:6443/api/v1/nodes\": dial tcp 
137.184.126.228:6443: connect: connection refused" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:43:59.616835 env[1303]: time="2025-05-17T00:43:59.616778723Z" level=info msg="StartContainer for \"6e074aefc6e34fea029ea534102d4a43be900b802d05e7ebbf4ef913555f7cc5\" returns successfully" May 17 00:43:59.682443 env[1303]: time="2025-05-17T00:43:59.682382817Z" level=info msg="StartContainer for \"f58e5e05fdfd54a80b601e5f5fb4e7dd3ef87442bc7577c6a2166575a3d1f6b1\" returns successfully" May 17 00:44:00.075261 kubelet[1706]: E0517 00:44:00.075203 1706 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.126.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.126.228:6443: connect: connection refused" logger="UnhandledError" May 17 00:44:00.090031 kubelet[1706]: E0517 00:44:00.089983 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:00.098022 kubelet[1706]: E0517 00:44:00.097976 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:00.111366 kubelet[1706]: E0517 00:44:00.111283 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:01.105668 kubelet[1706]: E0517 00:44:01.105626 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:01.187070 kubelet[1706]: E0517 00:44:01.187024 1706 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:01.209919 kubelet[1706]: I0517 00:44:01.209876 1706 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:44:02.108013 kubelet[1706]: E0517 00:44:02.107958 1706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:03.030454 kubelet[1706]: E0517 00:44:03.030391 1706 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-d30b09a4ce\" not found" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:44:03.095868 kubelet[1706]: I0517 00:44:03.095817 1706 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-d30b09a4ce" May 17 00:44:03.096263 kubelet[1706]: E0517 00:44:03.096236 1706 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-n-d30b09a4ce\": node \"ci-3510.3.7-n-d30b09a4ce\" not found" May 17 00:44:03.140485 kubelet[1706]: E0517 00:44:03.140098 1706 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.7-n-d30b09a4ce.184029d2268f6427 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-d30b09a4ce,UID:ci-3510.3.7-n-d30b09a4ce,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-d30b09a4ce,},FirstTimestamp:2025-05-17 00:43:57.986554919 +0000 UTC m=+0.431507347,LastTimestamp:2025-05-17 00:43:57.986554919 +0000 UTC m=+0.431507347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-d30b09a4ce,}" May 17 00:44:03.991184 kubelet[1706]: I0517 00:44:03.991135 1706 apiserver.go:52] "Watching apiserver" May 17 00:44:04.018506 kubelet[1706]: I0517 00:44:04.018446 1706 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:44:05.512105 systemd[1]: Reloading. May 17 00:44:05.625656 /usr/lib/systemd/system-generators/torcx-generator[2005]: time="2025-05-17T00:44:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:44:05.634702 /usr/lib/systemd/system-generators/torcx-generator[2005]: time="2025-05-17T00:44:05Z" level=info msg="torcx already run" May 17 00:44:05.773858 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:44:05.774187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:44:05.804087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:44:05.927974 systemd[1]: Stopping kubelet.service... May 17 00:44:05.951200 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:44:05.951736 systemd[1]: Stopped kubelet.service. May 17 00:44:05.956004 systemd[1]: Starting kubelet.service... May 17 00:44:07.203507 systemd[1]: Started kubelet.service. 
May 17 00:44:07.338252 kubelet[2064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:44:07.338738 kubelet[2064]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:44:07.339059 kubelet[2064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:44:07.341636 kubelet[2064]: I0517 00:44:07.341449 2064 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:44:07.366968 kubelet[2064]: I0517 00:44:07.366267 2064 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:44:07.366968 kubelet[2064]: I0517 00:44:07.366330 2064 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:44:07.378605 kubelet[2064]: I0517 00:44:07.374452 2064 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:44:07.388595 kubelet[2064]: I0517 00:44:07.387485 2064 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:44:07.404531 kubelet[2064]: I0517 00:44:07.402247 2064 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:44:07.425782 kubelet[2064]: E0517 00:44:07.425706 2064 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:44:07.425782 kubelet[2064]: I0517 00:44:07.425792 2064 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:44:07.430125 sudo[2079]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:44:07.431492 sudo[2079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:44:07.431735 kubelet[2064]: I0517 00:44:07.431708 2064 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
May 17 00:44:07.433122 kubelet[2064]: I0517 00:44:07.433008 2064 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 00:44:07.433256 kubelet[2064]: I0517 00:44:07.433175 2064 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:44:07.433559 kubelet[2064]: I0517 00:44:07.433216 2064 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-d30b09a4ce","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 17 00:44:07.433559 kubelet[2064]: I0517 00:44:07.433501 2064 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:44:07.433559 kubelet[2064]: I0517 00:44:07.433514 2064 container_manager_linux.go:300] "Creating device plugin manager"
May 17 00:44:07.433559 kubelet[2064]: I0517 00:44:07.433562 2064 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:44:07.434039 kubelet[2064]: I0517 00:44:07.433704 2064 kubelet.go:408] "Attempting to sync node with API server"
May 17 00:44:07.434039 kubelet[2064]: I0517 00:44:07.433721 2064 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:44:07.439264 kubelet[2064]: I0517 00:44:07.434552 2064 kubelet.go:314] "Adding apiserver pod source"
May 17 00:44:07.439264 kubelet[2064]: I0517 00:44:07.434572 2064 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:44:07.457109 kubelet[2064]: I0517 00:44:07.450801 2064 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:44:07.457109 kubelet[2064]: I0517 00:44:07.453629 2064 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:44:07.457109 kubelet[2064]: I0517 00:44:07.454327 2064 server.go:1274] "Started kubelet"
May 17 00:44:07.458504 kubelet[2064]: I0517 00:44:07.458472 2064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:44:07.471394 kubelet[2064]: I0517 00:44:07.467917 2064 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:44:07.474450 kubelet[2064]: I0517 00:44:07.472210 2064 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:44:07.474450 kubelet[2064]: E0517 00:44:07.472599 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-d30b09a4ce\" not found"
May 17 00:44:07.479679 kubelet[2064]: I0517 00:44:07.479604 2064 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:44:07.481962 kubelet[2064]: I0517 00:44:07.481917 2064 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:44:07.488099 kubelet[2064]: I0517 00:44:07.488018 2064 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:44:07.489885 kubelet[2064]: I0517 00:44:07.489841 2064 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:44:07.498802 kubelet[2064]: I0517 00:44:07.498735 2064 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:44:07.499346 kubelet[2064]: I0517 00:44:07.499294 2064 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:44:07.511765 kubelet[2064]: I0517 00:44:07.510980 2064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:44:07.518211 kubelet[2064]: E0517 00:44:07.518157 2064 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:44:07.519977 kubelet[2064]: I0517 00:44:07.513971 2064 factory.go:221] Registration of the systemd container factory successfully
May 17 00:44:07.520175 kubelet[2064]: I0517 00:44:07.520143 2064 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:44:07.528124 kubelet[2064]: I0517 00:44:07.527546 2064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:44:07.528124 kubelet[2064]: I0517 00:44:07.527618 2064 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 00:44:07.528124 kubelet[2064]: I0517 00:44:07.527652 2064 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 00:44:07.528124 kubelet[2064]: E0517 00:44:07.527754 2064 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:44:07.528680 kubelet[2064]: I0517 00:44:07.528659 2064 factory.go:221] Registration of the containerd container factory successfully
May 17 00:44:07.627923 kubelet[2064]: E0517 00:44:07.627875 2064 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 17 00:44:07.631192 kubelet[2064]: I0517 00:44:07.631163 2064 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 00:44:07.631460 kubelet[2064]: I0517 00:44:07.631441 2064 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 00:44:07.631551 kubelet[2064]: I0517 00:44:07.631539 2064 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:44:07.631823 kubelet[2064]: I0517 00:44:07.631804 2064 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:44:07.631946 kubelet[2064]: I0517 00:44:07.631912 2064 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:44:07.632028 kubelet[2064]: I0517 00:44:07.632016 2064 policy_none.go:49] "None policy: Start"
May 17 00:44:07.633298 kubelet[2064]: I0517 00:44:07.633261 2064 memory_manager.go:170] "Starting memorymanager" policy="None"
May 17 00:44:07.633523 kubelet[2064]: I0517 00:44:07.633508 2064 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:44:07.633900 kubelet[2064]: I0517 00:44:07.633885 2064 state_mem.go:75] "Updated machine memory state"
May 17 00:44:07.635586 kubelet[2064]: I0517 00:44:07.635554 2064 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:44:07.635965 kubelet[2064]: I0517 00:44:07.635922 2064 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:44:07.636114 kubelet[2064]: I0517 00:44:07.636071 2064 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:44:07.638107 kubelet[2064]: I0517 00:44:07.638078 2064 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:44:07.745182 kubelet[2064]: I0517 00:44:07.745056 2064 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.757260 kubelet[2064]: I0517 00:44:07.757221 2064 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.757545 kubelet[2064]: I0517 00:44:07.757533 2064 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.844013 kubelet[2064]: W0517 00:44:07.843954 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:44:07.844267 kubelet[2064]: W0517 00:44:07.844244 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:44:07.844426 kubelet[2064]: W0517 00:44:07.843977 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:44:07.902240 kubelet[2064]: I0517 00:44:07.902172 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/177b2839abcb51cc89a0f11f18c4040e-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" (UID: \"177b2839abcb51cc89a0f11f18c4040e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902240 kubelet[2064]: I0517 00:44:07.902237 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/177b2839abcb51cc89a0f11f18c4040e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" (UID: \"177b2839abcb51cc89a0f11f18c4040e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902465 kubelet[2064]: I0517 00:44:07.902263 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902465 kubelet[2064]: I0517 00:44:07.902293 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902465 kubelet[2064]: I0517 00:44:07.902337 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/731037837f9165e6e875d8c0c3545e5a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-d30b09a4ce\" (UID: \"731037837f9165e6e875d8c0c3545e5a\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902465 kubelet[2064]: I0517 00:44:07.902360 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/177b2839abcb51cc89a0f11f18c4040e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" (UID: \"177b2839abcb51cc89a0f11f18c4040e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902465 kubelet[2064]: I0517 00:44:07.902380 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902677 kubelet[2064]: I0517 00:44:07.902401 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:07.902677 kubelet[2064]: I0517 00:44:07.902424 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f5d6749271d5988d19d07c7541e19e7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" (UID: \"2f5d6749271d5988d19d07c7541e19e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:08.145711 kubelet[2064]: E0517 00:44:08.145657 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:08.146228 kubelet[2064]: E0517 00:44:08.146195 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:08.146551 kubelet[2064]: E0517 00:44:08.146522 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:08.277374 sudo[2079]: pam_unix(sudo:session): session closed for user root
May 17 00:44:08.445674 kubelet[2064]: I0517 00:44:08.445542 2064 apiserver.go:52] "Watching apiserver"
May 17 00:44:08.499403 kubelet[2064]: I0517 00:44:08.499149 2064 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 00:44:08.521289 kubelet[2064]: I0517 00:44:08.521214 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce" podStartSLOduration=1.521191167 podStartE2EDuration="1.521191167s" podCreationTimestamp="2025-05-17 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:08.52079917 +0000 UTC m=+1.285438314" watchObservedRunningTime="2025-05-17 00:44:08.521191167 +0000 UTC m=+1.285830309"
May 17 00:44:08.550923 kubelet[2064]: I0517 00:44:08.550709 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-d30b09a4ce" podStartSLOduration=1.550684366 podStartE2EDuration="1.550684366s" podCreationTimestamp="2025-05-17 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:08.534693667 +0000 UTC m=+1.299332813" watchObservedRunningTime="2025-05-17 00:44:08.550684366 +0000 UTC m=+1.315323511"
May 17 00:44:08.584555 kubelet[2064]: E0517 00:44:08.584519 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:08.586974 kubelet[2064]: I0517 00:44:08.586908 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce" podStartSLOduration=1.58685649 podStartE2EDuration="1.58685649s" podCreationTimestamp="2025-05-17 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:08.552583352 +0000 UTC m=+1.317222491" watchObservedRunningTime="2025-05-17 00:44:08.58685649 +0000 UTC m=+1.351495621"
May 17 00:44:08.621552 kubelet[2064]: W0517 00:44:08.621463 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:44:08.621889 kubelet[2064]: E0517 00:44:08.621867 2064 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.7-n-d30b09a4ce\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:08.622150 kubelet[2064]: W0517 00:44:08.622010 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:44:08.622344 kubelet[2064]: E0517 00:44:08.622301 2064 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-d30b09a4ce\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-d30b09a4ce"
May 17 00:44:08.622640 kubelet[2064]: E0517 00:44:08.622624 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:08.622821 kubelet[2064]: E0517 00:44:08.622336 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:09.587043 kubelet[2064]: E0517 00:44:09.586175 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:09.587043 kubelet[2064]: E0517 00:44:09.587043 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:10.508375 sudo[1425]: pam_unix(sudo:session): session closed for user root
May 17 00:44:10.511989 sshd[1419]: pam_unix(sshd:session): session closed for user core
May 17 00:44:10.515619 systemd[1]: sshd@4-137.184.126.228:22-147.75.109.163:55640.service: Deactivated successfully.
May 17 00:44:10.517411 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:44:10.517790 systemd-logind[1288]: Session 5 logged out. Waiting for processes to exit.
May 17 00:44:10.519633 systemd-logind[1288]: Removed session 5.
May 17 00:44:11.886080 kubelet[2064]: I0517 00:44:11.886029 2064 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:44:11.887344 env[1303]: time="2025-05-17T00:44:11.887278229Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:44:11.888126 kubelet[2064]: I0517 00:44:11.888092 2064 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:44:12.836300 kubelet[2064]: E0517 00:44:12.836247 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:12.907347 kubelet[2064]: W0517 00:44:12.905740 2064 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-n-d30b09a4ce" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-d30b09a4ce' and this object
May 17 00:44:12.907347 kubelet[2064]: E0517 00:44:12.905809 2064 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.7-n-d30b09a4ce\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-d30b09a4ce' and this object" logger="UnhandledError"
May 17 00:44:12.907347 kubelet[2064]: W0517 00:44:12.906233 2064 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-n-d30b09a4ce" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-d30b09a4ce' and this object
May 17 00:44:12.907347 kubelet[2064]: E0517 00:44:12.906264 2064 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.7-n-d30b09a4ce\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-d30b09a4ce' and this object" logger="UnhandledError"
May 17 00:44:12.941350 kubelet[2064]: I0517 00:44:12.941280 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f71ead7-2ba9-4c18-831d-f2c43c4aed46-kube-proxy\") pod \"kube-proxy-4w68k\" (UID: \"3f71ead7-2ba9-4c18-831d-f2c43c4aed46\") " pod="kube-system/kube-proxy-4w68k"
May 17 00:44:12.941350 kubelet[2064]: I0517 00:44:12.941352 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f71ead7-2ba9-4c18-831d-f2c43c4aed46-lib-modules\") pod \"kube-proxy-4w68k\" (UID: \"3f71ead7-2ba9-4c18-831d-f2c43c4aed46\") " pod="kube-system/kube-proxy-4w68k"
May 17 00:44:12.941673 kubelet[2064]: I0517 00:44:12.941369 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-bpf-maps\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941673 kubelet[2064]: I0517 00:44:12.941386 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-cgroup\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941673 kubelet[2064]: I0517 00:44:12.941417 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f71ead7-2ba9-4c18-831d-f2c43c4aed46-xtables-lock\") pod \"kube-proxy-4w68k\" (UID: \"3f71ead7-2ba9-4c18-831d-f2c43c4aed46\") " pod="kube-system/kube-proxy-4w68k"
May 17 00:44:12.941673 kubelet[2064]: I0517 00:44:12.941434 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8vv\" (UniqueName: \"kubernetes.io/projected/3f71ead7-2ba9-4c18-831d-f2c43c4aed46-kube-api-access-7x8vv\") pod \"kube-proxy-4w68k\" (UID: \"3f71ead7-2ba9-4c18-831d-f2c43c4aed46\") " pod="kube-system/kube-proxy-4w68k"
May 17 00:44:12.941673 kubelet[2064]: I0517 00:44:12.941451 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-hostproc\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941673 kubelet[2064]: I0517 00:44:12.941466 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfpr\" (UniqueName: \"kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-kube-api-access-ckfpr\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941935 kubelet[2064]: I0517 00:44:12.941490 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-kernel\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941935 kubelet[2064]: I0517 00:44:12.941512 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-config-path\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941935 kubelet[2064]: I0517 00:44:12.941539 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-xtables-lock\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941935 kubelet[2064]: I0517 00:44:12.941572 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-hubble-tls\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941935 kubelet[2064]: I0517 00:44:12.941587 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-run\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.941935 kubelet[2064]: I0517 00:44:12.941603 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/122b5d14-a08b-4e78-a8c0-5aadccfba353-clustermesh-secrets\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.942221 kubelet[2064]: I0517 00:44:12.941620 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cni-path\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.942221 kubelet[2064]: I0517 00:44:12.941645 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-etc-cni-netd\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.942221 kubelet[2064]: I0517 00:44:12.941660 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-lib-modules\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:12.942221 kubelet[2064]: I0517 00:44:12.941676 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-net\") pod \"cilium-kvptf\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") " pod="kube-system/cilium-kvptf"
May 17 00:44:13.043078 kubelet[2064]: I0517 00:44:13.043020 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcf8x\" (UniqueName: \"kubernetes.io/projected/c550b965-b6f9-4419-ac4e-7dd2f6774589-kube-api-access-gcf8x\") pod \"cilium-operator-5d85765b45-2xsw5\" (UID: \"c550b965-b6f9-4419-ac4e-7dd2f6774589\") " pod="kube-system/cilium-operator-5d85765b45-2xsw5"
May 17 00:44:13.043335 kubelet[2064]: I0517 00:44:13.043135 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c550b965-b6f9-4419-ac4e-7dd2f6774589-cilium-config-path\") pod \"cilium-operator-5d85765b45-2xsw5\" (UID: \"c550b965-b6f9-4419-ac4e-7dd2f6774589\") " pod="kube-system/cilium-operator-5d85765b45-2xsw5"
May 17 00:44:13.058937 kubelet[2064]: I0517 00:44:13.058887 2064 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:44:13.182825 kubelet[2064]: E0517 00:44:13.182691 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:13.184673 env[1303]: time="2025-05-17T00:44:13.184344720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4w68k,Uid:3f71ead7-2ba9-4c18-831d-f2c43c4aed46,Namespace:kube-system,Attempt:0,}"
May 17 00:44:13.208092 env[1303]: time="2025-05-17T00:44:13.207957661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:44:13.208092 env[1303]: time="2025-05-17T00:44:13.208042393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:44:13.208714 env[1303]: time="2025-05-17T00:44:13.208059183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:44:13.209382 env[1303]: time="2025-05-17T00:44:13.209125229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cec13c9131c601fedfc21227d4da10f30bf2703003d12a7f6228ea231b115bc pid=2144 runtime=io.containerd.runc.v2
May 17 00:44:13.271846 env[1303]: time="2025-05-17T00:44:13.271779832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4w68k,Uid:3f71ead7-2ba9-4c18-831d-f2c43c4aed46,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cec13c9131c601fedfc21227d4da10f30bf2703003d12a7f6228ea231b115bc\""
May 17 00:44:13.273653 kubelet[2064]: E0517 00:44:13.273009 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:13.276565 kubelet[2064]: E0517 00:44:13.274861 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:13.276634 env[1303]: time="2025-05-17T00:44:13.275624722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2xsw5,Uid:c550b965-b6f9-4419-ac4e-7dd2f6774589,Namespace:kube-system,Attempt:0,}"
May 17 00:44:13.281176 env[1303]: time="2025-05-17T00:44:13.281112814Z" level=info msg="CreateContainer within sandbox \"5cec13c9131c601fedfc21227d4da10f30bf2703003d12a7f6228ea231b115bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:44:13.301446 env[1303]: time="2025-05-17T00:44:13.301372757Z" level=info msg="CreateContainer within sandbox \"5cec13c9131c601fedfc21227d4da10f30bf2703003d12a7f6228ea231b115bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65b7c4fcf952c60f9b8f3444e54d879400149b3a8322a156ce92003fa78738b9\""
May 17 00:44:13.304851 env[1303]: time="2025-05-17T00:44:13.304807087Z" level=info msg="StartContainer for \"65b7c4fcf952c60f9b8f3444e54d879400149b3a8322a156ce92003fa78738b9\""
May 17 00:44:13.308908 env[1303]: time="2025-05-17T00:44:13.308780635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:44:13.309130 env[1303]: time="2025-05-17T00:44:13.308871000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:44:13.309130 env[1303]: time="2025-05-17T00:44:13.308919412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:44:13.309442 env[1303]: time="2025-05-17T00:44:13.309368506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9 pid=2182 runtime=io.containerd.runc.v2
May 17 00:44:13.412758 env[1303]: time="2025-05-17T00:44:13.412699195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2xsw5,Uid:c550b965-b6f9-4419-ac4e-7dd2f6774589,Namespace:kube-system,Attempt:0,} returns sandbox id \"56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9\""
May 17 00:44:13.413741 kubelet[2064]: E0517 00:44:13.413697 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:13.418893 env[1303]: time="2025-05-17T00:44:13.418840071Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:44:13.434278 env[1303]: time="2025-05-17T00:44:13.434102088Z" level=info msg="StartContainer for \"65b7c4fcf952c60f9b8f3444e54d879400149b3a8322a156ce92003fa78738b9\" returns successfully"
May 17 00:44:13.597044 kubelet[2064]: E0517 00:44:13.597001 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:13.598294 kubelet[2064]: E0517 00:44:13.598063 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:14.045784 kubelet[2064]: E0517 00:44:14.045724 2064 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 17 00:44:14.046385 kubelet[2064]: E0517 00:44:14.045849 2064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/122b5d14-a08b-4e78-a8c0-5aadccfba353-clustermesh-secrets podName:122b5d14-a08b-4e78-a8c0-5aadccfba353 nodeName:}" failed. No retries permitted until 2025-05-17 00:44:14.545820142 +0000 UTC m=+7.310459279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/122b5d14-a08b-4e78-a8c0-5aadccfba353-clustermesh-secrets") pod "cilium-kvptf" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353") : failed to sync secret cache: timed out waiting for the condition
May 17 00:44:14.690716 kubelet[2064]: E0517 00:44:14.690659 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:14.693470 env[1303]: time="2025-05-17T00:44:14.693419247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvptf,Uid:122b5d14-a08b-4e78-a8c0-5aadccfba353,Namespace:kube-system,Attempt:0,}"
May 17 00:44:14.715168 env[1303]: time="2025-05-17T00:44:14.715041768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:44:14.715168 env[1303]: time="2025-05-17T00:44:14.715097220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:44:14.715168 env[1303]: time="2025-05-17T00:44:14.715108332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:44:14.715883 env[1303]: time="2025-05-17T00:44:14.715764210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2 pid=2388 runtime=io.containerd.runc.v2
May 17 00:44:14.776365 env[1303]: time="2025-05-17T00:44:14.776264267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvptf,Uid:122b5d14-a08b-4e78-a8c0-5aadccfba353,Namespace:kube-system,Attempt:0,} returns sandbox id \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\""
May 17 00:44:14.778229 kubelet[2064]: E0517 00:44:14.777446 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:15.069435 systemd[1]: run-containerd-runc-k8s.io-36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2-runc.OHS9kX.mount: Deactivated successfully.
May 17 00:44:15.756308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170511676.mount: Deactivated successfully.
May 17 00:44:16.774739 env[1303]: time="2025-05-17T00:44:16.774663338Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:44:16.776569 env[1303]: time="2025-05-17T00:44:16.776513855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:44:16.778375 env[1303]: time="2025-05-17T00:44:16.778305962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:44:16.778985 env[1303]: time="2025-05-17T00:44:16.778948374Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 00:44:16.781820 env[1303]: time="2025-05-17T00:44:16.781132595Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:44:16.786720 env[1303]: time="2025-05-17T00:44:16.786675799Z" level=info msg="CreateContainer within sandbox \"56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:44:16.802214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630524489.mount: Deactivated successfully.
May 17 00:44:16.812732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228079210.mount: Deactivated successfully.
May 17 00:44:16.816440 env[1303]: time="2025-05-17T00:44:16.816384848Z" level=info msg="CreateContainer within sandbox \"56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\"" May 17 00:44:16.818900 env[1303]: time="2025-05-17T00:44:16.818856852Z" level=info msg="StartContainer for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\"" May 17 00:44:16.899406 env[1303]: time="2025-05-17T00:44:16.899350848Z" level=info msg="StartContainer for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" returns successfully" May 17 00:44:17.430171 kubelet[2064]: E0517 00:44:17.430107 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:17.523690 kubelet[2064]: I0517 00:44:17.523619 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4w68k" podStartSLOduration=5.52359471 podStartE2EDuration="5.52359471s" podCreationTimestamp="2025-05-17 00:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:13.635693942 +0000 UTC m=+6.400333081" watchObservedRunningTime="2025-05-17 00:44:17.52359471 +0000 UTC m=+10.288233859" May 17 00:44:17.611068 kubelet[2064]: E0517 00:44:17.611029 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:17.611524 kubelet[2064]: E0517 00:44:17.611505 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" May 17 00:44:17.824184 kubelet[2064]: I0517 00:44:17.824090 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2xsw5" podStartSLOduration=2.459488452 podStartE2EDuration="5.824066547s" podCreationTimestamp="2025-05-17 00:44:12 +0000 UTC" firstStartedPulling="2025-05-17 00:44:13.416337886 +0000 UTC m=+6.180977025" lastFinishedPulling="2025-05-17 00:44:16.780915998 +0000 UTC m=+9.545555120" observedRunningTime="2025-05-17 00:44:17.7687594 +0000 UTC m=+10.533398543" watchObservedRunningTime="2025-05-17 00:44:17.824066547 +0000 UTC m=+10.588705688" May 17 00:44:17.859415 kubelet[2064]: E0517 00:44:17.859376 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:18.653057 kubelet[2064]: E0517 00:44:18.652780 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:18.653057 kubelet[2064]: E0517 00:44:18.652951 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:22.364735 update_engine[1289]: I0517 00:44:22.364624 1289 update_attempter.cc:509] Updating boot flags... May 17 00:44:22.750856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855842902.mount: Deactivated successfully. 
May 17 00:44:26.283514 env[1303]: time="2025-05-17T00:44:26.283447256Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:26.286006 env[1303]: time="2025-05-17T00:44:26.285954604Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:26.288160 env[1303]: time="2025-05-17T00:44:26.288119042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:44:26.289179 env[1303]: time="2025-05-17T00:44:26.289130764Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:44:26.297757 env[1303]: time="2025-05-17T00:44:26.297552497Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:44:26.318233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218774950.mount: Deactivated successfully. May 17 00:44:26.329103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974599346.mount: Deactivated successfully. 
May 17 00:44:26.331482 env[1303]: time="2025-05-17T00:44:26.331430625Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\"" May 17 00:44:26.334332 env[1303]: time="2025-05-17T00:44:26.334265716Z" level=info msg="StartContainer for \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\"" May 17 00:44:26.435865 env[1303]: time="2025-05-17T00:44:26.433502426Z" level=info msg="StartContainer for \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\" returns successfully" May 17 00:44:26.470064 env[1303]: time="2025-05-17T00:44:26.470015503Z" level=info msg="shim disconnected" id=fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867 May 17 00:44:26.470450 env[1303]: time="2025-05-17T00:44:26.470414495Z" level=warning msg="cleaning up after shim disconnected" id=fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867 namespace=k8s.io May 17 00:44:26.470605 env[1303]: time="2025-05-17T00:44:26.470582911Z" level=info msg="cleaning up dead shim" May 17 00:44:26.481823 env[1303]: time="2025-05-17T00:44:26.481772609Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2522 runtime=io.containerd.runc.v2\n" May 17 00:44:26.671602 kubelet[2064]: E0517 00:44:26.671512 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:26.681963 env[1303]: time="2025-05-17T00:44:26.681646953Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:44:26.702790 env[1303]: 
time="2025-05-17T00:44:26.702718441Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\"" May 17 00:44:26.703827 env[1303]: time="2025-05-17T00:44:26.703774554Z" level=info msg="StartContainer for \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\"" May 17 00:44:26.791388 env[1303]: time="2025-05-17T00:44:26.789232977Z" level=info msg="StartContainer for \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\" returns successfully" May 17 00:44:26.809625 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:44:26.810028 systemd[1]: Stopped systemd-sysctl.service. May 17 00:44:26.810193 systemd[1]: Stopping systemd-sysctl.service... May 17 00:44:26.814772 systemd[1]: Starting systemd-sysctl.service... May 17 00:44:26.839880 systemd[1]: Finished systemd-sysctl.service. May 17 00:44:26.859156 env[1303]: time="2025-05-17T00:44:26.859100091Z" level=info msg="shim disconnected" id=1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31 May 17 00:44:26.859563 env[1303]: time="2025-05-17T00:44:26.859538507Z" level=warning msg="cleaning up after shim disconnected" id=1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31 namespace=k8s.io May 17 00:44:26.859678 env[1303]: time="2025-05-17T00:44:26.859657957Z" level=info msg="cleaning up dead shim" May 17 00:44:26.871143 env[1303]: time="2025-05-17T00:44:26.871070412Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2587 runtime=io.containerd.runc.v2\n" May 17 00:44:27.313984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867-rootfs.mount: Deactivated successfully. 
May 17 00:44:27.677163 kubelet[2064]: E0517 00:44:27.677110 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:27.686350 env[1303]: time="2025-05-17T00:44:27.685406825Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:44:27.707984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361895775.mount: Deactivated successfully. May 17 00:44:27.719572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409092473.mount: Deactivated successfully. May 17 00:44:27.726102 env[1303]: time="2025-05-17T00:44:27.725999773Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\"" May 17 00:44:27.728519 env[1303]: time="2025-05-17T00:44:27.728464955Z" level=info msg="StartContainer for \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\"" May 17 00:44:27.822904 env[1303]: time="2025-05-17T00:44:27.822852139Z" level=info msg="StartContainer for \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\" returns successfully" May 17 00:44:27.866465 env[1303]: time="2025-05-17T00:44:27.866367508Z" level=info msg="shim disconnected" id=950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac May 17 00:44:27.866465 env[1303]: time="2025-05-17T00:44:27.866455317Z" level=warning msg="cleaning up after shim disconnected" id=950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac namespace=k8s.io May 17 00:44:27.866465 env[1303]: time="2025-05-17T00:44:27.866469243Z" level=info msg="cleaning up dead shim" May 17 00:44:27.880727 env[1303]: 
time="2025-05-17T00:44:27.880663269Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2645 runtime=io.containerd.runc.v2\n" May 17 00:44:28.682606 kubelet[2064]: E0517 00:44:28.682567 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:28.687915 env[1303]: time="2025-05-17T00:44:28.687853992Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:44:28.710215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386320655.mount: Deactivated successfully. May 17 00:44:28.723355 env[1303]: time="2025-05-17T00:44:28.723265844Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\"" May 17 00:44:28.725051 env[1303]: time="2025-05-17T00:44:28.724999665Z" level=info msg="StartContainer for \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\"" May 17 00:44:28.876938 env[1303]: time="2025-05-17T00:44:28.876872848Z" level=info msg="StartContainer for \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\" returns successfully" May 17 00:44:28.911281 env[1303]: time="2025-05-17T00:44:28.911218564Z" level=info msg="shim disconnected" id=9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621 May 17 00:44:28.911762 env[1303]: time="2025-05-17T00:44:28.911730627Z" level=warning msg="cleaning up after shim disconnected" id=9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621 namespace=k8s.io May 17 00:44:28.911882 env[1303]: 
time="2025-05-17T00:44:28.911862708Z" level=info msg="cleaning up dead shim" May 17 00:44:28.924923 env[1303]: time="2025-05-17T00:44:28.924858743Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2699 runtime=io.containerd.runc.v2\n" May 17 00:44:29.313951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621-rootfs.mount: Deactivated successfully. May 17 00:44:29.688997 kubelet[2064]: E0517 00:44:29.688959 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:29.704670 env[1303]: time="2025-05-17T00:44:29.699871507Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:44:29.720276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367022016.mount: Deactivated successfully. 
May 17 00:44:29.729863 env[1303]: time="2025-05-17T00:44:29.729557499Z" level=info msg="CreateContainer within sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\"" May 17 00:44:29.731377 env[1303]: time="2025-05-17T00:44:29.731292127Z" level=info msg="StartContainer for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\"" May 17 00:44:29.828731 env[1303]: time="2025-05-17T00:44:29.828667799Z" level=info msg="StartContainer for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" returns successfully" May 17 00:44:29.979053 kubelet[2064]: I0517 00:44:29.978154 2064 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:44:30.182274 kubelet[2064]: I0517 00:44:30.182200 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f103726b-abea-42f8-b760-cc89784fc310-config-volume\") pod \"coredns-7c65d6cfc9-tjv4f\" (UID: \"f103726b-abea-42f8-b760-cc89784fc310\") " pod="kube-system/coredns-7c65d6cfc9-tjv4f" May 17 00:44:30.182619 kubelet[2064]: I0517 00:44:30.182283 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpvq7\" (UniqueName: \"kubernetes.io/projected/0a1fdc44-67a1-4be5-b282-5af9d0bf997e-kube-api-access-mpvq7\") pod \"coredns-7c65d6cfc9-xsb8w\" (UID: \"0a1fdc44-67a1-4be5-b282-5af9d0bf997e\") " pod="kube-system/coredns-7c65d6cfc9-xsb8w" May 17 00:44:30.182619 kubelet[2064]: I0517 00:44:30.182348 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a1fdc44-67a1-4be5-b282-5af9d0bf997e-config-volume\") pod \"coredns-7c65d6cfc9-xsb8w\" (UID: 
\"0a1fdc44-67a1-4be5-b282-5af9d0bf997e\") " pod="kube-system/coredns-7c65d6cfc9-xsb8w" May 17 00:44:30.182619 kubelet[2064]: I0517 00:44:30.182378 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6xc\" (UniqueName: \"kubernetes.io/projected/f103726b-abea-42f8-b760-cc89784fc310-kube-api-access-tr6xc\") pod \"coredns-7c65d6cfc9-tjv4f\" (UID: \"f103726b-abea-42f8-b760-cc89784fc310\") " pod="kube-system/coredns-7c65d6cfc9-tjv4f" May 17 00:44:30.330336 kubelet[2064]: E0517 00:44:30.330273 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:30.331742 env[1303]: time="2025-05-17T00:44:30.331324827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tjv4f,Uid:f103726b-abea-42f8-b760-cc89784fc310,Namespace:kube-system,Attempt:0,}" May 17 00:44:30.357590 kubelet[2064]: E0517 00:44:30.357548 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:30.365002 env[1303]: time="2025-05-17T00:44:30.364952077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xsb8w,Uid:0a1fdc44-67a1-4be5-b282-5af9d0bf997e,Namespace:kube-system,Attempt:0,}" May 17 00:44:30.694729 kubelet[2064]: E0517 00:44:30.694585 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:31.696957 kubelet[2064]: E0517 00:44:31.696923 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:32.307411 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:44:32.306550 systemd-networkd[1071]: cilium_host: Link UP May 17 00:44:32.306742 systemd-networkd[1071]: cilium_net: Link UP May 17 00:44:32.306748 systemd-networkd[1071]: cilium_net: Gained carrier May 17 00:44:32.312928 systemd-networkd[1071]: cilium_host: Gained carrier May 17 00:44:32.434132 systemd-networkd[1071]: cilium_net: Gained IPv6LL May 17 00:44:32.508481 systemd-networkd[1071]: cilium_vxlan: Link UP May 17 00:44:32.508492 systemd-networkd[1071]: cilium_vxlan: Gained carrier May 17 00:44:32.699537 kubelet[2064]: E0517 00:44:32.699491 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:33.007712 kernel: NET: Registered PF_ALG protocol family May 17 00:44:33.185556 systemd-networkd[1071]: cilium_host: Gained IPv6LL May 17 00:44:33.560560 systemd-networkd[1071]: cilium_vxlan: Gained IPv6LL May 17 00:44:34.057080 systemd-networkd[1071]: lxc_health: Link UP May 17 00:44:34.074909 systemd-networkd[1071]: lxc_health: Gained carrier May 17 00:44:34.075448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:44:34.397631 systemd-networkd[1071]: lxc84cd68d5d457: Link UP May 17 00:44:34.423424 kernel: eth0: renamed from tmp3273e May 17 00:44:34.434076 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc84cd68d5d457: link becomes ready May 17 00:44:34.429518 systemd-networkd[1071]: lxc84cd68d5d457: Gained carrier May 17 00:44:34.449734 systemd-networkd[1071]: lxcd4880a4df52b: Link UP May 17 00:44:34.456435 kernel: eth0: renamed from tmpc4cdd May 17 00:44:34.473956 systemd-networkd[1071]: lxcd4880a4df52b: Gained carrier May 17 00:44:34.474872 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd4880a4df52b: link becomes ready May 17 00:44:34.697706 kubelet[2064]: E0517 00:44:34.697289 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:34.704079 kubelet[2064]: E0517 00:44:34.704022 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:34.731464 kubelet[2064]: I0517 00:44:34.731376 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvptf" podStartSLOduration=11.219517568 podStartE2EDuration="22.731357102s" podCreationTimestamp="2025-05-17 00:44:12 +0000 UTC" firstStartedPulling="2025-05-17 00:44:14.779643336 +0000 UTC m=+7.544282472" lastFinishedPulling="2025-05-17 00:44:26.29148288 +0000 UTC m=+19.056122006" observedRunningTime="2025-05-17 00:44:30.726142672 +0000 UTC m=+23.490781831" watchObservedRunningTime="2025-05-17 00:44:34.731357102 +0000 UTC m=+27.495996246" May 17 00:44:35.706697 kubelet[2064]: E0517 00:44:35.706644 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:35.992624 systemd-networkd[1071]: lxc_health: Gained IPv6LL May 17 00:44:35.993107 systemd-networkd[1071]: lxcd4880a4df52b: Gained IPv6LL May 17 00:44:36.184555 systemd-networkd[1071]: lxc84cd68d5d457: Gained IPv6LL May 17 00:44:40.457992 env[1303]: time="2025-05-17T00:44:40.457873488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:40.457992 env[1303]: time="2025-05-17T00:44:40.457931035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:40.457992 env[1303]: time="2025-05-17T00:44:40.457947547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:40.459196 env[1303]: time="2025-05-17T00:44:40.459113466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4cdd3980929b9f9ef86e03707d5d841a5c10915c0d1e8a02c52a3ab7fd7c456 pid=3246 runtime=io.containerd.runc.v2 May 17 00:44:40.568361 env[1303]: time="2025-05-17T00:44:40.563438013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:40.568361 env[1303]: time="2025-05-17T00:44:40.563563492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:40.568361 env[1303]: time="2025-05-17T00:44:40.563579567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:40.578695 env[1303]: time="2025-05-17T00:44:40.571706426Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3273e6b07cb731bb77044d178033bab110641902b35baee69d733eeccdfe0df3 pid=3285 runtime=io.containerd.runc.v2 May 17 00:44:40.615225 env[1303]: time="2025-05-17T00:44:40.615155717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xsb8w,Uid:0a1fdc44-67a1-4be5-b282-5af9d0bf997e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4cdd3980929b9f9ef86e03707d5d841a5c10915c0d1e8a02c52a3ab7fd7c456\"" May 17 00:44:40.620940 kubelet[2064]: E0517 00:44:40.618375 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:40.629785 env[1303]: time="2025-05-17T00:44:40.629623823Z" level=info msg="CreateContainer within sandbox \"c4cdd3980929b9f9ef86e03707d5d841a5c10915c0d1e8a02c52a3ab7fd7c456\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:44:40.660241 systemd[1]: run-containerd-runc-k8s.io-3273e6b07cb731bb77044d178033bab110641902b35baee69d733eeccdfe0df3-runc.iIO6mp.mount: Deactivated successfully. 
May 17 00:44:40.696535 env[1303]: time="2025-05-17T00:44:40.696437726Z" level=info msg="CreateContainer within sandbox \"c4cdd3980929b9f9ef86e03707d5d841a5c10915c0d1e8a02c52a3ab7fd7c456\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9109fc98c09ae95d8643a94fe215112867e1f44b53eb968b3d089e5b5868cfbf\"" May 17 00:44:40.698072 env[1303]: time="2025-05-17T00:44:40.698023142Z" level=info msg="StartContainer for \"9109fc98c09ae95d8643a94fe215112867e1f44b53eb968b3d089e5b5868cfbf\"" May 17 00:44:40.770100 env[1303]: time="2025-05-17T00:44:40.769175651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tjv4f,Uid:f103726b-abea-42f8-b760-cc89784fc310,Namespace:kube-system,Attempt:0,} returns sandbox id \"3273e6b07cb731bb77044d178033bab110641902b35baee69d733eeccdfe0df3\"" May 17 00:44:40.771701 kubelet[2064]: E0517 00:44:40.771435 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:40.776867 env[1303]: time="2025-05-17T00:44:40.776809534Z" level=info msg="CreateContainer within sandbox \"3273e6b07cb731bb77044d178033bab110641902b35baee69d733eeccdfe0df3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:44:40.800265 env[1303]: time="2025-05-17T00:44:40.800196583Z" level=info msg="CreateContainer within sandbox \"3273e6b07cb731bb77044d178033bab110641902b35baee69d733eeccdfe0df3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"243fb7c2351159cb91f23281344e70290e0e9808d0d5225cc73e1fefc6400f19\"" May 17 00:44:40.801718 env[1303]: time="2025-05-17T00:44:40.801667152Z" level=info msg="StartContainer for \"243fb7c2351159cb91f23281344e70290e0e9808d0d5225cc73e1fefc6400f19\"" May 17 00:44:40.856727 env[1303]: time="2025-05-17T00:44:40.855437072Z" level=info msg="StartContainer for \"9109fc98c09ae95d8643a94fe215112867e1f44b53eb968b3d089e5b5868cfbf\" 
returns successfully" May 17 00:44:40.906016 env[1303]: time="2025-05-17T00:44:40.905798644Z" level=info msg="StartContainer for \"243fb7c2351159cb91f23281344e70290e0e9808d0d5225cc73e1fefc6400f19\" returns successfully" May 17 00:44:41.743618 kubelet[2064]: E0517 00:44:41.742567 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:41.751257 kubelet[2064]: E0517 00:44:41.751220 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:41.766194 kubelet[2064]: I0517 00:44:41.766110 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xsb8w" podStartSLOduration=29.766084469 podStartE2EDuration="29.766084469s" podCreationTimestamp="2025-05-17 00:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:41.762296704 +0000 UTC m=+34.526935849" watchObservedRunningTime="2025-05-17 00:44:41.766084469 +0000 UTC m=+34.530723613" May 17 00:44:41.801018 kubelet[2064]: I0517 00:44:41.800935 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tjv4f" podStartSLOduration=29.800906528 podStartE2EDuration="29.800906528s" podCreationTimestamp="2025-05-17 00:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:41.799570884 +0000 UTC m=+34.564210040" watchObservedRunningTime="2025-05-17 00:44:41.800906528 +0000 UTC m=+34.565545673" May 17 00:44:42.753238 kubelet[2064]: E0517 00:44:42.753200 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:42.754091 kubelet[2064]: E0517 00:44:42.754054 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:43.755179 kubelet[2064]: E0517 00:44:43.755138 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:57.658403 systemd[1]: Started sshd@5-137.184.126.228:22-147.75.109.163:55604.service. May 17 00:44:57.726040 sshd[3408]: Accepted publickey for core from 147.75.109.163 port 55604 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:57.728978 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:57.739291 systemd[1]: Started session-6.scope. May 17 00:44:57.739746 systemd-logind[1288]: New session 6 of user core. May 17 00:44:58.017580 sshd[3408]: pam_unix(sshd:session): session closed for user core May 17 00:44:58.022505 systemd[1]: sshd@5-137.184.126.228:22-147.75.109.163:55604.service: Deactivated successfully. May 17 00:44:58.024678 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:44:58.025428 systemd-logind[1288]: Session 6 logged out. Waiting for processes to exit. May 17 00:44:58.026845 systemd-logind[1288]: Removed session 6. May 17 00:45:03.031473 systemd[1]: Started sshd@6-137.184.126.228:22-147.75.109.163:35934.service. May 17 00:45:03.158440 sshd[3421]: Accepted publickey for core from 147.75.109.163 port 35934 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:45:03.163878 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:45:03.187166 systemd-logind[1288]: New session 7 of user core. 
May 17 00:45:03.189827 systemd[1]: Started session-7.scope.
May 17 00:45:03.525523 sshd[3421]: pam_unix(sshd:session): session closed for user core
May 17 00:45:03.541511 systemd[1]: sshd@6-137.184.126.228:22-147.75.109.163:35934.service: Deactivated successfully.
May 17 00:45:03.543695 systemd-logind[1288]: Session 7 logged out. Waiting for processes to exit.
May 17 00:45:03.544043 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:45:03.548016 systemd-logind[1288]: Removed session 7.
May 17 00:45:08.529770 systemd[1]: Started sshd@7-137.184.126.228:22-147.75.109.163:58544.service.
May 17 00:45:08.620624 sshd[3436]: Accepted publickey for core from 147.75.109.163 port 58544 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:08.625036 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:08.636416 systemd-logind[1288]: New session 8 of user core.
May 17 00:45:08.641497 systemd[1]: Started session-8.scope.
May 17 00:45:08.851465 sshd[3436]: pam_unix(sshd:session): session closed for user core
May 17 00:45:08.858525 systemd[1]: sshd@7-137.184.126.228:22-147.75.109.163:58544.service: Deactivated successfully.
May 17 00:45:08.860909 systemd-logind[1288]: Session 8 logged out. Waiting for processes to exit.
May 17 00:45:08.861055 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:45:08.863885 systemd-logind[1288]: Removed session 8.
May 17 00:45:13.857680 systemd[1]: Started sshd@8-137.184.126.228:22-147.75.109.163:58552.service.
May 17 00:45:13.907394 sshd[3452]: Accepted publickey for core from 147.75.109.163 port 58552 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:13.909881 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:13.919740 systemd-logind[1288]: New session 9 of user core.
May 17 00:45:13.921990 systemd[1]: Started session-9.scope.
May 17 00:45:14.102641 sshd[3452]: pam_unix(sshd:session): session closed for user core
May 17 00:45:14.107524 systemd-logind[1288]: Session 9 logged out. Waiting for processes to exit.
May 17 00:45:14.108220 systemd[1]: sshd@8-137.184.126.228:22-147.75.109.163:58552.service: Deactivated successfully.
May 17 00:45:14.109778 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:45:14.110493 systemd-logind[1288]: Removed session 9.
May 17 00:45:19.109725 systemd[1]: Started sshd@9-137.184.126.228:22-147.75.109.163:42332.service.
May 17 00:45:19.160338 sshd[3465]: Accepted publickey for core from 147.75.109.163 port 42332 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:19.163147 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:19.180427 systemd[1]: Started session-10.scope.
May 17 00:45:19.181665 systemd-logind[1288]: New session 10 of user core.
May 17 00:45:19.357416 sshd[3465]: pam_unix(sshd:session): session closed for user core
May 17 00:45:19.365862 systemd[1]: Started sshd@10-137.184.126.228:22-147.75.109.163:42338.service.
May 17 00:45:19.367457 systemd[1]: sshd@9-137.184.126.228:22-147.75.109.163:42332.service: Deactivated successfully.
May 17 00:45:19.372572 systemd-logind[1288]: Session 10 logged out. Waiting for processes to exit.
May 17 00:45:19.374832 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:45:19.377085 systemd-logind[1288]: Removed session 10.
May 17 00:45:19.426294 sshd[3476]: Accepted publickey for core from 147.75.109.163 port 42338 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:19.428743 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:19.436360 systemd[1]: Started session-11.scope.
May 17 00:45:19.437010 systemd-logind[1288]: New session 11 of user core.
May 17 00:45:19.530421 kubelet[2064]: E0517 00:45:19.530365 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:19.680558 systemd[1]: Started sshd@11-137.184.126.228:22-147.75.109.163:42354.service.
May 17 00:45:19.684647 sshd[3476]: pam_unix(sshd:session): session closed for user core
May 17 00:45:19.714978 systemd[1]: sshd@10-137.184.126.228:22-147.75.109.163:42338.service: Deactivated successfully.
May 17 00:45:19.716522 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:45:19.718172 systemd-logind[1288]: Session 11 logged out. Waiting for processes to exit.
May 17 00:45:19.727506 systemd-logind[1288]: Removed session 11.
May 17 00:45:19.783447 sshd[3487]: Accepted publickey for core from 147.75.109.163 port 42354 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:19.786082 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:19.794355 systemd[1]: Started session-12.scope.
May 17 00:45:19.794913 systemd-logind[1288]: New session 12 of user core.
May 17 00:45:19.983562 sshd[3487]: pam_unix(sshd:session): session closed for user core
May 17 00:45:19.988290 systemd-logind[1288]: Session 12 logged out. Waiting for processes to exit.
May 17 00:45:19.990712 systemd[1]: sshd@11-137.184.126.228:22-147.75.109.163:42354.service: Deactivated successfully.
May 17 00:45:19.991749 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:45:19.993866 systemd-logind[1288]: Removed session 12.
May 17 00:45:24.990863 systemd[1]: Started sshd@12-137.184.126.228:22-147.75.109.163:42368.service.
May 17 00:45:25.042755 sshd[3503]: Accepted publickey for core from 147.75.109.163 port 42368 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:25.045870 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:25.052920 systemd[1]: Started session-13.scope.
May 17 00:45:25.054527 systemd-logind[1288]: New session 13 of user core.
May 17 00:45:25.214106 sshd[3503]: pam_unix(sshd:session): session closed for user core
May 17 00:45:25.218729 systemd[1]: sshd@12-137.184.126.228:22-147.75.109.163:42368.service: Deactivated successfully.
May 17 00:45:25.220933 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:45:25.221723 systemd-logind[1288]: Session 13 logged out. Waiting for processes to exit.
May 17 00:45:25.223708 systemd-logind[1288]: Removed session 13.
May 17 00:45:28.529126 kubelet[2064]: E0517 00:45:28.529077 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:29.528938 kubelet[2064]: E0517 00:45:29.528885 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:30.221403 systemd[1]: Started sshd@13-137.184.126.228:22-147.75.109.163:44464.service.
May 17 00:45:30.273428 sshd[3516]: Accepted publickey for core from 147.75.109.163 port 44464 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:30.275923 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:30.284850 systemd[1]: Started session-14.scope.
May 17 00:45:30.285611 systemd-logind[1288]: New session 14 of user core.
May 17 00:45:30.438658 sshd[3516]: pam_unix(sshd:session): session closed for user core
May 17 00:45:30.443401 systemd[1]: sshd@13-137.184.126.228:22-147.75.109.163:44464.service: Deactivated successfully.
May 17 00:45:30.445684 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:45:30.446451 systemd-logind[1288]: Session 14 logged out. Waiting for processes to exit.
May 17 00:45:30.447886 systemd-logind[1288]: Removed session 14.
May 17 00:45:32.529502 kubelet[2064]: E0517 00:45:32.529439 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:35.445289 systemd[1]: Started sshd@14-137.184.126.228:22-147.75.109.163:44476.service.
May 17 00:45:35.505228 sshd[3529]: Accepted publickey for core from 147.75.109.163 port 44476 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:35.512453 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:35.523258 systemd-logind[1288]: New session 15 of user core.
May 17 00:45:35.523874 systemd[1]: Started session-15.scope.
May 17 00:45:35.683854 sshd[3529]: pam_unix(sshd:session): session closed for user core
May 17 00:45:35.691015 systemd[1]: Started sshd@15-137.184.126.228:22-147.75.109.163:44484.service.
May 17 00:45:35.692305 systemd[1]: sshd@14-137.184.126.228:22-147.75.109.163:44476.service: Deactivated successfully.
May 17 00:45:35.694886 systemd-logind[1288]: Session 15 logged out. Waiting for processes to exit.
May 17 00:45:35.696270 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:45:35.698442 systemd-logind[1288]: Removed session 15.
May 17 00:45:35.761536 sshd[3541]: Accepted publickey for core from 147.75.109.163 port 44484 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:35.764867 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:35.770893 systemd-logind[1288]: New session 16 of user core.
May 17 00:45:35.772560 systemd[1]: Started session-16.scope.
May 17 00:45:36.158397 sshd[3541]: pam_unix(sshd:session): session closed for user core
May 17 00:45:36.164570 systemd[1]: Started sshd@16-137.184.126.228:22-147.75.109.163:44496.service.
May 17 00:45:36.174062 systemd[1]: sshd@15-137.184.126.228:22-147.75.109.163:44484.service: Deactivated successfully.
May 17 00:45:36.176251 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:45:36.176265 systemd-logind[1288]: Session 16 logged out. Waiting for processes to exit.
May 17 00:45:36.177791 systemd-logind[1288]: Removed session 16.
May 17 00:45:36.226541 sshd[3551]: Accepted publickey for core from 147.75.109.163 port 44496 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:36.229548 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:36.238086 systemd-logind[1288]: New session 17 of user core.
May 17 00:45:36.238391 systemd[1]: Started session-17.scope.
May 17 00:45:38.277781 sshd[3551]: pam_unix(sshd:session): session closed for user core
May 17 00:45:38.282298 systemd[1]: Started sshd@17-137.184.126.228:22-147.75.109.163:47844.service.
May 17 00:45:38.291015 systemd[1]: sshd@16-137.184.126.228:22-147.75.109.163:44496.service: Deactivated successfully.
May 17 00:45:38.294277 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:45:38.297127 systemd-logind[1288]: Session 17 logged out. Waiting for processes to exit.
May 17 00:45:38.298914 systemd-logind[1288]: Removed session 17.
May 17 00:45:38.350920 sshd[3570]: Accepted publickey for core from 147.75.109.163 port 47844 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:38.354472 sshd[3570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:38.364154 systemd[1]: Started session-18.scope.
May 17 00:45:38.364192 systemd-logind[1288]: New session 18 of user core.
May 17 00:45:38.747581 sshd[3570]: pam_unix(sshd:session): session closed for user core
May 17 00:45:38.755439 systemd[1]: Started sshd@18-137.184.126.228:22-147.75.109.163:47852.service.
May 17 00:45:38.768891 systemd[1]: sshd@17-137.184.126.228:22-147.75.109.163:47844.service: Deactivated successfully.
May 17 00:45:38.770222 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:45:38.773678 systemd-logind[1288]: Session 18 logged out. Waiting for processes to exit.
May 17 00:45:38.775006 systemd-logind[1288]: Removed session 18.
May 17 00:45:38.822207 sshd[3583]: Accepted publickey for core from 147.75.109.163 port 47852 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:38.824572 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:38.832367 systemd[1]: Started session-19.scope.
May 17 00:45:38.834087 systemd-logind[1288]: New session 19 of user core.
May 17 00:45:38.985764 sshd[3583]: pam_unix(sshd:session): session closed for user core
May 17 00:45:38.989735 systemd[1]: sshd@18-137.184.126.228:22-147.75.109.163:47852.service: Deactivated successfully.
May 17 00:45:38.990672 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:45:38.991365 systemd-logind[1288]: Session 19 logged out. Waiting for processes to exit.
May 17 00:45:38.992512 systemd-logind[1288]: Removed session 19.
May 17 00:45:43.992463 systemd[1]: Started sshd@19-137.184.126.228:22-147.75.109.163:47860.service.
May 17 00:45:44.044960 sshd[3600]: Accepted publickey for core from 147.75.109.163 port 47860 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:44.047508 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:44.053726 systemd[1]: Started session-20.scope.
May 17 00:45:44.054174 systemd-logind[1288]: New session 20 of user core.
May 17 00:45:44.213603 sshd[3600]: pam_unix(sshd:session): session closed for user core
May 17 00:45:44.217971 systemd[1]: sshd@19-137.184.126.228:22-147.75.109.163:47860.service: Deactivated successfully.
May 17 00:45:44.219615 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:45:44.220132 systemd-logind[1288]: Session 20 logged out. Waiting for processes to exit.
May 17 00:45:44.221677 systemd-logind[1288]: Removed session 20.
May 17 00:45:48.528871 kubelet[2064]: E0517 00:45:48.528819 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:49.218465 systemd[1]: Started sshd@20-137.184.126.228:22-147.75.109.163:33224.service.
May 17 00:45:49.265560 sshd[3615]: Accepted publickey for core from 147.75.109.163 port 33224 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:49.268004 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:49.274402 systemd-logind[1288]: New session 21 of user core.
May 17 00:45:49.275216 systemd[1]: Started session-21.scope.
May 17 00:45:49.411499 sshd[3615]: pam_unix(sshd:session): session closed for user core
May 17 00:45:49.416216 systemd[1]: sshd@20-137.184.126.228:22-147.75.109.163:33224.service: Deactivated successfully.
May 17 00:45:49.418567 systemd-logind[1288]: Session 21 logged out. Waiting for processes to exit.
May 17 00:45:49.419401 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:45:49.422154 systemd-logind[1288]: Removed session 21.
May 17 00:45:50.529248 kubelet[2064]: E0517 00:45:50.529191 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:54.418073 systemd[1]: Started sshd@21-137.184.126.228:22-147.75.109.163:33232.service.
May 17 00:45:54.463505 sshd[3628]: Accepted publickey for core from 147.75.109.163 port 33232 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:54.465936 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:54.472274 systemd[1]: Started session-22.scope.
May 17 00:45:54.472824 systemd-logind[1288]: New session 22 of user core.
May 17 00:45:54.627752 sshd[3628]: pam_unix(sshd:session): session closed for user core
May 17 00:45:54.631628 systemd[1]: sshd@21-137.184.126.228:22-147.75.109.163:33232.service: Deactivated successfully.
May 17 00:45:54.633446 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:45:54.633492 systemd-logind[1288]: Session 22 logged out. Waiting for processes to exit.
May 17 00:45:54.634995 systemd-logind[1288]: Removed session 22.
May 17 00:45:56.529682 kubelet[2064]: E0517 00:45:56.529598 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:45:59.633780 systemd[1]: Started sshd@22-137.184.126.228:22-147.75.109.163:60116.service.
May 17 00:45:59.683089 sshd[3640]: Accepted publickey for core from 147.75.109.163 port 60116 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:45:59.684421 sshd[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:45:59.690865 systemd-logind[1288]: New session 23 of user core.
May 17 00:45:59.691464 systemd[1]: Started session-23.scope.
May 17 00:45:59.841576 sshd[3640]: pam_unix(sshd:session): session closed for user core
May 17 00:45:59.846012 systemd[1]: sshd@22-137.184.126.228:22-147.75.109.163:60116.service: Deactivated successfully.
May 17 00:45:59.847938 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:45:59.848568 systemd-logind[1288]: Session 23 logged out. Waiting for processes to exit.
May 17 00:45:59.850111 systemd-logind[1288]: Removed session 23.
May 17 00:46:04.847777 systemd[1]: Started sshd@23-137.184.126.228:22-147.75.109.163:60130.service.
May 17 00:46:04.900098 sshd[3653]: Accepted publickey for core from 147.75.109.163 port 60130 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:46:04.903376 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:46:04.910394 systemd-logind[1288]: New session 24 of user core.
May 17 00:46:04.911476 systemd[1]: Started session-24.scope.
May 17 00:46:05.057509 sshd[3653]: pam_unix(sshd:session): session closed for user core
May 17 00:46:05.061198 systemd-logind[1288]: Session 24 logged out. Waiting for processes to exit.
May 17 00:46:05.061661 systemd[1]: sshd@23-137.184.126.228:22-147.75.109.163:60130.service: Deactivated successfully.
May 17 00:46:05.062762 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:46:05.063366 systemd-logind[1288]: Removed session 24.
May 17 00:46:05.528456 kubelet[2064]: E0517 00:46:05.528399 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:10.063249 systemd[1]: Started sshd@24-137.184.126.228:22-147.75.109.163:55792.service.
May 17 00:46:10.110031 sshd[3668]: Accepted publickey for core from 147.75.109.163 port 55792 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:46:10.113447 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:46:10.120899 systemd[1]: Started session-25.scope.
May 17 00:46:10.121465 systemd-logind[1288]: New session 25 of user core.
May 17 00:46:10.259959 sshd[3668]: pam_unix(sshd:session): session closed for user core
May 17 00:46:10.265121 systemd[1]: Started sshd@25-137.184.126.228:22-147.75.109.163:55806.service.
May 17 00:46:10.274211 systemd[1]: sshd@24-137.184.126.228:22-147.75.109.163:55792.service: Deactivated successfully.
May 17 00:46:10.275156 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:46:10.275666 systemd-logind[1288]: Session 25 logged out. Waiting for processes to exit.
May 17 00:46:10.277195 systemd-logind[1288]: Removed session 25.
May 17 00:46:10.317091 sshd[3679]: Accepted publickey for core from 147.75.109.163 port 55806 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:46:10.320253 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:46:10.327234 systemd[1]: Started session-26.scope.
May 17 00:46:10.327834 systemd-logind[1288]: New session 26 of user core.
May 17 00:46:11.928036 env[1303]: time="2025-05-17T00:46:11.926443183Z" level=info msg="StopContainer for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" with timeout 30 (s)"
May 17 00:46:11.935360 env[1303]: time="2025-05-17T00:46:11.930428589Z" level=info msg="Stop container \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" with signal terminated"
May 17 00:46:11.966999 systemd[1]: run-containerd-runc-k8s.io-26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863-runc.rkFh2g.mount: Deactivated successfully.
May 17 00:46:11.988002 env[1303]: time="2025-05-17T00:46:11.987912425Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:46:12.024126 env[1303]: time="2025-05-17T00:46:12.024065613Z" level=info msg="StopContainer for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" with timeout 2 (s)"
May 17 00:46:12.024977 env[1303]: time="2025-05-17T00:46:12.024913450Z" level=info msg="Stop container \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" with signal terminated"
May 17 00:46:12.042755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f-rootfs.mount: Deactivated successfully.
May 17 00:46:12.053156 systemd-networkd[1071]: lxc_health: Link DOWN
May 17 00:46:12.053166 systemd-networkd[1071]: lxc_health: Lost carrier
May 17 00:46:12.055860 env[1303]: time="2025-05-17T00:46:12.055795637Z" level=info msg="shim disconnected" id=3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f
May 17 00:46:12.056060 env[1303]: time="2025-05-17T00:46:12.055856621Z" level=warning msg="cleaning up after shim disconnected" id=3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f namespace=k8s.io
May 17 00:46:12.056060 env[1303]: time="2025-05-17T00:46:12.055874764Z" level=info msg="cleaning up dead shim"
May 17 00:46:12.088234 env[1303]: time="2025-05-17T00:46:12.086447064Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3734 runtime=io.containerd.runc.v2\n"
May 17 00:46:12.089487 env[1303]: time="2025-05-17T00:46:12.088681436Z" level=info msg="StopContainer for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" returns successfully"
May 17 00:46:12.091116 env[1303]: time="2025-05-17T00:46:12.090951227Z" level=info msg="StopPodSandbox for \"56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9\""
May 17 00:46:12.091116 env[1303]: time="2025-05-17T00:46:12.091046882Z" level=info msg="Container to stop \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:12.095854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9-shm.mount: Deactivated successfully.
May 17 00:46:12.139543 env[1303]: time="2025-05-17T00:46:12.139459867Z" level=info msg="shim disconnected" id=26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863
May 17 00:46:12.139543 env[1303]: time="2025-05-17T00:46:12.139531418Z" level=warning msg="cleaning up after shim disconnected" id=26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863 namespace=k8s.io
May 17 00:46:12.139543 env[1303]: time="2025-05-17T00:46:12.139547521Z" level=info msg="cleaning up dead shim"
May 17 00:46:12.159546 env[1303]: time="2025-05-17T00:46:12.159427547Z" level=info msg="shim disconnected" id=56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9
May 17 00:46:12.159546 env[1303]: time="2025-05-17T00:46:12.159533254Z" level=warning msg="cleaning up after shim disconnected" id=56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9 namespace=k8s.io
May 17 00:46:12.159546 env[1303]: time="2025-05-17T00:46:12.159547562Z" level=info msg="cleaning up dead shim"
May 17 00:46:12.169283 env[1303]: time="2025-05-17T00:46:12.169216093Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3777 runtime=io.containerd.runc.v2\n"
May 17 00:46:12.171550 env[1303]: time="2025-05-17T00:46:12.171468326Z" level=info msg="StopContainer for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" returns successfully"
May 17 00:46:12.172181 env[1303]: time="2025-05-17T00:46:12.172132680Z" level=info msg="StopPodSandbox for \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\""
May 17 00:46:12.172407 env[1303]: time="2025-05-17T00:46:12.172219918Z" level=info msg="Container to stop \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:12.172407 env[1303]: time="2025-05-17T00:46:12.172247695Z" level=info msg="Container to stop \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:12.172407 env[1303]: time="2025-05-17T00:46:12.172265489Z" level=info msg="Container to stop \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:12.172407 env[1303]: time="2025-05-17T00:46:12.172283211Z" level=info msg="Container to stop \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:12.172407 env[1303]: time="2025-05-17T00:46:12.172299822Z" level=info msg="Container to stop \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:12.197167 env[1303]: time="2025-05-17T00:46:12.195479583Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3798 runtime=io.containerd.runc.v2\n"
May 17 00:46:12.197985 env[1303]: time="2025-05-17T00:46:12.197918278Z" level=info msg="TearDown network for sandbox \"56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9\" successfully"
May 17 00:46:12.197985 env[1303]: time="2025-05-17T00:46:12.197960546Z" level=info msg="StopPodSandbox for \"56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9\" returns successfully"
May 17 00:46:12.232085 kubelet[2064]: I0517 00:46:12.231692 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcf8x\" (UniqueName: \"kubernetes.io/projected/c550b965-b6f9-4419-ac4e-7dd2f6774589-kube-api-access-gcf8x\") pod \"c550b965-b6f9-4419-ac4e-7dd2f6774589\" (UID: \"c550b965-b6f9-4419-ac4e-7dd2f6774589\") "
May 17 00:46:12.232085 kubelet[2064]: I0517 00:46:12.231784 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c550b965-b6f9-4419-ac4e-7dd2f6774589-cilium-config-path\") pod \"c550b965-b6f9-4419-ac4e-7dd2f6774589\" (UID: \"c550b965-b6f9-4419-ac4e-7dd2f6774589\") "
May 17 00:46:12.256055 kubelet[2064]: I0517 00:46:12.255488 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c550b965-b6f9-4419-ac4e-7dd2f6774589-kube-api-access-gcf8x" (OuterVolumeSpecName: "kube-api-access-gcf8x") pod "c550b965-b6f9-4419-ac4e-7dd2f6774589" (UID: "c550b965-b6f9-4419-ac4e-7dd2f6774589"). InnerVolumeSpecName "kube-api-access-gcf8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:46:12.256507 kubelet[2064]: I0517 00:46:12.252176 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c550b965-b6f9-4419-ac4e-7dd2f6774589-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c550b965-b6f9-4419-ac4e-7dd2f6774589" (UID: "c550b965-b6f9-4419-ac4e-7dd2f6774589"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:46:12.257587 env[1303]: time="2025-05-17T00:46:12.257525621Z" level=info msg="shim disconnected" id=36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2
May 17 00:46:12.258140 env[1303]: time="2025-05-17T00:46:12.257905395Z" level=warning msg="cleaning up after shim disconnected" id=36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2 namespace=k8s.io
May 17 00:46:12.258140 env[1303]: time="2025-05-17T00:46:12.257940263Z" level=info msg="cleaning up dead shim"
May 17 00:46:12.270770 env[1303]: time="2025-05-17T00:46:12.270712228Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3834 runtime=io.containerd.runc.v2\n"
May 17 00:46:12.271447 env[1303]: time="2025-05-17T00:46:12.271403881Z" level=info msg="TearDown network for sandbox \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" successfully"
May 17 00:46:12.271630 env[1303]: time="2025-05-17T00:46:12.271601806Z" level=info msg="StopPodSandbox for \"36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2\" returns successfully"
May 17 00:46:12.332810 kubelet[2064]: I0517 00:46:12.332750 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-net\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.333104 kubelet[2064]: I0517 00:46:12.333078 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-bpf-maps\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.333256 kubelet[2064]: I0517 00:46:12.333237 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-lib-modules\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.333412 kubelet[2064]: I0517 00:46:12.333392 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-etc-cni-netd\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.333532 kubelet[2064]: I0517 00:46:12.332870 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:12.335160 kubelet[2064]: I0517 00:46:12.333513 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/122b5d14-a08b-4e78-a8c0-5aadccfba353-clustermesh-secrets\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.335301 kubelet[2064]: I0517 00:46:12.335190 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cni-path\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.335301 kubelet[2064]: I0517 00:46:12.335214 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-hostproc\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.335301 kubelet[2064]: I0517 00:46:12.335235 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-cgroup\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.336105 kubelet[2064]: I0517 00:46:12.336075 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-hubble-tls\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.336214 kubelet[2064]: I0517 00:46:12.336116 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-run\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.336214 kubelet[2064]: I0517 00:46:12.336137 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckfpr\" (UniqueName: \"kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-kube-api-access-ckfpr\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.336214 kubelet[2064]: I0517 00:46:12.336155 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-kernel\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.336214 kubelet[2064]: I0517 00:46:12.336173 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-config-path\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.336214 kubelet[2064]: I0517 00:46:12.336196 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-xtables-lock\") pod \"122b5d14-a08b-4e78-a8c0-5aadccfba353\" (UID: \"122b5d14-a08b-4e78-a8c0-5aadccfba353\") "
May 17 00:46:12.337009 kubelet[2064]: I0517 00:46:12.333116 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:12.337009 kubelet[2064]: I0517 00:46:12.333280 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:12.337009 kubelet[2064]: I0517 00:46:12.333424 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.337452 kubelet[2064]: I0517 00:46:12.337422 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.338214 kubelet[2064]: I0517 00:46:12.338177 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cni-path" (OuterVolumeSpecName: "cni-path") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.338306 kubelet[2064]: I0517 00:46:12.338226 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-hostproc" (OuterVolumeSpecName: "hostproc") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.338306 kubelet[2064]: I0517 00:46:12.338243 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.338848 kubelet[2064]: I0517 00:46:12.338819 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.338914 kubelet[2064]: I0517 00:46:12.338850 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:46:12.338946 kubelet[2064]: I0517 00:46:12.338935 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-cgroup\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.338994 kubelet[2064]: I0517 00:46:12.338949 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-run\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.338994 kubelet[2064]: I0517 00:46:12.338960 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c550b965-b6f9-4419-ac4e-7dd2f6774589-cilium-config-path\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.338994 kubelet[2064]: I0517 00:46:12.338970 2064 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-net\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.338994 kubelet[2064]: I0517 00:46:12.338979 2064 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-bpf-maps\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.338994 kubelet[2064]: I0517 00:46:12.338989 2064 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-lib-modules\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.339205 kubelet[2064]: I0517 00:46:12.339000 2064 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-etc-cni-netd\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.339205 kubelet[2064]: I0517 00:46:12.339008 2064 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-cni-path\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.339205 kubelet[2064]: I0517 00:46:12.339016 2064 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcf8x\" (UniqueName: \"kubernetes.io/projected/c550b965-b6f9-4419-ac4e-7dd2f6774589-kube-api-access-gcf8x\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.339205 kubelet[2064]: I0517 00:46:12.339026 2064 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-hostproc\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.341218 kubelet[2064]: I0517 00:46:12.341163 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:46:12.341370 kubelet[2064]: I0517 00:46:12.341281 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/122b5d14-a08b-4e78-a8c0-5aadccfba353-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:46:12.345690 kubelet[2064]: I0517 00:46:12.345640 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:46:12.346577 kubelet[2064]: I0517 00:46:12.346534 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-kube-api-access-ckfpr" (OuterVolumeSpecName: "kube-api-access-ckfpr") pod "122b5d14-a08b-4e78-a8c0-5aadccfba353" (UID: "122b5d14-a08b-4e78-a8c0-5aadccfba353"). InnerVolumeSpecName "kube-api-access-ckfpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:46:12.440364 kubelet[2064]: I0517 00:46:12.440255 2064 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-hubble-tls\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.440364 kubelet[2064]: I0517 00:46:12.440342 2064 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-xtables-lock\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.440364 kubelet[2064]: I0517 00:46:12.440359 2064 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckfpr\" (UniqueName: \"kubernetes.io/projected/122b5d14-a08b-4e78-a8c0-5aadccfba353-kube-api-access-ckfpr\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.440364 kubelet[2064]: I0517 00:46:12.440373 2064 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/122b5d14-a08b-4e78-a8c0-5aadccfba353-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.440364 kubelet[2064]: I0517 00:46:12.440388 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/122b5d14-a08b-4e78-a8c0-5aadccfba353-cilium-config-path\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.440879 kubelet[2064]: I0517 00:46:12.440400 2064 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/122b5d14-a08b-4e78-a8c0-5aadccfba353-clustermesh-secrets\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\"" May 17 00:46:12.683836 kubelet[2064]: E0517 00:46:12.683784 2064 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" May 17 00:46:12.921277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863-rootfs.mount: Deactivated successfully. May 17 00:46:12.921466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2-rootfs.mount: Deactivated successfully. May 17 00:46:12.921559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36fca0a953baf86b8f8590339269f61bdd15f736111a2a21f21c33ef9a506be2-shm.mount: Deactivated successfully. May 17 00:46:12.921690 systemd[1]: var-lib-kubelet-pods-122b5d14\x2da08b\x2d4e78\x2da8c0\x2d5aadccfba353-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:46:12.921955 systemd[1]: var-lib-kubelet-pods-122b5d14\x2da08b\x2d4e78\x2da8c0\x2d5aadccfba353-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:46:12.922112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56a4ed73a12fe0339eff663b308dea70c292889403d28aaa0057e442db0d86e9-rootfs.mount: Deactivated successfully. May 17 00:46:12.922262 systemd[1]: var-lib-kubelet-pods-c550b965\x2db6f9\x2d4419\x2dac4e\x2d7dd2f6774589-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgcf8x.mount: Deactivated successfully. May 17 00:46:12.922388 systemd[1]: var-lib-kubelet-pods-122b5d14\x2da08b\x2d4e78\x2da8c0\x2d5aadccfba353-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dckfpr.mount: Deactivated successfully. 
May 17 00:46:13.001398 kubelet[2064]: I0517 00:46:13.000150 2064 scope.go:117] "RemoveContainer" containerID="26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863" May 17 00:46:13.005491 env[1303]: time="2025-05-17T00:46:13.005442062Z" level=info msg="RemoveContainer for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\"" May 17 00:46:13.010929 env[1303]: time="2025-05-17T00:46:13.010868482Z" level=info msg="RemoveContainer for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" returns successfully" May 17 00:46:13.012426 kubelet[2064]: I0517 00:46:13.012378 2064 scope.go:117] "RemoveContainer" containerID="9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621" May 17 00:46:13.018878 env[1303]: time="2025-05-17T00:46:13.018597499Z" level=info msg="RemoveContainer for \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\"" May 17 00:46:13.026389 env[1303]: time="2025-05-17T00:46:13.026344793Z" level=info msg="RemoveContainer for \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\" returns successfully" May 17 00:46:13.027442 kubelet[2064]: I0517 00:46:13.027404 2064 scope.go:117] "RemoveContainer" containerID="950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac" May 17 00:46:13.032128 env[1303]: time="2025-05-17T00:46:13.032075658Z" level=info msg="RemoveContainer for \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\"" May 17 00:46:13.038936 env[1303]: time="2025-05-17T00:46:13.038869792Z" level=info msg="RemoveContainer for \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\" returns successfully" May 17 00:46:13.039647 kubelet[2064]: I0517 00:46:13.039602 2064 scope.go:117] "RemoveContainer" containerID="1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31" May 17 00:46:13.049115 env[1303]: time="2025-05-17T00:46:13.049058388Z" level=info msg="RemoveContainer for 
\"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\"" May 17 00:46:13.052798 env[1303]: time="2025-05-17T00:46:13.052725666Z" level=info msg="RemoveContainer for \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\" returns successfully" May 17 00:46:13.053444 kubelet[2064]: I0517 00:46:13.053420 2064 scope.go:117] "RemoveContainer" containerID="fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867" May 17 00:46:13.058656 env[1303]: time="2025-05-17T00:46:13.058604411Z" level=info msg="RemoveContainer for \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\"" May 17 00:46:13.061731 env[1303]: time="2025-05-17T00:46:13.061655296Z" level=info msg="RemoveContainer for \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\" returns successfully" May 17 00:46:13.062740 kubelet[2064]: I0517 00:46:13.062603 2064 scope.go:117] "RemoveContainer" containerID="26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863" May 17 00:46:13.066690 env[1303]: time="2025-05-17T00:46:13.066568197Z" level=error msg="ContainerStatus for \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\": not found" May 17 00:46:13.068378 kubelet[2064]: E0517 00:46:13.068155 2064 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\": not found" containerID="26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863" May 17 00:46:13.068378 kubelet[2064]: I0517 00:46:13.068217 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863"} err="failed to get container status 
\"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\": rpc error: code = NotFound desc = an error occurred when try to find container \"26f9d8189e5e02c5ab2b7aa3093202582ad1ba150ecaab6e3008b62af48c5863\": not found" May 17 00:46:13.068378 kubelet[2064]: I0517 00:46:13.068303 2064 scope.go:117] "RemoveContainer" containerID="9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621" May 17 00:46:13.068972 env[1303]: time="2025-05-17T00:46:13.068887067Z" level=error msg="ContainerStatus for \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\": not found" May 17 00:46:13.069286 kubelet[2064]: E0517 00:46:13.069255 2064 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\": not found" containerID="9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621" May 17 00:46:13.069426 kubelet[2064]: I0517 00:46:13.069289 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621"} err="failed to get container status \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a4471dfcc078c1bb8faf3f6c3856dfc6f2f43cedc44f9dd9c03e3490d37d621\": not found" May 17 00:46:13.069426 kubelet[2064]: I0517 00:46:13.069337 2064 scope.go:117] "RemoveContainer" containerID="950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac" May 17 00:46:13.069755 env[1303]: time="2025-05-17T00:46:13.069687923Z" level=error msg="ContainerStatus for \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\": not found" May 17 00:46:13.070278 kubelet[2064]: E0517 00:46:13.070226 2064 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\": not found" containerID="950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac" May 17 00:46:13.070278 kubelet[2064]: I0517 00:46:13.070253 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac"} err="failed to get container status \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"950c19c4fba5ef334e6e951e91a13e2d1bcb75ec6a2a667f68af85d2c9ba38ac\": not found" May 17 00:46:13.070278 kubelet[2064]: I0517 00:46:13.070271 2064 scope.go:117] "RemoveContainer" containerID="1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31" May 17 00:46:13.070658 env[1303]: time="2025-05-17T00:46:13.070592165Z" level=error msg="ContainerStatus for \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\": not found" May 17 00:46:13.070955 kubelet[2064]: E0517 00:46:13.070923 2064 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\": not found" containerID="1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31" May 17 00:46:13.071054 kubelet[2064]: I0517 00:46:13.070959 2064 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31"} err="failed to get container status \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b56ef89c83477bab27d4340f17608978dd66b8fc43f236fc31bb6f929f6ca31\": not found" May 17 00:46:13.071054 kubelet[2064]: I0517 00:46:13.070981 2064 scope.go:117] "RemoveContainer" containerID="fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867" May 17 00:46:13.071366 env[1303]: time="2025-05-17T00:46:13.071293207Z" level=error msg="ContainerStatus for \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\": not found" May 17 00:46:13.071645 kubelet[2064]: E0517 00:46:13.071612 2064 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\": not found" containerID="fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867" May 17 00:46:13.071720 kubelet[2064]: I0517 00:46:13.071656 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867"} err="failed to get container status \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbb705d4687a5df1bce0651a651036f219e5967bccb4b1b8b92f8efe41bf4867\": not found" May 17 00:46:13.071720 kubelet[2064]: I0517 00:46:13.071683 2064 scope.go:117] "RemoveContainer" 
containerID="3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f" May 17 00:46:13.072982 env[1303]: time="2025-05-17T00:46:13.072951224Z" level=info msg="RemoveContainer for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\"" May 17 00:46:13.075981 env[1303]: time="2025-05-17T00:46:13.075927100Z" level=info msg="RemoveContainer for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" returns successfully" May 17 00:46:13.076629 kubelet[2064]: I0517 00:46:13.076600 2064 scope.go:117] "RemoveContainer" containerID="3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f" May 17 00:46:13.077011 env[1303]: time="2025-05-17T00:46:13.076938886Z" level=error msg="ContainerStatus for \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\": not found" May 17 00:46:13.077386 kubelet[2064]: E0517 00:46:13.077353 2064 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\": not found" containerID="3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f" May 17 00:46:13.078022 kubelet[2064]: I0517 00:46:13.077400 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f"} err="failed to get container status \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c15d9f5667e6fd92a69bdeb732d0ed31fea805b5751fd9f7fce554de3cd1c2f\": not found" May 17 00:46:13.531767 kubelet[2064]: I0517 00:46:13.531725 2064 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" path="/var/lib/kubelet/pods/122b5d14-a08b-4e78-a8c0-5aadccfba353/volumes" May 17 00:46:13.533191 kubelet[2064]: I0517 00:46:13.533155 2064 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c550b965-b6f9-4419-ac4e-7dd2f6774589" path="/var/lib/kubelet/pods/c550b965-b6f9-4419-ac4e-7dd2f6774589/volumes" May 17 00:46:13.814155 systemd[1]: Started sshd@26-137.184.126.228:22-147.75.109.163:55810.service. May 17 00:46:13.814679 sshd[3679]: pam_unix(sshd:session): session closed for user core May 17 00:46:13.820710 systemd[1]: sshd@25-137.184.126.228:22-147.75.109.163:55806.service: Deactivated successfully. May 17 00:46:13.825729 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:46:13.826529 systemd-logind[1288]: Session 26 logged out. Waiting for processes to exit. May 17 00:46:13.828215 systemd-logind[1288]: Removed session 26. May 17 00:46:13.886903 sshd[3850]: Accepted publickey for core from 147.75.109.163 port 55810 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:46:13.889502 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:46:13.898268 systemd[1]: Started session-27.scope. May 17 00:46:13.899453 systemd-logind[1288]: New session 27 of user core. May 17 00:46:14.770135 sshd[3850]: pam_unix(sshd:session): session closed for user core May 17 00:46:14.777445 systemd[1]: Started sshd@27-137.184.126.228:22-147.75.109.163:55816.service. May 17 00:46:14.778147 systemd[1]: sshd@26-137.184.126.228:22-147.75.109.163:55810.service: Deactivated successfully. May 17 00:46:14.780275 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:46:14.780489 systemd-logind[1288]: Session 27 logged out. Waiting for processes to exit. May 17 00:46:14.782796 systemd-logind[1288]: Removed session 27. 
May 17 00:46:14.801341 kubelet[2064]: E0517 00:46:14.801270 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" containerName="cilium-agent" May 17 00:46:14.813756 kubelet[2064]: E0517 00:46:14.813392 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c550b965-b6f9-4419-ac4e-7dd2f6774589" containerName="cilium-operator" May 17 00:46:14.813756 kubelet[2064]: E0517 00:46:14.813430 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" containerName="mount-cgroup" May 17 00:46:14.813756 kubelet[2064]: E0517 00:46:14.813442 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" containerName="apply-sysctl-overwrites" May 17 00:46:14.813756 kubelet[2064]: E0517 00:46:14.813451 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" containerName="mount-bpf-fs" May 17 00:46:14.813756 kubelet[2064]: E0517 00:46:14.813461 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" containerName="clean-cilium-state" May 17 00:46:14.813756 kubelet[2064]: I0517 00:46:14.813524 2064 memory_manager.go:354] "RemoveStaleState removing state" podUID="c550b965-b6f9-4419-ac4e-7dd2f6774589" containerName="cilium-operator" May 17 00:46:14.813756 kubelet[2064]: I0517 00:46:14.813533 2064 memory_manager.go:354] "RemoveStaleState removing state" podUID="122b5d14-a08b-4e78-a8c0-5aadccfba353" containerName="cilium-agent" May 17 00:46:14.863604 kubelet[2064]: I0517 00:46:14.863565 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-ipsec-secrets\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4" May 
May 17 00:46:14.863834 kubelet[2064]: I0517 00:46:14.863814 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-cgroup\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.863952 kubelet[2064]: I0517 00:46:14.863937 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cni-path\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.864034 kubelet[2064]: I0517 00:46:14.864022 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-etc-cni-netd\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.864112 kubelet[2064]: I0517 00:46:14.864099 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-net\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.864193 kubelet[2064]: I0517 00:46:14.864181 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-run\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874353 kubelet[2064]: I0517 00:46:14.868685 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-hubble-tls\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874353 kubelet[2064]: I0517 00:46:14.868732 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24m7x\" (UniqueName: \"kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-kube-api-access-24m7x\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874353 kubelet[2064]: I0517 00:46:14.868769 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-hostproc\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874353 kubelet[2064]: I0517 00:46:14.868796 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-xtables-lock\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874353 kubelet[2064]: I0517 00:46:14.868821 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-clustermesh-secrets\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874353 kubelet[2064]: I0517 00:46:14.868844 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-config-path\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874700 kubelet[2064]: I0517 00:46:14.868866 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-bpf-maps\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874700 kubelet[2064]: I0517 00:46:14.868892 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-lib-modules\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.874700 kubelet[2064]: I0517 00:46:14.868916 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-kernel\") pod \"cilium-vt8l4\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") " pod="kube-system/cilium-vt8l4"
May 17 00:46:14.882932 sshd[3864]: Accepted publickey for core from 147.75.109.163 port 55816 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:46:14.881992 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:46:14.890800 systemd[1]: Started session-28.scope.
May 17 00:46:14.891819 systemd-logind[1288]: New session 28 of user core.
May 17 00:46:15.142628 systemd[1]: Started sshd@28-137.184.126.228:22-147.75.109.163:55830.service.
May 17 00:46:15.147846 sshd[3864]: pam_unix(sshd:session): session closed for user core
May 17 00:46:15.154058 kubelet[2064]: E0517 00:46:15.154009 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:15.161113 env[1303]: time="2025-05-17T00:46:15.159185669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vt8l4,Uid:7bd29bad-5581-454e-a3bd-27f97f3902cc,Namespace:kube-system,Attempt:0,}"
May 17 00:46:15.162359 systemd[1]: sshd@27-137.184.126.228:22-147.75.109.163:55816.service: Deactivated successfully.
May 17 00:46:15.163938 systemd-logind[1288]: Session 28 logged out. Waiting for processes to exit.
May 17 00:46:15.163977 systemd[1]: session-28.scope: Deactivated successfully.
May 17 00:46:15.165866 systemd-logind[1288]: Removed session 28.
May 17 00:46:15.203996 env[1303]: time="2025-05-17T00:46:15.203172217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:46:15.203996 env[1303]: time="2025-05-17T00:46:15.203277421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:46:15.203996 env[1303]: time="2025-05-17T00:46:15.203300800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:46:15.203996 env[1303]: time="2025-05-17T00:46:15.203526573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263 pid=3889 runtime=io.containerd.runc.v2
May 17 00:46:15.254585 sshd[3880]: Accepted publickey for core from 147.75.109.163 port 55830 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:46:15.256445 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:46:15.264655 systemd-logind[1288]: New session 29 of user core.
May 17 00:46:15.265765 systemd[1]: Started session-29.scope.
May 17 00:46:15.313191 env[1303]: time="2025-05-17T00:46:15.313125683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vt8l4,Uid:7bd29bad-5581-454e-a3bd-27f97f3902cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263\""
May 17 00:46:15.315193 kubelet[2064]: E0517 00:46:15.314809 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:15.320877 env[1303]: time="2025-05-17T00:46:15.320265298Z" level=info msg="CreateContainer within sandbox \"52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:46:15.336915 env[1303]: time="2025-05-17T00:46:15.335212551Z" level=info msg="CreateContainer within sandbox \"52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2\""
May 17 00:46:15.338440 env[1303]: time="2025-05-17T00:46:15.338382848Z" level=info msg="StartContainer for \"94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2\""
May 17 00:46:15.426768 env[1303]: time="2025-05-17T00:46:15.426584083Z" level=info msg="StartContainer for \"94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2\" returns successfully"
May 17 00:46:15.477409 env[1303]: time="2025-05-17T00:46:15.477356697Z" level=info msg="shim disconnected" id=94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2
May 17 00:46:15.478489 env[1303]: time="2025-05-17T00:46:15.478448054Z" level=warning msg="cleaning up after shim disconnected" id=94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2 namespace=k8s.io
May 17 00:46:15.478726 env[1303]: time="2025-05-17T00:46:15.478702894Z" level=info msg="cleaning up dead shim"
May 17 00:46:15.496394 env[1303]: time="2025-05-17T00:46:15.496339556Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3979 runtime=io.containerd.runc.v2\n"
May 17 00:46:16.018749 env[1303]: time="2025-05-17T00:46:16.018689087Z" level=info msg="StopPodSandbox for \"52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263\""
May 17 00:46:16.019120 env[1303]: time="2025-05-17T00:46:16.019087621Z" level=info msg="Container to stop \"94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:46:16.022205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263-shm.mount: Deactivated successfully.
May 17 00:46:16.067685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263-rootfs.mount: Deactivated successfully.
May 17 00:46:16.079594 env[1303]: time="2025-05-17T00:46:16.079534070Z" level=info msg="shim disconnected" id=52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263
May 17 00:46:16.079868 env[1303]: time="2025-05-17T00:46:16.079841402Z" level=warning msg="cleaning up after shim disconnected" id=52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263 namespace=k8s.io
May 17 00:46:16.079968 env[1303]: time="2025-05-17T00:46:16.079953598Z" level=info msg="cleaning up dead shim"
May 17 00:46:16.097749 env[1303]: time="2025-05-17T00:46:16.097703397Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4013 runtime=io.containerd.runc.v2\n"
May 17 00:46:16.098550 env[1303]: time="2025-05-17T00:46:16.098510501Z" level=info msg="TearDown network for sandbox \"52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263\" successfully"
May 17 00:46:16.098711 env[1303]: time="2025-05-17T00:46:16.098690800Z" level=info msg="StopPodSandbox for \"52739de3cbb6ccc4655888e5d84cecdfbbc61047e16e459d51e8e42a1b3e0263\" returns successfully"
May 17 00:46:16.183700 kubelet[2064]: I0517 00:46:16.183637 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24m7x\" (UniqueName: \"kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-kube-api-access-24m7x\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.183700 kubelet[2064]: I0517 00:46:16.183706 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-xtables-lock\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184444 kubelet[2064]: I0517 00:46:16.183736 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-config-path\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184444 kubelet[2064]: I0517 00:46:16.183758 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-etc-cni-netd\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184444 kubelet[2064]: I0517 00:46:16.183784 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-clustermesh-secrets\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184444 kubelet[2064]: I0517 00:46:16.183806 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-bpf-maps\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184444 kubelet[2064]: I0517 00:46:16.183828 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-ipsec-secrets\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184444 kubelet[2064]: I0517 00:46:16.183847 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-kernel\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184710 kubelet[2064]: I0517 00:46:16.183870 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-hubble-tls\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184710 kubelet[2064]: I0517 00:46:16.183890 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-hostproc\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184710 kubelet[2064]: I0517 00:46:16.183908 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-lib-modules\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184710 kubelet[2064]: I0517 00:46:16.183929 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-run\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184710 kubelet[2064]: I0517 00:46:16.183947 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cni-path\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184710 kubelet[2064]: I0517 00:46:16.183967 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-net\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184957 kubelet[2064]: I0517 00:46:16.183988 2064 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-cgroup\") pod \"7bd29bad-5581-454e-a3bd-27f97f3902cc\" (UID: \"7bd29bad-5581-454e-a3bd-27f97f3902cc\") "
May 17 00:46:16.184957 kubelet[2064]: I0517 00:46:16.184092 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.184957 kubelet[2064]: I0517 00:46:16.184129 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.187708 kubelet[2064]: I0517 00:46:16.187585 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:46:16.187917 kubelet[2064]: I0517 00:46:16.187722 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.191617 kubelet[2064]: I0517 00:46:16.191558 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.195236 systemd[1]: var-lib-kubelet-pods-7bd29bad\x2d5581\x2d454e\x2da3bd\x2d27f97f3902cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24m7x.mount: Deactivated successfully.
May 17 00:46:16.197503 kubelet[2064]: I0517 00:46:16.196429 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-hostproc" (OuterVolumeSpecName: "hostproc") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.197503 kubelet[2064]: I0517 00:46:16.196861 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.197503 kubelet[2064]: I0517 00:46:16.196884 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.197503 kubelet[2064]: I0517 00:46:16.196899 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cni-path" (OuterVolumeSpecName: "cni-path") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.197503 kubelet[2064]: I0517 00:46:16.197165 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.197747 kubelet[2064]: I0517 00:46:16.197382 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:46:16.197982 kubelet[2064]: I0517 00:46:16.197945 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-kube-api-access-24m7x" (OuterVolumeSpecName: "kube-api-access-24m7x") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "kube-api-access-24m7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:46:16.202644 systemd[1]: var-lib-kubelet-pods-7bd29bad\x2d5581\x2d454e\x2da3bd\x2d27f97f3902cc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:46:16.205350 kubelet[2064]: I0517 00:46:16.205277 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:46:16.209263 kubelet[2064]: I0517 00:46:16.209121 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:46:16.209849 kubelet[2064]: I0517 00:46:16.209788 2064 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7bd29bad-5581-454e-a3bd-27f97f3902cc" (UID: "7bd29bad-5581-454e-a3bd-27f97f3902cc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285166 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285227 2064 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285246 2064 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-bpf-maps\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285262 2064 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-lib-modules\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285281 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-run\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285294 2064 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-hubble-tls\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285307 2064 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-hostproc\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.285407 kubelet[2064]: I0517 00:46:16.285346 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-cgroup\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.286744 kubelet[2064]: I0517 00:46:16.285358 2064 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-cni-path\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.286744 kubelet[2064]: I0517 00:46:16.285375 2064 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-host-proc-sys-net\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.287442 kubelet[2064]: I0517 00:46:16.287387 2064 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24m7x\" (UniqueName: \"kubernetes.io/projected/7bd29bad-5581-454e-a3bd-27f97f3902cc-kube-api-access-24m7x\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.287572 kubelet[2064]: I0517 00:46:16.287449 2064 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-xtables-lock\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.287572 kubelet[2064]: I0517 00:46:16.287465 2064 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bd29bad-5581-454e-a3bd-27f97f3902cc-cilium-config-path\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.287572 kubelet[2064]: I0517 00:46:16.287479 2064 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd29bad-5581-454e-a3bd-27f97f3902cc-etc-cni-netd\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.287572 kubelet[2064]: I0517 00:46:16.287493 2064 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bd29bad-5581-454e-a3bd-27f97f3902cc-clustermesh-secrets\") on node \"ci-3510.3.7-n-d30b09a4ce\" DevicePath \"\""
May 17 00:46:16.978544 systemd[1]: var-lib-kubelet-pods-7bd29bad\x2d5581\x2d454e\x2da3bd\x2d27f97f3902cc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:46:16.979052 systemd[1]: var-lib-kubelet-pods-7bd29bad\x2d5581\x2d454e\x2da3bd\x2d27f97f3902cc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 17 00:46:17.020901 kubelet[2064]: I0517 00:46:17.020868 2064 scope.go:117] "RemoveContainer" containerID="94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2"
May 17 00:46:17.024406 env[1303]: time="2025-05-17T00:46:17.024349285Z" level=info msg="RemoveContainer for \"94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2\""
May 17 00:46:17.033164 env[1303]: time="2025-05-17T00:46:17.032828515Z" level=info msg="RemoveContainer for \"94a092078ddf7e42557b8b7344cdcdec8c614e2656fb3e6c8f6c2e92041748e2\" returns successfully"
May 17 00:46:17.081033 kubelet[2064]: E0517 00:46:17.080970 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bd29bad-5581-454e-a3bd-27f97f3902cc" containerName="mount-cgroup"
May 17 00:46:17.081033 kubelet[2064]: I0517 00:46:17.081039 2064 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd29bad-5581-454e-a3bd-27f97f3902cc" containerName="mount-cgroup"
May 17 00:46:17.194514 kubelet[2064]: I0517 00:46:17.194438 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-bpf-maps\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.194514 kubelet[2064]: I0517 00:46:17.194527 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6609d497-773c-4aa9-8cbc-3acbab65aabb-clustermesh-secrets\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195097 kubelet[2064]: I0517 00:46:17.194560 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-host-proc-sys-net\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195097 kubelet[2064]: I0517 00:46:17.194609 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-host-proc-sys-kernel\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195097 kubelet[2064]: I0517 00:46:17.194637 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk2tv\" (UniqueName: \"kubernetes.io/projected/6609d497-773c-4aa9-8cbc-3acbab65aabb-kube-api-access-kk2tv\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195097 kubelet[2064]: I0517 00:46:17.194749 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-hostproc\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195097 kubelet[2064]: I0517 00:46:17.194778 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-cilium-cgroup\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195097 kubelet[2064]: I0517 00:46:17.194817 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-xtables-lock\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195341 kubelet[2064]: I0517 00:46:17.194846 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6609d497-773c-4aa9-8cbc-3acbab65aabb-hubble-tls\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195341 kubelet[2064]: I0517 00:46:17.194870 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-etc-cni-netd\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195341 kubelet[2064]: I0517 00:46:17.194993 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-cilium-run\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195341 kubelet[2064]: I0517 00:46:17.195020 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-cni-path\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195341 kubelet[2064]: I0517 00:46:17.195107 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6609d497-773c-4aa9-8cbc-3acbab65aabb-lib-modules\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195341 kubelet[2064]: I0517 00:46:17.195181 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6609d497-773c-4aa9-8cbc-3acbab65aabb-cilium-config-path\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.195519 kubelet[2064]: I0517 00:46:17.195222 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6609d497-773c-4aa9-8cbc-3acbab65aabb-cilium-ipsec-secrets\") pod \"cilium-bnksr\" (UID: \"6609d497-773c-4aa9-8cbc-3acbab65aabb\") " pod="kube-system/cilium-bnksr"
May 17 00:46:17.387938 kubelet[2064]: E0517 00:46:17.387890 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:17.390520 env[1303]: time="2025-05-17T00:46:17.390444400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnksr,Uid:6609d497-773c-4aa9-8cbc-3acbab65aabb,Namespace:kube-system,Attempt:0,}"
May 17 00:46:17.409556 env[1303]: time="2025-05-17T00:46:17.409419243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:46:17.409748 env[1303]: time="2025-05-17T00:46:17.409563106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:46:17.409748 env[1303]: time="2025-05-17T00:46:17.409604776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:46:17.410056 env[1303]: time="2025-05-17T00:46:17.409900135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac pid=4042 runtime=io.containerd.runc.v2
May 17 00:46:17.473073 env[1303]: time="2025-05-17T00:46:17.473013386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnksr,Uid:6609d497-773c-4aa9-8cbc-3acbab65aabb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\""
May 17 00:46:17.475546 kubelet[2064]: E0517 00:46:17.474212 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:17.483085 env[1303]: time="2025-05-17T00:46:17.483028582Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:46:17.517727 env[1303]: time="2025-05-17T00:46:17.517623082Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"855b17b688637fcdd6a77314059a641a175515730a0b21cc6d8eddfc5a02eea7\""
May 17 00:46:17.520507 env[1303]: time="2025-05-17T00:46:17.520177260Z" level=info msg="StartContainer for
\"855b17b688637fcdd6a77314059a641a175515730a0b21cc6d8eddfc5a02eea7\"" May 17 00:46:17.532363 kubelet[2064]: I0517 00:46:17.532261 2064 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd29bad-5581-454e-a3bd-27f97f3902cc" path="/var/lib/kubelet/pods/7bd29bad-5581-454e-a3bd-27f97f3902cc/volumes" May 17 00:46:17.612637 env[1303]: time="2025-05-17T00:46:17.612572278Z" level=info msg="StartContainer for \"855b17b688637fcdd6a77314059a641a175515730a0b21cc6d8eddfc5a02eea7\" returns successfully" May 17 00:46:17.656039 env[1303]: time="2025-05-17T00:46:17.655898434Z" level=info msg="shim disconnected" id=855b17b688637fcdd6a77314059a641a175515730a0b21cc6d8eddfc5a02eea7 May 17 00:46:17.656628 env[1303]: time="2025-05-17T00:46:17.656589015Z" level=warning msg="cleaning up after shim disconnected" id=855b17b688637fcdd6a77314059a641a175515730a0b21cc6d8eddfc5a02eea7 namespace=k8s.io May 17 00:46:17.656761 env[1303]: time="2025-05-17T00:46:17.656740508Z" level=info msg="cleaning up dead shim" May 17 00:46:17.671456 env[1303]: time="2025-05-17T00:46:17.671391905Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4126 runtime=io.containerd.runc.v2\n" May 17 00:46:17.685619 kubelet[2064]: E0517 00:46:17.685552 2064 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:46:18.026740 kubelet[2064]: E0517 00:46:18.026623 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:46:18.047359 env[1303]: time="2025-05-17T00:46:18.036367684Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:46:18.073005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622778016.mount: Deactivated successfully. May 17 00:46:18.086057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730685591.mount: Deactivated successfully. May 17 00:46:18.091595 env[1303]: time="2025-05-17T00:46:18.091528212Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4b50acc76c7fc2a3b6a037046a552525247a31f8e7e28de25b64380af3037afa\"" May 17 00:46:18.095442 env[1303]: time="2025-05-17T00:46:18.095388451Z" level=info msg="StartContainer for \"4b50acc76c7fc2a3b6a037046a552525247a31f8e7e28de25b64380af3037afa\"" May 17 00:46:18.210936 env[1303]: time="2025-05-17T00:46:18.209530222Z" level=info msg="StartContainer for \"4b50acc76c7fc2a3b6a037046a552525247a31f8e7e28de25b64380af3037afa\" returns successfully" May 17 00:46:18.255359 env[1303]: time="2025-05-17T00:46:18.255274336Z" level=info msg="shim disconnected" id=4b50acc76c7fc2a3b6a037046a552525247a31f8e7e28de25b64380af3037afa May 17 00:46:18.255785 env[1303]: time="2025-05-17T00:46:18.255755993Z" level=warning msg="cleaning up after shim disconnected" id=4b50acc76c7fc2a3b6a037046a552525247a31f8e7e28de25b64380af3037afa namespace=k8s.io May 17 00:46:18.255906 env[1303]: time="2025-05-17T00:46:18.255887493Z" level=info msg="cleaning up dead shim" May 17 00:46:18.271088 env[1303]: time="2025-05-17T00:46:18.271039301Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" May 17 00:46:19.031343 kubelet[2064]: E0517 00:46:19.031282 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" May 17 00:46:19.034684 env[1303]: time="2025-05-17T00:46:19.034623941Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:46:19.067029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446499361.mount: Deactivated successfully. May 17 00:46:19.071580 env[1303]: time="2025-05-17T00:46:19.071402233Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb6588d27da2e5ed578b2708acc5eff098cfc255f6bfec0ef15faa9f91c5b156\"" May 17 00:46:19.075175 env[1303]: time="2025-05-17T00:46:19.075077920Z" level=info msg="StartContainer for \"cb6588d27da2e5ed578b2708acc5eff098cfc255f6bfec0ef15faa9f91c5b156\"" May 17 00:46:19.187648 env[1303]: time="2025-05-17T00:46:19.187593220Z" level=info msg="StartContainer for \"cb6588d27da2e5ed578b2708acc5eff098cfc255f6bfec0ef15faa9f91c5b156\" returns successfully" May 17 00:46:19.228470 env[1303]: time="2025-05-17T00:46:19.228405175Z" level=info msg="shim disconnected" id=cb6588d27da2e5ed578b2708acc5eff098cfc255f6bfec0ef15faa9f91c5b156 May 17 00:46:19.228470 env[1303]: time="2025-05-17T00:46:19.228453056Z" level=warning msg="cleaning up after shim disconnected" id=cb6588d27da2e5ed578b2708acc5eff098cfc255f6bfec0ef15faa9f91c5b156 namespace=k8s.io May 17 00:46:19.228470 env[1303]: time="2025-05-17T00:46:19.228462844Z" level=info msg="cleaning up dead shim" May 17 00:46:19.241131 env[1303]: time="2025-05-17T00:46:19.241073467Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4250 runtime=io.containerd.runc.v2\n" May 17 00:46:19.980230 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-cb6588d27da2e5ed578b2708acc5eff098cfc255f6bfec0ef15faa9f91c5b156-rootfs.mount: Deactivated successfully. May 17 00:46:20.037113 kubelet[2064]: E0517 00:46:20.036620 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:46:20.054354 env[1303]: time="2025-05-17T00:46:20.051689857Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:46:20.078531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994152982.mount: Deactivated successfully. May 17 00:46:20.111269 env[1303]: time="2025-05-17T00:46:20.111208082Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3d53ce318af1cb445dce066bf1ab5e98b49ba00bddc856db2c9e42e894289fa2\"" May 17 00:46:20.112718 env[1303]: time="2025-05-17T00:46:20.112668793Z" level=info msg="StartContainer for \"3d53ce318af1cb445dce066bf1ab5e98b49ba00bddc856db2c9e42e894289fa2\"" May 17 00:46:20.192140 env[1303]: time="2025-05-17T00:46:20.192082418Z" level=info msg="StartContainer for \"3d53ce318af1cb445dce066bf1ab5e98b49ba00bddc856db2c9e42e894289fa2\" returns successfully" May 17 00:46:20.221288 env[1303]: time="2025-05-17T00:46:20.221227728Z" level=info msg="shim disconnected" id=3d53ce318af1cb445dce066bf1ab5e98b49ba00bddc856db2c9e42e894289fa2 May 17 00:46:20.221683 env[1303]: time="2025-05-17T00:46:20.221658990Z" level=warning msg="cleaning up after shim disconnected" id=3d53ce318af1cb445dce066bf1ab5e98b49ba00bddc856db2c9e42e894289fa2 namespace=k8s.io May 17 00:46:20.221850 env[1303]: time="2025-05-17T00:46:20.221833440Z" level=info msg="cleaning up dead 
shim" May 17 00:46:20.235070 env[1303]: time="2025-05-17T00:46:20.234520760Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:46:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4307 runtime=io.containerd.runc.v2\n" May 17 00:46:20.265639 kubelet[2064]: I0517 00:46:20.265305 2064 setters.go:600] "Node became not ready" node="ci-3510.3.7-n-d30b09a4ce" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:46:20Z","lastTransitionTime":"2025-05-17T00:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:46:21.041781 kubelet[2064]: E0517 00:46:21.041733 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:46:21.045697 env[1303]: time="2025-05-17T00:46:21.045646034Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:46:21.069486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900947716.mount: Deactivated successfully. 
May 17 00:46:21.072306 env[1303]: time="2025-05-17T00:46:21.072243268Z" level=info msg="CreateContainer within sandbox \"2d892b28f966670c870d3627cfac11dd511dff3b3be6a007b55311c13fdec8ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a08c0a9fb95a5e77c34bf68f610b338b810eb22de780194f7b845ce98750832\""
May 17 00:46:21.078374 env[1303]: time="2025-05-17T00:46:21.078235460Z" level=info msg="StartContainer for \"7a08c0a9fb95a5e77c34bf68f610b338b810eb22de780194f7b845ce98750832\""
May 17 00:46:21.181808 env[1303]: time="2025-05-17T00:46:21.181746972Z" level=info msg="StartContainer for \"7a08c0a9fb95a5e77c34bf68f610b338b810eb22de780194f7b845ce98750832\" returns successfully"
May 17 00:46:21.765364 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:46:22.049236 kubelet[2064]: E0517 00:46:22.048967 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:22.080571 kubelet[2064]: I0517 00:46:22.080478 2064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bnksr" podStartSLOduration=5.077584605 podStartE2EDuration="5.077584605s" podCreationTimestamp="2025-05-17 00:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:46:22.074427273 +0000 UTC m=+134.839066416" watchObservedRunningTime="2025-05-17 00:46:22.077584605 +0000 UTC m=+134.842223749"
May 17 00:46:23.389826 kubelet[2064]: E0517 00:46:23.389781 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:23.834005 systemd[1]: run-containerd-runc-k8s.io-7a08c0a9fb95a5e77c34bf68f610b338b810eb22de780194f7b845ce98750832-runc.sANeqU.mount: Deactivated successfully.
May 17 00:46:25.166053 systemd-networkd[1071]: lxc_health: Link UP
May 17 00:46:25.178968 systemd-networkd[1071]: lxc_health: Gained carrier
May 17 00:46:25.179434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:46:25.399131 kubelet[2064]: E0517 00:46:25.399073 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:26.064733 kubelet[2064]: E0517 00:46:26.061074 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:26.076761 systemd[1]: run-containerd-runc-k8s.io-7a08c0a9fb95a5e77c34bf68f610b338b810eb22de780194f7b845ce98750832-runc.fO33Mp.mount: Deactivated successfully.
May 17 00:46:26.969538 systemd-networkd[1071]: lxc_health: Gained IPv6LL
May 17 00:46:27.062890 kubelet[2064]: E0517 00:46:27.062849 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:30.530900 kubelet[2064]: E0517 00:46:30.530184 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:46:30.542627 systemd[1]: run-containerd-runc-k8s.io-7a08c0a9fb95a5e77c34bf68f610b338b810eb22de780194f7b845ce98750832-runc.n6cWXs.mount: Deactivated successfully.
May 17 00:46:30.669391 sshd[3880]: pam_unix(sshd:session): session closed for user core
May 17 00:46:30.674014 systemd[1]: sshd@28-137.184.126.228:22-147.75.109.163:55830.service: Deactivated successfully.
May 17 00:46:30.675221 systemd[1]: session-29.scope: Deactivated successfully.
May 17 00:46:30.675267 systemd-logind[1288]: Session 29 logged out. Waiting for processes to exit.
May 17 00:46:30.677250 systemd-logind[1288]: Removed session 29.