May 17 00:35:36.029316 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:35:36.029347 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:35:36.029361 kernel: BIOS-provided physical RAM map:
May 17 00:35:36.029371 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:35:36.029381 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 17 00:35:36.029391 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 17 00:35:36.029405 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
May 17 00:35:36.029416 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 17 00:35:36.029425 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 17 00:35:36.029435 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 17 00:35:36.029445 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 17 00:35:36.029455 kernel: printk: bootconsole [earlyser0] enabled
May 17 00:35:36.029465 kernel: NX (Execute Disable) protection: active
May 17 00:35:36.029475 kernel: efi: EFI v2.70 by Microsoft
May 17 00:35:36.029491 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
May 17 00:35:36.029503 kernel: random: crng init done
May 17 00:35:36.029514 kernel: SMBIOS 3.1.0 present.
May 17 00:35:36.029526 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 17 00:35:36.029537 kernel: Hypervisor detected: Microsoft Hyper-V
May 17 00:35:36.029548 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 17 00:35:36.029560 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
May 17 00:35:36.029571 kernel: Hyper-V: Nested features: 0x1e0101
May 17 00:35:36.029584 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 17 00:35:36.029595 kernel: Hyper-V: Using hypercall for remote TLB flush
May 17 00:35:36.029607 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 17 00:35:36.029618 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 17 00:35:36.029630 kernel: tsc: Detected 2593.907 MHz processor
May 17 00:35:36.029642 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:35:36.029653 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:35:36.029664 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 17 00:35:36.029676 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:35:36.029687 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 17 00:35:36.029701 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 17 00:35:36.029713 kernel: Using GB pages for direct mapping
May 17 00:35:36.029723 kernel: Secure boot disabled
May 17 00:35:36.029735 kernel: ACPI: Early table checksum verification disabled
May 17 00:35:36.029746 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 17 00:35:36.029758 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029770 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029782 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 17 00:35:36.029803 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 17 00:35:36.029816 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029829 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029841 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029853 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029866 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029897 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029909 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:35:36.029921 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 17 00:35:36.029933 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 17 00:35:36.029946 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 17 00:35:36.029958 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 17 00:35:36.029971 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 17 00:35:36.029983 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 17 00:35:36.029998 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 17 00:35:36.030011 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 17 00:35:36.030024 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 17 00:35:36.030037 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 17 00:35:36.030049 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:35:36.030061 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:35:36.030073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 17 00:35:36.030086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 17 00:35:36.030097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 17 00:35:36.030113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 17 00:35:36.030125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 17 00:35:36.030138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 17 00:35:36.030151 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 17 00:35:36.030163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 17 00:35:36.030177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 17 00:35:36.030190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 17 00:35:36.030203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 17 00:35:36.030215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 17 00:35:36.030230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 17 00:35:36.030243 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 17 00:35:36.030255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 17 00:35:36.030267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 17 00:35:36.030280 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 17 00:35:36.030292 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 17 00:35:36.030305 kernel: Zone ranges:
May 17 00:35:36.030318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:35:36.030330 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:35:36.030344 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:35:36.030356 kernel: Movable zone start for each node
May 17 00:35:36.030369 kernel: Early memory node ranges
May 17 00:35:36.030382 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:35:36.030395 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 17 00:35:36.030408 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 17 00:35:36.030420 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:35:36.030432 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 17 00:35:36.030444 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:35:36.030459 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:35:36.030472 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 17 00:35:36.030484 kernel: ACPI: PM-Timer IO Port: 0x408
May 17 00:35:36.030496 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 17 00:35:36.030508 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 17 00:35:36.030520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:35:36.030532 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:35:36.030544 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 17 00:35:36.030557 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:35:36.030571 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 17 00:35:36.030583 kernel: Booting paravirtualized kernel on Hyper-V
May 17 00:35:36.030596 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:35:36.030608 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:35:36.030620 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:35:36.030633 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:35:36.030645 kernel: pcpu-alloc: [0] 0 1
May 17 00:35:36.030657 kernel: Hyper-V: PV spinlocks enabled
May 17 00:35:36.030669 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:35:36.030684 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 17 00:35:36.030696 kernel: Policy zone: Normal
May 17 00:35:36.030710 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:35:36.030723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:35:36.030735 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 17 00:35:36.030748 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:35:36.030760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:35:36.030773 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 308056K reserved, 0K cma-reserved)
May 17 00:35:36.030789 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:35:36.030801 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:35:36.030823 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:35:36.030839 kernel: rcu: Hierarchical RCU implementation.
May 17 00:35:36.030853 kernel: rcu: RCU event tracing is enabled.
May 17 00:35:36.030866 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:35:36.030891 kernel: Rude variant of Tasks RCU enabled.
May 17 00:35:36.030905 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:35:36.030918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:35:36.030931 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:35:36.030945 kernel: Using NULL legacy PIC
May 17 00:35:36.030961 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 17 00:35:36.030975 kernel: Console: colour dummy device 80x25
May 17 00:35:36.030988 kernel: printk: console [tty1] enabled
May 17 00:35:36.031001 kernel: printk: console [ttyS0] enabled
May 17 00:35:36.031015 kernel: printk: bootconsole [earlyser0] disabled
May 17 00:35:36.031030 kernel: ACPI: Core revision 20210730
May 17 00:35:36.031043 kernel: Failed to register legacy timer interrupt
May 17 00:35:36.031057 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:35:36.031070 kernel: Hyper-V: Using IPI hypercalls
May 17 00:35:36.031082 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
May 17 00:35:36.031095 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:35:36.031107 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:35:36.031121 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:35:36.031135 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:35:36.031146 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:35:36.031162 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:35:36.031175 kernel: RETBleed: Vulnerable
May 17 00:35:36.031188 kernel: Speculative Store Bypass: Vulnerable
May 17 00:35:36.031202 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:35:36.031214 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:35:36.031227 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:35:36.031240 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:35:36.031253 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:35:36.031265 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:35:36.031278 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:35:36.031294 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:35:36.031307 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:35:36.031319 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 17 00:35:36.031331 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 17 00:35:36.031355 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 17 00:35:36.031368 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 17 00:35:36.031382 kernel: Freeing SMP alternatives memory: 32K
May 17 00:35:36.031395 kernel: pid_max: default: 32768 minimum: 301
May 17 00:35:36.031409 kernel: LSM: Security Framework initializing
May 17 00:35:36.031422 kernel: SELinux: Initializing.
May 17 00:35:36.031435 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:35:36.031448 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:35:36.031465 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:35:36.031477 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:35:36.031491 kernel: signal: max sigframe size: 3632
May 17 00:35:36.031504 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:35:36.031518 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:35:36.031531 kernel: smp: Bringing up secondary CPUs ...
May 17 00:35:36.031543 kernel: x86: Booting SMP configuration:
May 17 00:35:36.031557 kernel: .... node #0, CPUs: #1
May 17 00:35:36.031572 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 17 00:35:36.031589 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:35:36.031601 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:35:36.031614 kernel: smpboot: Max logical packages: 1
May 17 00:35:36.031626 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
May 17 00:35:36.031637 kernel: devtmpfs: initialized
May 17 00:35:36.031649 kernel: x86/mm: Memory block size: 128MB
May 17 00:35:36.031663 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 17 00:35:36.031677 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:35:36.031691 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:35:36.031708 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:35:36.031720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:35:36.031732 kernel: audit: initializing netlink subsys (disabled)
May 17 00:35:36.031747 kernel: audit: type=2000 audit(1747442135.023:1): state=initialized audit_enabled=0 res=1
May 17 00:35:36.031761 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:35:36.031774 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:35:36.031788 kernel: cpuidle: using governor menu
May 17 00:35:36.031802 kernel: ACPI: bus type PCI registered
May 17 00:35:36.031816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:35:36.031833 kernel: dca service started, version 1.12.1
May 17 00:35:36.031847 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:35:36.031861 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:35:36.031919 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:35:36.031935 kernel: ACPI: Added _OSI(Module Device)
May 17 00:35:36.031948 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:35:36.031960 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:35:36.031972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:35:36.031983 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:35:36.031998 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:35:36.032010 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:35:36.032021 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:35:36.032033 kernel: ACPI: Interpreter enabled
May 17 00:35:36.032046 kernel: ACPI: PM: (supports S0 S5)
May 17 00:35:36.032059 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:35:36.032073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:35:36.032086 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 17 00:35:36.032100 kernel: iommu: Default domain type: Translated
May 17 00:35:36.032116 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:35:36.032129 kernel: vgaarb: loaded
May 17 00:35:36.032143 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:35:36.032157 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:35:36.032171 kernel: PTP clock support registered
May 17 00:35:36.032183 kernel: Registered efivars operations
May 17 00:35:36.032196 kernel: PCI: Using ACPI for IRQ routing
May 17 00:35:36.032209 kernel: PCI: System does not support PCI
May 17 00:35:36.032223 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 17 00:35:36.032239 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:35:36.032252 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:35:36.032264 kernel: pnp: PnP ACPI init
May 17 00:35:36.032276 kernel: pnp: PnP ACPI: found 3 devices
May 17 00:35:36.032288 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:35:36.032298 kernel: NET: Registered PF_INET protocol family
May 17 00:35:36.032310 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:35:36.032323 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 17 00:35:36.032335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:35:36.032349 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:35:36.032362 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 00:35:36.032374 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 17 00:35:36.032387 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:35:36.032400 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:35:36.032412 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:35:36.032425 kernel: NET: Registered PF_XDP protocol family
May 17 00:35:36.032438 kernel: PCI: CLS 0 bytes, default 64
May 17 00:35:36.032452 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:35:36.032468 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
May 17 00:35:36.032481 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:35:36.032494 kernel: Initialise system trusted keyrings
May 17 00:35:36.032507 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 17 00:35:36.032521 kernel: Key type asymmetric registered
May 17 00:35:36.032534 kernel: Asymmetric key parser 'x509' registered
May 17 00:35:36.032547 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:35:36.032560 kernel: io scheduler mq-deadline registered
May 17 00:35:36.032573 kernel: io scheduler kyber registered
May 17 00:35:36.032589 kernel: io scheduler bfq registered
May 17 00:35:36.032602 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:35:36.032616 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:35:36.032630 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:35:36.032643 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:35:36.032657 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:35:36.032812 kernel: rtc_cmos 00:02: registered as rtc0
May 17 00:35:36.037752 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:35:35 UTC (1747442135)
May 17 00:35:36.042565 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 17 00:35:36.042590 kernel: intel_pstate: CPU model not supported
May 17 00:35:36.042605 kernel: efifb: probing for efifb
May 17 00:35:36.042619 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:35:36.042633 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:35:36.042646 kernel: efifb: scrolling: redraw
May 17 00:35:36.042660 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:35:36.042674 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:35:36.042691 kernel: fb0: EFI VGA frame buffer device
May 17 00:35:36.042705 kernel: pstore: Registered efi as persistent store backend
May 17 00:35:36.042719 kernel: NET: Registered PF_INET6 protocol family
May 17 00:35:36.042733 kernel: Segment Routing with IPv6
May 17 00:35:36.042746 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:35:36.042759 kernel: NET: Registered PF_PACKET protocol family
May 17 00:35:36.042773 kernel: Key type dns_resolver registered
May 17 00:35:36.042786 kernel: IPI shorthand broadcast: enabled
May 17 00:35:36.042800 kernel: sched_clock: Marking stable (805244300, 19932300)->(984403300, -159226700)
May 17 00:35:36.042814 kernel: registered taskstats version 1
May 17 00:35:36.042830 kernel: Loading compiled-in X.509 certificates
May 17 00:35:36.042844 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:35:36.042857 kernel: Key type .fscrypt registered
May 17 00:35:36.042870 kernel: Key type fscrypt-provisioning registered
May 17 00:35:36.042894 kernel: pstore: Using crash dump compression: deflate
May 17 00:35:36.042908 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:35:36.042922 kernel: ima: Allocated hash algorithm: sha1
May 17 00:35:36.042935 kernel: ima: No architecture policies found
May 17 00:35:36.042952 kernel: clk: Disabling unused clocks
May 17 00:35:36.042965 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:35:36.042979 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:35:36.042992 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:35:36.043006 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:35:36.043020 kernel: Run /init as init process
May 17 00:35:36.043033 kernel: with arguments:
May 17 00:35:36.043046 kernel: /init
May 17 00:35:36.043060 kernel: with environment:
May 17 00:35:36.043076 kernel: HOME=/
May 17 00:35:36.043089 kernel: TERM=linux
May 17 00:35:36.043102 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:35:36.043118 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:35:36.043135 systemd[1]: Detected virtualization microsoft.
May 17 00:35:36.043150 systemd[1]: Detected architecture x86-64.
May 17 00:35:36.043164 systemd[1]: Running in initrd.
May 17 00:35:36.043178 systemd[1]: No hostname configured, using default hostname.
May 17 00:35:36.043194 systemd[1]: Hostname set to .
May 17 00:35:36.043209 systemd[1]: Initializing machine ID from random generator.
May 17 00:35:36.043223 systemd[1]: Queued start job for default target initrd.target.
May 17 00:35:36.043237 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:35:36.043251 systemd[1]: Reached target cryptsetup.target.
May 17 00:35:36.043265 systemd[1]: Reached target paths.target.
May 17 00:35:36.043278 systemd[1]: Reached target slices.target.
May 17 00:35:36.043292 systemd[1]: Reached target swap.target.
May 17 00:35:36.043308 systemd[1]: Reached target timers.target.
May 17 00:35:36.043323 systemd[1]: Listening on iscsid.socket.
May 17 00:35:36.043337 systemd[1]: Listening on iscsiuio.socket.
May 17 00:35:36.043351 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:35:36.043366 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:35:36.043380 systemd[1]: Listening on systemd-journald.socket.
May 17 00:35:36.043395 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:35:36.043409 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:35:36.043426 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:35:36.043440 systemd[1]: Reached target sockets.target.
May 17 00:35:36.043454 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:35:36.043468 systemd[1]: Finished network-cleanup.service.
May 17 00:35:36.043482 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:35:36.043497 systemd[1]: Starting systemd-journald.service...
May 17 00:35:36.043511 systemd[1]: Starting systemd-modules-load.service...
May 17 00:35:36.043525 systemd[1]: Starting systemd-resolved.service...
May 17 00:35:36.043539 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:35:36.043556 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:35:36.043573 systemd-journald[183]: Journal started
May 17 00:35:36.043636 systemd-journald[183]: Runtime Journal (/run/log/journal/6390747dbf77483089d51e37febbde86) is 8.0M, max 159.0M, 151.0M free.
May 17 00:35:36.032303 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:35:36.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.061899 kernel: audit: type=1130 audit(1747442136.048:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.061925 systemd[1]: Started systemd-journald.service.
May 17 00:35:36.077777 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:35:36.078364 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:35:36.081944 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:35:36.097590 kernel: audit: type=1130 audit(1747442136.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.097693 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:35:36.098330 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:35:36.103109 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:35:36.170175 kernel: audit: type=1130 audit(1747442136.080:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.170209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:35:36.170232 kernel: audit: type=1130 audit(1747442136.093:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.170248 kernel: audit: type=1130 audit(1747442136.138:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.170264 kernel: Bridge firewalling registered
May 17 00:35:36.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:36.106946 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:35:36.134851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:35:36.137828 systemd[1]: Started systemd-resolved.service.
May 17 00:35:36.138163 systemd[1]: Reached target nss-lookup.target.
May 17 00:35:36.170321 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:35:36.188701 dracut-cmdline[200]: dracut-dracut-053
May 17 00:35:36.173244 systemd[1]: Starting dracut-cmdline.service...
May 17 00:35:36.178974 systemd-modules-load[184]: Inserted module 'br_netfilter' May 17 00:35:36.194896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:35:36.200850 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:35:36.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.235650 kernel: audit: type=1130 audit(1747442136.171:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.235707 kernel: audit: type=1130 audit(1747442136.196:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.239482 kernel: SCSI subsystem initialized May 17 00:35:36.264959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:35:36.265025 kernel: device-mapper: uevent: version 1.0.3 May 17 00:35:36.270279 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:35:36.274951 systemd-modules-load[184]: Inserted module 'dm_multipath' May 17 00:35:36.277124 systemd[1]: Finished systemd-modules-load.service. May 17 00:35:36.300756 kernel: audit: type=1130 audit(1747442136.279:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.300787 kernel: Loading iSCSI transport class v2.0-870. May 17 00:35:36.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.281348 systemd[1]: Starting systemd-sysctl.service... May 17 00:35:36.304780 systemd[1]: Finished systemd-sysctl.service. May 17 00:35:36.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.319891 kernel: audit: type=1130 audit(1747442136.305:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.334904 kernel: iscsi: registered transport (tcp) May 17 00:35:36.360839 kernel: iscsi: registered transport (qla4xxx) May 17 00:35:36.360908 kernel: QLogic iSCSI HBA Driver May 17 00:35:36.390240 systemd[1]: Finished dracut-cmdline.service. May 17 00:35:36.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:36.393622 systemd[1]: Starting dracut-pre-udev.service... May 17 00:35:36.442900 kernel: raid6: avx512x4 gen() 18386 MB/s May 17 00:35:36.461891 kernel: raid6: avx512x4 xor() 8293 MB/s May 17 00:35:36.481889 kernel: raid6: avx512x2 gen() 18309 MB/s May 17 00:35:36.501893 kernel: raid6: avx512x2 xor() 29529 MB/s May 17 00:35:36.520886 kernel: raid6: avx512x1 gen() 18325 MB/s May 17 00:35:36.539888 kernel: raid6: avx512x1 xor() 26679 MB/s May 17 00:35:36.559891 kernel: raid6: avx2x4 gen() 18276 MB/s May 17 00:35:36.578886 kernel: raid6: avx2x4 xor() 8016 MB/s May 17 00:35:36.597887 kernel: raid6: avx2x2 gen() 18221 MB/s May 17 00:35:36.617892 kernel: raid6: avx2x2 xor() 22292 MB/s May 17 00:35:36.636885 kernel: raid6: avx2x1 gen() 14012 MB/s May 17 00:35:36.662917 kernel: raid6: avx2x1 xor() 19414 MB/s May 17 00:35:36.681890 kernel: raid6: sse2x4 gen() 11712 MB/s May 17 00:35:36.700887 kernel: raid6: sse2x4 xor() 7389 MB/s May 17 00:35:36.720892 kernel: raid6: sse2x2 gen() 12571 MB/s May 17 00:35:36.739884 kernel: raid6: sse2x2 xor() 7621 MB/s May 17 00:35:36.758889 kernel: raid6: sse2x1 gen() 11585 MB/s May 17 00:35:36.781634 kernel: raid6: sse2x1 xor() 5896 MB/s May 17 00:35:36.781669 kernel: raid6: using algorithm avx512x4 gen() 18386 MB/s May 17 00:35:36.781690 kernel: raid6: .... xor() 8293 MB/s, rmw enabled May 17 00:35:36.784731 kernel: raid6: using avx512x2 recovery algorithm May 17 00:35:36.802895 kernel: xor: automatically using best checksumming function avx May 17 00:35:36.898901 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:35:36.906537 systemd[1]: Finished dracut-pre-udev.service. May 17 00:35:36.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:36.909000 audit: BPF prog-id=7 op=LOAD May 17 00:35:36.909000 audit: BPF prog-id=8 op=LOAD May 17 00:35:36.910444 systemd[1]: Starting systemd-udevd.service... May 17 00:35:36.924821 systemd-udevd[384]: Using default interface naming scheme 'v252'. May 17 00:35:36.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.931665 systemd[1]: Started systemd-udevd.service. May 17 00:35:36.936959 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:35:36.954980 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation May 17 00:35:36.985071 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:35:36.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:36.989839 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:35:37.023313 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:35:37.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:37.070894 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:35:37.083900 kernel: hv_vmbus: Vmbus version:5.2 May 17 00:35:37.102891 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:35:37.118897 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:35:37.132893 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 17 00:35:37.144893 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:35:37.148890 kernel: AVX2 version of gcm_enc/dec engaged. 
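The raid6 lines above are the kernel benchmarking each SIMD implementation at boot and then announcing "using algorithm avx512x4 gen() 18386 MB/s". The selection reduces roughly to "fastest gen() throughput wins", which can be sketched with the figures copied from the log:

```python
# Throughput figures (MB/s) copied from the raid6 benchmark lines in the log above
gen_results = {
    "avx512x4": 18386,
    "avx512x2": 18309,
    "avx512x1": 18325,
    "avx2x4": 18276,
    "avx2x2": 18221,
    "avx2x1": 14012,
    "sse2x4": 11712,
    "sse2x2": 12571,
    "sse2x1": 11585,
}

# The kernel's choice amounts to picking the fastest generator
best = max(gen_results, key=gen_results.get)
print(best)  # avx512x4, matching "raid6: using algorithm avx512x4"
```

(The recovery algorithm is chosen separately, which is why the log picks avx512x2 for recovery.)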
May 17 00:35:37.153887 kernel: AES CTR mode by8 optimization enabled May 17 00:35:37.158887 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:35:37.166894 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:35:37.166932 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 17 00:35:37.176361 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:35:37.179889 kernel: scsi host0: storvsc_host_t May 17 00:35:37.180073 kernel: scsi host1: storvsc_host_t May 17 00:35:37.188892 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:35:37.193894 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:35:37.224896 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:35:37.231815 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:35:37.231837 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:35:37.245340 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:35:37.245516 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:35:37.245672 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:35:37.245823 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:35:37.245997 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:35:37.246153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:35:37.246172 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:35:37.297902 kernel: hv_netvsc 7c1e5204-badd-7c1e-5204-badd7c1e5204 eth0: VF slot 1 added May 17 00:35:37.308892 kernel: hv_vmbus: registering driver hv_pci May 17 00:35:37.308930 kernel: hv_pci a5efe256-5b42-4a04-9934-903dbd33b08e: PCI VMBus probing: Using version 0x10004 May 17 00:35:37.389323 kernel: hv_pci a5efe256-5b42-4a04-9934-903dbd33b08e: PCI host bridge to bus 5b42:00 May 17 00:35:37.389570 kernel: pci_bus 5b42:00: root bus resource [mem 
0xfe0000000-0xfe00fffff window] May 17 00:35:37.389759 kernel: pci_bus 5b42:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:35:37.389954 kernel: pci 5b42:00:02.0: [15b3:1016] type 00 class 0x020000 May 17 00:35:37.390128 kernel: pci 5b42:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:35:37.390285 kernel: pci 5b42:00:02.0: enabling Extended Tags May 17 00:35:37.390440 kernel: pci 5b42:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5b42:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:35:37.390594 kernel: pci_bus 5b42:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:35:37.390738 kernel: pci 5b42:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:35:37.481900 kernel: mlx5_core 5b42:00:02.0: firmware version: 14.30.5000 May 17 00:35:37.729857 kernel: mlx5_core 5b42:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 00:35:37.730052 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449) May 17 00:35:37.730072 kernel: mlx5_core 5b42:00:02.0: Supported tc offload range - chains: 1, prios: 1 May 17 00:35:37.730221 kernel: mlx5_core 5b42:00:02.0: mlx5e_tc_post_act_init:40:(pid 238): firmware level support is missing May 17 00:35:37.730364 kernel: hv_netvsc 7c1e5204-badd-7c1e-5204-badd7c1e5204 eth0: VF registering: eth1 May 17 00:35:37.730460 kernel: mlx5_core 5b42:00:02.0 eth1: joined to eth0 May 17 00:35:37.710829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:35:37.720700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:35:37.741897 kernel: mlx5_core 5b42:00:02.0 enP23362s1: renamed from eth1 May 17 00:35:37.909070 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:35:37.959476 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
May 17 00:35:37.962491 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:35:37.964058 systemd[1]: Starting disk-uuid.service... May 17 00:35:38.987904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:35:38.988919 disk-uuid[559]: The operation has completed successfully. May 17 00:35:39.060411 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:35:39.060526 systemd[1]: Finished disk-uuid.service. May 17 00:35:39.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.068269 systemd[1]: Starting verity-setup.service... May 17 00:35:39.120903 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:35:39.404798 systemd[1]: Found device dev-mapper-usr.device. May 17 00:35:39.409443 systemd[1]: Mounting sysusr-usr.mount... May 17 00:35:39.412770 systemd[1]: Finished verity-setup.service. May 17 00:35:39.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.486619 systemd[1]: Mounted sysusr-usr.mount. May 17 00:35:39.489689 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:35:39.489799 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:35:39.493850 systemd[1]: Starting ignition-setup.service... May 17 00:35:39.497937 systemd[1]: Starting parse-ip-for-networkd.service... 
May 17 00:35:39.517475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:35:39.517518 kernel: BTRFS info (device sda6): using free space tree May 17 00:35:39.517532 kernel: BTRFS info (device sda6): has skinny extents May 17 00:35:39.567081 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:35:39.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.570000 audit: BPF prog-id=9 op=LOAD May 17 00:35:39.571982 systemd[1]: Starting systemd-networkd.service... May 17 00:35:39.595581 systemd-networkd[829]: lo: Link UP May 17 00:35:39.595590 systemd-networkd[829]: lo: Gained carrier May 17 00:35:39.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.596151 systemd-networkd[829]: Enumeration completed May 17 00:35:39.596509 systemd[1]: Started systemd-networkd.service. May 17 00:35:39.598974 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:35:39.599782 systemd[1]: Reached target network.target. May 17 00:35:39.604672 systemd[1]: Starting iscsiuio.service... May 17 00:35:39.613185 systemd[1]: Started iscsiuio.service. May 17 00:35:39.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.617724 systemd[1]: Starting iscsid.service... 
May 17 00:35:39.622896 iscsid[834]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:35:39.622896 iscsid[834]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:35:39.622896 iscsid[834]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:35:39.622896 iscsid[834]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:35:39.622896 iscsid[834]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:35:39.622896 iscsid[834]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:35:39.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.622490 systemd[1]: Started iscsid.service. May 17 00:35:39.641181 systemd[1]: Starting dracut-initqueue.service... May 17 00:35:39.656262 systemd[1]: Finished dracut-initqueue.service. May 17 00:35:39.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.660687 systemd[1]: Reached target remote-fs-pre.target. May 17 00:35:39.669871 kernel: mlx5_core 5b42:00:02.0 enP23362s1: Link up May 17 00:35:39.662763 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:35:39.667151 systemd[1]: Reached target remote-fs.target. May 17 00:35:39.674454 systemd[1]: Starting dracut-pre-mount.service...
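The iscsid warning above spells out its own remedy: a one-line initiator-name file. Using the example name from the message itself, the file would look like:

```
# /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2001-04.com.redhat:fc6
```

As the message notes, this is harmless to ignore when no software-iSCSI targets are in use, which is why boot proceeds normally here.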
May 17 00:35:39.682389 systemd[1]: Finished dracut-pre-mount.service. May 17 00:35:39.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.698905 kernel: hv_netvsc 7c1e5204-badd-7c1e-5204-badd7c1e5204 eth0: Data path switched to VF: enP23362s1 May 17 00:35:39.703389 systemd-networkd[829]: enP23362s1: Link UP May 17 00:35:39.705015 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:35:39.704368 systemd-networkd[829]: eth0: Link UP May 17 00:35:39.704517 systemd-networkd[829]: eth0: Gained carrier May 17 00:35:39.709043 systemd-networkd[829]: enP23362s1: Gained carrier May 17 00:35:39.732958 systemd-networkd[829]: eth0: DHCPv4 address 10.200.4.16/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:35:39.743806 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:35:39.904994 systemd[1]: Finished ignition-setup.service. May 17 00:35:39.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:39.909453 systemd[1]: Starting ignition-fetch-offline.service... 
May 17 00:35:41.568056 systemd-networkd[829]: eth0: Gained IPv6LL May 17 00:35:43.537322 ignition[856]: Ignition 2.14.0 May 17 00:35:43.537338 ignition[856]: Stage: fetch-offline May 17 00:35:43.537432 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:35:43.537491 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:35:43.615848 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:35:43.618864 ignition[856]: parsed url from cmdline: "" May 17 00:35:43.618883 ignition[856]: no config URL provided May 17 00:35:43.618893 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:35:43.618907 ignition[856]: no config at "/usr/lib/ignition/user.ign" May 17 00:35:43.618920 ignition[856]: failed to fetch config: resource requires networking May 17 00:35:43.623560 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:35:43.620049 ignition[856]: Ignition finished successfully May 17 00:35:43.645442 kernel: kauditd_printk_skb: 18 callbacks suppressed May 17 00:35:43.645492 kernel: audit: type=1130 audit(1747442143.632:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.634373 systemd[1]: Starting ignition-fetch.service... 
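The recurring "parsing config with SHA512: 4824fd4a..." lines above are simply the SHA-512 hex digest of the exact config bytes Ignition is about to parse (here, the stock base.ign, hence the identical digest at every stage). A small Python sketch of the same idea, with a made-up payload for illustration:

```python
import hashlib

def config_digest(raw: bytes) -> str:
    # Ignition logs the SHA-512 hex digest of the raw config bytes it parses
    return hashlib.sha512(raw).hexdigest()

# Hypothetical payload; the digests in the log are of the real base.ign
# and, later, of the userdata fetched from the Azure IMDS endpoint
digest = config_digest(b'{"ignition": {"version": "3.3.0"}}')
```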
May 17 00:35:43.642945 ignition[862]: Ignition 2.14.0 May 17 00:35:43.642952 ignition[862]: Stage: fetch May 17 00:35:43.643058 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:35:43.643083 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:35:43.649063 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:35:43.649216 ignition[862]: parsed url from cmdline: "" May 17 00:35:43.649221 ignition[862]: no config URL provided May 17 00:35:43.649230 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:35:43.649239 ignition[862]: no config at "/usr/lib/ignition/user.ign" May 17 00:35:43.649271 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:35:43.732479 ignition[862]: GET result: OK May 17 00:35:43.732596 ignition[862]: config has been read from IMDS userdata May 17 00:35:43.732630 ignition[862]: parsing config with SHA512: 8ad94fcdb46366225f016af78d73c047300d1ff7faeb5bb45e1be667afe7d84f86a40801e8d5c618828612407eb00e5aa4e6413927a57bcf8050e7d0d4a01a04 May 17 00:35:43.736436 unknown[862]: fetched base config from "system" May 17 00:35:43.736449 unknown[862]: fetched base config from "system" May 17 00:35:43.737188 ignition[862]: fetch: fetch complete May 17 00:35:43.736459 unknown[862]: fetched user config from "azure" May 17 00:35:43.737195 ignition[862]: fetch: fetch passed May 17 00:35:43.759252 kernel: audit: type=1130 audit(1747442143.744:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:43.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.741957 systemd[1]: Finished ignition-fetch.service. May 17 00:35:43.737242 ignition[862]: Ignition finished successfully May 17 00:35:43.755842 systemd[1]: Starting ignition-kargs.service... May 17 00:35:43.766215 ignition[868]: Ignition 2.14.0 May 17 00:35:43.766225 ignition[868]: Stage: kargs May 17 00:35:43.766349 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:35:43.766376 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:35:43.768898 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:35:43.776047 ignition[868]: kargs: kargs passed May 17 00:35:43.776979 ignition[868]: Ignition finished successfully May 17 00:35:43.779671 systemd[1]: Finished ignition-kargs.service. May 17 00:35:43.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.793189 systemd[1]: Starting ignition-disks.service... May 17 00:35:43.796912 kernel: audit: type=1130 audit(1747442143.781:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:43.802018 ignition[874]: Ignition 2.14.0 May 17 00:35:43.802029 ignition[874]: Stage: disks May 17 00:35:43.802172 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:35:43.802207 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:35:43.807167 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:35:43.808499 ignition[874]: disks: disks passed May 17 00:35:43.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.809816 systemd[1]: Finished ignition-disks.service. May 17 00:35:43.829272 kernel: audit: type=1130 audit(1747442143.811:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.808540 ignition[874]: Ignition finished successfully May 17 00:35:43.811980 systemd[1]: Reached target initrd-root-device.target. May 17 00:35:43.825420 systemd[1]: Reached target local-fs-pre.target. May 17 00:35:43.829251 systemd[1]: Reached target local-fs.target. May 17 00:35:43.832504 systemd[1]: Reached target sysinit.target. May 17 00:35:43.840910 systemd[1]: Reached target basic.target. May 17 00:35:43.843983 systemd[1]: Starting systemd-fsck-root.service... May 17 00:35:43.916799 systemd-fsck[882]: ROOT: clean, 619/7326000 files, 481079/7359488 blocks May 17 00:35:43.921341 systemd[1]: Finished systemd-fsck-root.service. May 17 00:35:43.936241 kernel: audit: type=1130 audit(1747442143.923:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:43.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:43.934340 systemd[1]: Mounting sysroot.mount... May 17 00:35:43.953042 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:35:43.953614 systemd[1]: Mounted sysroot.mount. May 17 00:35:43.957071 systemd[1]: Reached target initrd-root-fs.target. May 17 00:35:44.001062 systemd[1]: Mounting sysroot-usr.mount... May 17 00:35:44.006424 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:35:44.010489 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:35:44.011277 systemd[1]: Reached target ignition-diskful.target. May 17 00:35:44.019144 systemd[1]: Mounted sysroot-usr.mount. May 17 00:35:44.070121 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:35:44.075782 systemd[1]: Starting initrd-setup-root.service... May 17 00:35:44.090893 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (893) May 17 00:35:44.099522 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:35:44.099559 kernel: BTRFS info (device sda6): using free space tree May 17 00:35:44.099575 kernel: BTRFS info (device sda6): has skinny extents May 17 00:35:44.107251 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:35:44.137044 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:35:44.160626 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory May 17 00:35:44.181605 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:35:44.187902 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:35:44.735343 systemd[1]: Finished initrd-setup-root.service. May 17 00:35:44.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:44.748498 systemd[1]: Starting ignition-mount.service... May 17 00:35:44.751187 kernel: audit: type=1130 audit(1747442144.736:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:44.753680 systemd[1]: Starting sysroot-boot.service... May 17 00:35:44.764058 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:35:44.764179 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:35:44.779058 systemd[1]: Finished sysroot-boot.service. May 17 00:35:44.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:44.792899 kernel: audit: type=1130 audit(1747442144.781:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:44.801633 ignition[961]: INFO : Ignition 2.14.0 May 17 00:35:44.801633 ignition[961]: INFO : Stage: mount May 17 00:35:44.804950 ignition[961]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:35:44.804950 ignition[961]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:35:44.812559 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:35:44.815428 ignition[961]: INFO : mount: mount passed May 17 00:35:44.815428 ignition[961]: INFO : Ignition finished successfully May 17 00:35:44.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:44.816051 systemd[1]: Finished ignition-mount.service. May 17 00:35:44.832794 kernel: audit: type=1130 audit(1747442144.817:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:45.708223 coreos-metadata[892]: May 17 00:35:45.708 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:35:45.726209 coreos-metadata[892]: May 17 00:35:45.726 INFO Fetch successful May 17 00:35:45.759570 coreos-metadata[892]: May 17 00:35:45.759 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:35:45.775727 coreos-metadata[892]: May 17 00:35:45.775 INFO Fetch successful May 17 00:35:45.791104 coreos-metadata[892]: May 17 00:35:45.791 INFO wrote hostname ci-3510.3.7-n-ec5807f93e to /sysroot/etc/hostname May 17 00:35:45.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:45.793288 systemd[1]: Finished flatcar-metadata-hostname.service. May 17 00:35:45.815809 kernel: audit: type=1130 audit(1747442145.796:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:45.798625 systemd[1]: Starting ignition-files.service... May 17 00:35:45.819026 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:35:45.833898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (971) May 17 00:35:45.833931 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:35:45.843242 kernel: BTRFS info (device sda6): using free space tree May 17 00:35:45.843610 kernel: BTRFS info (device sda6): has skinny extents May 17 00:35:45.852955 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:35:45.865785 ignition[990]: INFO : Ignition 2.14.0
May 17 00:35:45.865785 ignition[990]: INFO : Stage: files
May 17 00:35:45.869644 ignition[990]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:45.869644 ignition[990]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:35:45.883823 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:35:45.904610 ignition[990]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:35:45.908719 ignition[990]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:35:45.908719 ignition[990]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:35:45.951107 ignition[990]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:35:45.954625 ignition[990]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:35:45.968319 unknown[990]: wrote ssh authorized keys file for user: core
May 17 00:35:45.970949 ignition[990]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:35:45.974158 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:35:45.978482 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:35:46.018383 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:35:46.075639 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:35:46.080822 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:35:46.080822 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:35:46.597762 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:35:46.647294 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
May 17 00:35:46.654351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1327934529"
May 17 00:35:46.714407 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1327934529": device or resource busy
May 17 00:35:46.714407 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1327934529", trying btrfs: device or resource busy
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1327934529"
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1327934529"
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1327934529"
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1327934529"
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem832900688"
May 17 00:35:46.714407 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem832900688": device or resource busy
May 17 00:35:46.714407 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem832900688", trying btrfs: device or resource busy
May 17 00:35:46.714407 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem832900688"
May 17 00:35:46.672267 systemd[1]: mnt-oem1327934529.mount: Deactivated successfully.
May 17 00:35:46.785863 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem832900688"
May 17 00:35:46.785863 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem832900688"
May 17 00:35:46.785863 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem832900688"
May 17 00:35:46.785863 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:35:46.785863 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:35:46.785863 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:35:46.697172 systemd[1]: mnt-oem832900688.mount: Deactivated successfully.
May 17 00:35:47.459743 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
May 17 00:35:47.654714 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:35:47.654714 ignition[990]: INFO : files: op(14): [started] processing unit "nvidia.service"
May 17 00:35:47.654714 ignition[990]: INFO : files: op(14): [finished] processing unit "nvidia.service"
May 17 00:35:47.654714 ignition[990]: INFO : files: op(15): [started] processing unit "waagent.service"
May 17 00:35:47.654714 ignition[990]: INFO : files: op(15): [finished] processing unit "waagent.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service"
May 17 00:35:47.671463 ignition[990]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:35:47.700075 ignition[990]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:35:47.700075 ignition[990]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:35:47.700075 ignition[990]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:35:47.711144 ignition[990]: INFO : files: files passed
May 17 00:35:47.711144 ignition[990]: INFO : Ignition finished successfully
May 17 00:35:47.716521 systemd[1]: Finished ignition-files.service.
May 17 00:35:47.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.730975 kernel: audit: type=1130 audit(1747442147.718:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.731189 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:35:47.733304 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:35:47.734306 systemd[1]: Starting ignition-quench.service...
May 17 00:35:47.740400 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:35:47.740505 systemd[1]: Finished ignition-quench.service.
May 17 00:35:47.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.768416 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:35:47.770409 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:35:47.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.775307 systemd[1]: Reached target ignition-complete.target.
May 17 00:35:47.780124 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:35:47.795823 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:35:47.798042 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:35:47.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.801777 systemd[1]: Reached target initrd-fs.target.
May 17 00:35:47.805117 systemd[1]: Reached target initrd.target.
May 17 00:35:47.808407 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:35:47.811754 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:35:47.822277 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:35:47.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.826333 systemd[1]: Starting initrd-cleanup.service...
May 17 00:35:47.836080 systemd[1]: Stopped target nss-lookup.target.
May 17 00:35:47.837880 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:35:47.841403 systemd[1]: Stopped target timers.target.
May 17 00:35:47.844787 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:35:47.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.844942 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:35:47.848619 systemd[1]: Stopped target initrd.target.
May 17 00:35:47.852187 systemd[1]: Stopped target basic.target.
May 17 00:35:47.855443 systemd[1]: Stopped target ignition-complete.target.
May 17 00:35:47.858765 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:35:47.862083 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:35:47.866195 systemd[1]: Stopped target remote-fs.target.
May 17 00:35:47.870092 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:35:47.873595 systemd[1]: Stopped target sysinit.target.
May 17 00:35:47.876789 systemd[1]: Stopped target local-fs.target.
May 17 00:35:47.880293 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:35:47.883679 systemd[1]: Stopped target swap.target.
May 17 00:35:47.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.886670 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:35:47.886812 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:35:47.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.890249 systemd[1]: Stopped target cryptsetup.target.
May 17 00:35:47.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.893214 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:35:47.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.893364 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:35:47.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.897392 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:35:47.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.927735 iscsid[834]: iscsid shutting down.
May 17 00:35:47.897523 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:35:47.901222 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:35:47.933847 ignition[1028]: INFO : Ignition 2.14.0
May 17 00:35:47.933847 ignition[1028]: INFO : Stage: umount
May 17 00:35:47.933847 ignition[1028]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:35:47.933847 ignition[1028]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:35:47.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.901349 systemd[1]: Stopped ignition-files.service.
May 17 00:35:47.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.954062 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:35:47.954062 ignition[1028]: INFO : umount: umount passed
May 17 00:35:47.954062 ignition[1028]: INFO : Ignition finished successfully
May 17 00:35:47.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.904630 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:35:47.904758 systemd[1]: Stopped flatcar-metadata-hostname.service.
May 17 00:35:47.909503 systemd[1]: Stopping ignition-mount.service...
May 17 00:35:47.912165 systemd[1]: Stopping iscsid.service...
May 17 00:35:47.914699 systemd[1]: Stopping sysroot-boot.service...
May 17 00:35:47.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.916281 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:35:47.916444 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:35:47.918675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:35:47.918842 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:35:47.923011 systemd[1]: iscsid.service: Deactivated successfully.
May 17 00:35:47.923136 systemd[1]: Stopped iscsid.service.
May 17 00:35:47.938315 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:35:47.938415 systemd[1]: Finished initrd-cleanup.service.
May 17 00:35:47.948308 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:35:47.948394 systemd[1]: Stopped ignition-mount.service.
May 17 00:35:47.951332 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:35:47.951385 systemd[1]: Stopped ignition-disks.service.
May 17 00:35:47.953983 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:35:47.954036 systemd[1]: Stopped ignition-kargs.service.
May 17 00:35:47.955754 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:35:47.955794 systemd[1]: Stopped ignition-fetch.service.
May 17 00:35:47.960399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:35:47.960449 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:35:47.964145 systemd[1]: Stopped target paths.target.
May 17 00:35:47.967574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:35:47.973326 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:35:47.976594 systemd[1]: Stopped target slices.target.
May 17 00:35:48.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:47.978145 systemd[1]: Stopped target sockets.target.
May 17 00:35:47.979771 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:35:47.979809 systemd[1]: Closed iscsid.socket.
May 17 00:35:47.982976 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:35:47.983028 systemd[1]: Stopped ignition-setup.service.
May 17 00:35:47.986500 systemd[1]: Stopping iscsiuio.service...
May 17 00:35:47.992577 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:35:48.019162 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:35:48.022683 systemd[1]: Stopped iscsiuio.service.
May 17 00:35:48.030765 systemd[1]: Stopped target network.target.
May 17 00:35:48.049274 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:35:48.049335 systemd[1]: Closed iscsiuio.socket.
May 17 00:35:48.054557 systemd[1]: Stopping systemd-networkd.service...
May 17 00:35:48.056231 systemd[1]: Stopping systemd-resolved.service...
May 17 00:35:48.061927 systemd-networkd[829]: eth0: DHCPv6 lease lost
May 17 00:35:48.065640 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:35:48.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.071000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:35:48.065718 systemd[1]: Stopped systemd-networkd.service.
May 17 00:35:48.069668 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:35:48.069698 systemd[1]: Closed systemd-networkd.socket.
May 17 00:35:48.072534 systemd[1]: Stopping network-cleanup.service...
May 17 00:35:48.075757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:35:48.075820 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:35:48.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.086339 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:35:48.086390 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:35:48.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.091727 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:35:48.091776 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:35:48.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.097376 systemd[1]: Stopping systemd-udevd.service...
May 17 00:35:48.101693 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:35:48.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.102211 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:35:48.102303 systemd[1]: Stopped systemd-resolved.service.
May 17 00:35:48.108811 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:35:48.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.113000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:35:48.108996 systemd[1]: Stopped systemd-udevd.service.
May 17 00:35:48.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.114112 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:35:48.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.114149 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:35:48.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.116486 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:35:48.116535 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:35:48.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.118701 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:35:48.118749 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:35:48.122355 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:35:48.122401 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:35:48.126417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:35:48.126465 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:35:48.130621 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:35:48.163698 kernel: hv_netvsc 7c1e5204-badd-7c1e-5204-badd7c1e5204 eth0: Data path switched from VF: enP23362s1
May 17 00:35:48.134140 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:35:48.134205 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 00:35:48.136402 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:35:48.136452 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:35:48.138432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:35:48.138484 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:35:48.141310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:35:48.141771 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:35:48.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.141847 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:35:48.180386 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:35:48.180480 systemd[1]: Stopped network-cleanup.service.
May 17 00:35:48.280681 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:35:48.280814 systemd[1]: Stopped sysroot-boot.service.
May 17 00:35:48.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.287400 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:35:48.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:35:48.289630 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:35:48.289691 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:35:48.294026 systemd[1]: Starting initrd-switch-root.service...
May 17 00:35:48.306804 systemd[1]: Switching root.
May 17 00:35:48.328605 systemd-journald[183]: Journal stopped
May 17 00:36:04.604423 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
May 17 00:36:04.604451 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:36:04.604464 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:36:04.604474 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:36:04.604484 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:36:04.604493 kernel: SELinux: policy capability open_perms=1
May 17 00:36:04.604506 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:36:04.604515 kernel: SELinux: policy capability always_check_network=0
May 17 00:36:04.604525 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:36:04.604537 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:36:04.604545 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:36:04.604556 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:36:04.604565 kernel: kauditd_printk_skb: 43 callbacks suppressed
May 17 00:36:04.604575 kernel: audit: type=1403 audit(1747442150.899:82): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:36:04.604590 systemd[1]: Successfully loaded SELinux policy in 373.584ms.
May 17 00:36:04.604603 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.531ms.
May 17 00:36:04.604613 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:36:04.604626 systemd[1]: Detected virtualization microsoft.
May 17 00:36:04.604639 systemd[1]: Detected architecture x86-64.
May 17 00:36:04.604648 systemd[1]: Detected first boot.
May 17 00:36:04.604664 systemd[1]: Hostname set to .
May 17 00:36:04.604676 systemd[1]: Initializing machine ID from random generator.
May 17 00:36:04.604686 kernel: audit: type=1400 audit(1747442151.761:83): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:36:04.604699 kernel: audit: type=1400 audit(1747442151.778:84): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:36:04.604711 kernel: audit: type=1400 audit(1747442151.778:85): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:36:04.604721 kernel: audit: type=1334 audit(1747442151.801:86): prog-id=10 op=LOAD
May 17 00:36:04.604732 kernel: audit: type=1334 audit(1747442151.801:87): prog-id=10 op=UNLOAD
May 17 00:36:04.604744 kernel: audit: type=1334 audit(1747442151.806:88): prog-id=11 op=LOAD
May 17 00:36:04.604753 kernel: audit: type=1334 audit(1747442151.806:89): prog-id=11 op=UNLOAD
May 17 00:36:04.604765 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:36:04.604776 kernel: audit: type=1400 audit(1747442153.471:90): avc: denied { associate } for pid=1063 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:36:04.604786 kernel: audit: type=1300 audit(1747442153.471:90): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d382 a1=c0000ce6f0 a2=c0000d6c00 a3=32 items=0 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:04.604800 systemd[1]: Populated /etc with preset unit settings.
May 17 00:36:04.604812 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:36:04.604822 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:36:04.604836 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:36:04.604848 kernel: kauditd_printk_skb: 7 callbacks suppressed
May 17 00:36:04.604856 kernel: audit: type=1334 audit(1747442164.012:92): prog-id=12 op=LOAD
May 17 00:36:04.604868 kernel: audit: type=1334 audit(1747442164.012:93): prog-id=3 op=UNLOAD
May 17 00:36:04.604887 kernel: audit: type=1334 audit(1747442164.017:94): prog-id=13 op=LOAD
May 17 00:36:04.604902 kernel: audit: type=1334 audit(1747442164.023:95): prog-id=14 op=LOAD
May 17 00:36:04.604915 kernel: audit: type=1334 audit(1747442164.023:96): prog-id=4 op=UNLOAD
May 17 00:36:04.604924 kernel: audit: type=1334 audit(1747442164.023:97): prog-id=5 op=UNLOAD
May 17 00:36:04.604936 kernel: audit: type=1334 audit(1747442164.030:98): prog-id=15 op=LOAD
May 17 00:36:04.604948 kernel: audit: type=1334 audit(1747442164.030:99): prog-id=12 op=UNLOAD
May 17 00:36:04.604957 kernel: audit: type=1334 audit(1747442164.036:100): prog-id=16 op=LOAD
May 17 00:36:04.604969 kernel: audit: type=1334 audit(1747442164.058:101): prog-id=17 op=LOAD
May 17 00:36:04.604980 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:36:04.604995 systemd[1]: Stopped initrd-switch-root.service.
May 17 00:36:04.605013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:36:04.605032 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:36:04.605052 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:36:04.605072 systemd[1]: Created slice system-getty.slice.
May 17 00:36:04.605088 systemd[1]: Created slice system-modprobe.slice.
May 17 00:36:04.605108 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:36:04.605126 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:36:04.605146 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:36:04.605163 systemd[1]: Created slice user.slice.
May 17 00:36:04.605181 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:36:04.605202 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:36:04.605219 systemd[1]: Set up automount boot.automount.
May 17 00:36:04.605240 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:36:04.605259 systemd[1]: Stopped target initrd-switch-root.target.
May 17 00:36:04.605277 systemd[1]: Stopped target initrd-fs.target.
May 17 00:36:04.605296 systemd[1]: Stopped target initrd-root-fs.target.
May 17 00:36:04.605316 systemd[1]: Reached target integritysetup.target.
May 17 00:36:04.605334 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:36:04.605354 systemd[1]: Reached target remote-fs.target.
May 17 00:36:04.605371 systemd[1]: Reached target slices.target.
May 17 00:36:04.605389 systemd[1]: Reached target swap.target.
May 17 00:36:04.605409 systemd[1]: Reached target torcx.target.
May 17 00:36:04.605427 systemd[1]: Reached target veritysetup.target.
May 17 00:36:04.605451 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:36:04.605470 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:36:04.605489 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:36:04.605507 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:36:04.605527 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:36:04.605549 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:36:04.605571 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:36:04.605591 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:36:04.605609 systemd[1]: Mounting media.mount...
May 17 00:36:04.605628 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:36:04.605648 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:36:04.605668 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:36:04.605687 systemd[1]: Mounting tmp.mount...
May 17 00:36:04.605706 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:36:04.605727 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:36:04.605747 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:36:04.605766 systemd[1]: Starting modprobe@configfs.service...
May 17 00:36:04.605788 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:36:04.605807 systemd[1]: Starting modprobe@drm.service...
May 17 00:36:04.605826 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:36:04.605843 systemd[1]: Starting modprobe@fuse.service...
May 17 00:36:04.605861 systemd[1]: Starting modprobe@loop.service...
May 17 00:36:04.605888 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:36:04.605906 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:36:04.605920 systemd[1]: Stopped systemd-fsck-root.service.
May 17 00:36:04.605935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:36:04.605950 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:36:04.605965 systemd[1]: Stopped systemd-journald.service.
May 17 00:36:04.605981 systemd[1]: Starting systemd-journald.service...
May 17 00:36:04.605994 systemd[1]: Starting systemd-modules-load.service...
May 17 00:36:04.610124 systemd[1]: Starting systemd-network-generator.service...
May 17 00:36:04.610157 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:36:04.610170 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:36:04.610182 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:36:04.610195 systemd[1]: Stopped verity-setup.service.
May 17 00:36:04.610208 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:36:04.610220 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:36:04.610230 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:36:04.610243 kernel: loop: module loaded
May 17 00:36:04.610256 systemd[1]: Mounted media.mount.
May 17 00:36:04.610268 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:36:04.610281 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:36:04.610291 systemd[1]: Mounted tmp.mount.
May 17 00:36:04.610304 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:36:04.610314 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:36:04.610327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:36:04.610347 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:36:04.610360 kernel: fuse: init (API version 7.34)
May 17 00:36:04.610371 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:36:04.610383 systemd[1]: Finished modprobe@drm.service.
May 17 00:36:04.610397 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:36:04.610412 systemd[1]: Finished modprobe@configfs.service.
May 17 00:36:04.610423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:36:04.610436 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:36:04.610449 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:36:04.610460 systemd[1]: Finished modprobe@fuse.service.
May 17 00:36:04.610472 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:36:04.610485 systemd[1]: Finished modprobe@loop.service.
May 17 00:36:04.610496 systemd[1]: Finished systemd-modules-load.service.
May 17 00:36:04.610508 systemd[1]: Finished systemd-network-generator.service.
May 17 00:36:04.610522 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:36:04.610533 systemd[1]: Reached target network-pre.target.
May 17 00:36:04.610544 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:36:04.610557 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:36:04.610570 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:36:04.610585 systemd-journald[1144]: Journal started
May 17 00:36:04.610640 systemd-journald[1144]: Runtime Journal (/run/log/journal/ed0fbc03f9a5408d9b349aa9b4ffd5b2) is 8.0M, max 159.0M, 151.0M free.
May 17 00:35:50.899000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:35:51.761000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:35:51.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:35:51.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:35:51.801000 audit: BPF prog-id=10 op=LOAD
May 17 00:35:51.801000 audit: BPF prog-id=10 op=UNLOAD
May 17 00:35:51.806000 audit: BPF prog-id=11 op=LOAD
May 17 00:35:51.806000 audit: BPF prog-id=11 op=UNLOAD
May 17 00:35:53.471000 audit[1063]: AVC avc: denied { associate } for pid=1063 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:35:53.471000 audit[1063]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d382 a1=c0000ce6f0 a2=c0000d6c00 a3=32 items=0 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:35:53.471000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:35:53.477000 audit[1063]: AVC avc: denied { associate } for pid=1063 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:35:53.477000 audit[1063]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d459 a2=1ed a3=0 items=2 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:35:53.477000 audit: CWD cwd="/"
May 17 00:35:53.477000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:35:53.477000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:35:53.477000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:36:04.012000 audit: BPF prog-id=12 op=LOAD
May 17 00:36:04.012000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:36:04.017000 audit: BPF prog-id=13 op=LOAD
May 17 00:36:04.023000 audit: BPF prog-id=14 op=LOAD
May 17 00:36:04.023000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:36:04.023000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:36:04.030000 audit: BPF prog-id=15 op=LOAD
May 17 00:36:04.030000 audit: BPF prog-id=12 op=UNLOAD
May 17 00:36:04.036000 audit: BPF prog-id=16 op=LOAD
May 17 00:36:04.058000 audit: BPF prog-id=17 op=LOAD
May 17 00:36:04.058000 audit: BPF prog-id=13 op=UNLOAD
May 17 00:36:04.058000 audit: BPF prog-id=14 op=UNLOAD
May 17 00:36:04.064000 audit: BPF prog-id=18 op=LOAD
May 17 00:36:04.064000 audit: BPF prog-id=15 op=UNLOAD
May 17 00:36:04.069000 audit: BPF prog-id=19 op=LOAD
May 17 00:36:04.073000 audit: BPF prog-id=20 op=LOAD
May 17 00:36:04.073000 audit: BPF prog-id=16 op=UNLOAD
May 17 00:36:04.073000 audit: BPF prog-id=17 op=UNLOAD
May 17 00:36:04.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.085000 audit: BPF prog-id=18 op=UNLOAD
May 17 00:36:04.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.408000 audit: BPF prog-id=21 op=LOAD
May 17 00:36:04.408000 audit: BPF prog-id=22 op=LOAD
May 17 00:36:04.408000 audit: BPF prog-id=23 op=LOAD
May 17 00:36:04.408000 audit: BPF prog-id=19 op=UNLOAD
May 17 00:36:04.408000 audit: BPF prog-id=20 op=UNLOAD
May 17 00:36:04.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.601000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:36:04.601000 audit[1144]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff7b2b84e0 a2=4000 a3=7fff7b2b857c items=0 ppid=1 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:36:04.601000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:36:04.011908 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:35:53.407009 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:36:04.011921 systemd[1]: Unnecessary job was removed for dev-sda6.device.
May 17 00:35:53.423244 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:36:04.074580 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:35:53.423271 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:35:53.423314 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 17 00:35:53.423375 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 17 00:35:53.423462 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 17 00:35:53.423481 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 17 00:35:53.423895 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 17 00:35:53.423953 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:35:53.423968 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:35:53.455842 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 17 00:35:53.455923 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 17 00:35:53.455955 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 17 00:35:53.455979 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 17 00:35:53.456002 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 17 00:35:53.456020 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:35:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 17 00:36:02.660096 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:36:02Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:36:02.660331 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:36:02Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:36:02.660425 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:36:02Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:36:02.660600 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:36:02Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:36:02.660645 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:36:02Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 17 00:36:02.660701 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-05-17T00:36:02Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 17 00:36:04.636150 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:36:04.645897 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:36:04.652025 systemd[1]: Starting systemd-random-seed.service...
May 17 00:36:04.660053 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:36:04.664019 systemd[1]: Starting systemd-sysctl.service...
May 17 00:36:04.672108 systemd[1]: Starting systemd-sysusers.service...
May 17 00:36:04.680171 systemd[1]: Started systemd-journald.service.
May 17 00:36:04.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.681399 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:36:04.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.683914 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:36:04.686163 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:36:04.688509 systemd[1]: Finished systemd-random-seed.service.
May 17 00:36:04.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.690946 systemd[1]: Reached target first-boot-complete.target.
May 17 00:36:04.693965 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:36:04.696930 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:36:04.706301 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:36:04.709576 systemd[1]: Finished systemd-sysctl.service.
May 17 00:36:04.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:04.734990 systemd-journald[1144]: Time spent on flushing to /var/log/journal/ed0fbc03f9a5408d9b349aa9b4ffd5b2 is 22.706ms for 1172 entries.
May 17 00:36:04.734990 systemd-journald[1144]: System Journal (/var/log/journal/ed0fbc03f9a5408d9b349aa9b4ffd5b2) is 8.0M, max 2.6G, 2.6G free.
May 17 00:36:04.822791 systemd-journald[1144]: Received client request to flush runtime journal.
May 17 00:36:04.824001 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:36:04.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.420755 systemd[1]: Finished systemd-sysusers.service.
May 17 00:36:05.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.424611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:36:05.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.736139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:36:05.958234 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:36:05.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:05.961000 audit: BPF prog-id=24 op=LOAD
May 17 00:36:05.961000 audit: BPF prog-id=25 op=LOAD
May 17 00:36:05.961000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:36:05.961000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:36:05.963646 systemd[1]: Starting systemd-udevd.service...
May 17 00:36:05.985174 systemd-udevd[1191]: Using default interface naming scheme 'v252'.
May 17 00:36:06.206123 systemd[1]: Started systemd-udevd.service.
May 17 00:36:06.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:36:06.209000 audit: BPF prog-id=26 op=LOAD
May 17 00:36:06.211643 systemd[1]: Starting systemd-networkd.service...
May 17 00:36:06.252014 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
May 17 00:36:06.323968 kernel: hv_vmbus: registering driver hyperv_fb
May 17 00:36:06.342926 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:36:06.350714 kernel: hv_utils: Registering HyperV Utility Driver
May 17 00:36:06.350776 kernel: hv_vmbus: registering driver hv_utils
May 17 00:36:06.365293 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 17 00:36:06.365382 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 17 00:36:06.370780 kernel: Console: switching to colour dummy device 80x25
May 17 00:36:07.606479 kernel: hv_utils: Shutdown IC version 3.2
May 17 00:36:07.606543 kernel: hv_utils: Heartbeat IC version 3.0
May 17 00:36:07.606581 kernel: hv_utils: TimeSync IC version 4.0
May 17 00:36:06.327000 audit[1201]: AVC avc: denied { confidentiality } for pid=1201 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:36:07.617391 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:36:07.629958 kernel: hv_vmbus: registering driver hv_balloon
May 17 00:36:07.630052 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 17 00:36:07.624000 audit: BPF prog-id=27 op=LOAD
May 17 00:36:07.628000 audit: BPF prog-id=28 op=LOAD
May 17 00:36:07.628000 audit: BPF prog-id=29 op=LOAD
May 17 00:36:07.629845 systemd[1]: Starting systemd-userdbd.service...
May 17 00:36:06.327000 audit[1201]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56540ddc4ed0 a1=f884 a2=7f933dc58bc5 a3=5 items=12 ppid=1191 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:06.327000 audit: CWD cwd="/" May 17 00:36:06.327000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=1 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=2 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=3 name=(null) inode=15508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=4 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=5 name=(null) inode=15509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=6 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=7 name=(null) inode=15510 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=8 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=9 name=(null) inode=15511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=10 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PATH item=11 name=(null) inode=15512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:36:06.327000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:36:07.703045 systemd[1]: Started systemd-userdbd.service. May 17 00:36:07.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:07.928365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:36:07.950394 kernel: KVM: vmx: using Hyper-V Enlightened VMCS May 17 00:36:07.983122 systemd[1]: Finished systemd-udev-settle.service. May 17 00:36:07.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:07.987132 systemd[1]: Starting lvm2-activation-early.service... 
May 17 00:36:08.088712 systemd-networkd[1198]: lo: Link UP May 17 00:36:08.088723 systemd-networkd[1198]: lo: Gained carrier May 17 00:36:08.089286 systemd-networkd[1198]: Enumeration completed May 17 00:36:08.089453 systemd[1]: Started systemd-networkd.service. May 17 00:36:08.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.092917 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:36:08.119923 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:36:08.177384 kernel: mlx5_core 5b42:00:02.0 enP23362s1: Link up May 17 00:36:08.200381 kernel: hv_netvsc 7c1e5204-badd-7c1e-5204-badd7c1e5204 eth0: Data path switched to VF: enP23362s1 May 17 00:36:08.200719 systemd-networkd[1198]: enP23362s1: Link UP May 17 00:36:08.200931 systemd-networkd[1198]: eth0: Link UP May 17 00:36:08.200945 systemd-networkd[1198]: eth0: Gained carrier May 17 00:36:08.205639 systemd-networkd[1198]: enP23362s1: Gained carrier May 17 00:36:08.229489 systemd-networkd[1198]: eth0: DHCPv4 address 10.200.4.16/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:36:08.357867 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:36:08.388608 systemd[1]: Finished lvm2-activation-early.service. May 17 00:36:08.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.391400 systemd[1]: Reached target cryptsetup.target. May 17 00:36:08.395056 systemd[1]: Starting lvm2-activation.service... May 17 00:36:08.399410 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:36:08.430547 systemd[1]: Finished lvm2-activation.service. May 17 00:36:08.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:08.433306 systemd[1]: Reached target local-fs-pre.target. May 17 00:36:08.435589 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:36:08.435633 systemd[1]: Reached target local-fs.target. May 17 00:36:08.437721 systemd[1]: Reached target machines.target. May 17 00:36:08.440985 systemd[1]: Starting ldconfig.service... May 17 00:36:08.461241 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:08.461331 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:08.462767 systemd[1]: Starting systemd-boot-update.service... May 17 00:36:08.466541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:36:08.470225 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:36:08.473403 systemd[1]: Starting systemd-sysext.service... May 17 00:36:09.361600 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1272 (bootctl) May 17 00:36:09.365170 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:36:09.371074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:36:09.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:09.383063 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:36:09.493413 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:36:09.493698 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:36:09.541376 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:36:09.552522 systemd-networkd[1198]: eth0: Gained IPv6LL May 17 00:36:09.559179 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:36:09.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.588372 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:36:09.609381 kernel: loop1: detected capacity change from 0 to 224512 May 17 00:36:09.615809 (sd-sysext)[1284]: Using extensions 'kubernetes'. May 17 00:36:09.616264 (sd-sysext)[1284]: Merged extensions into '/usr'. May 17 00:36:09.633081 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:36:09.633891 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:36:09.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.636555 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:09.637995 systemd[1]: Mounting usr-share-oem.mount... May 17 00:36:09.640268 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:09.642130 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:09.645400 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:36:09.647770 systemd[1]: Starting modprobe@loop.service... 
May 17 00:36:09.648763 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:09.648908 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:09.649073 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:09.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.653612 systemd[1]: Mounted usr-share-oem.mount. May 17 00:36:09.655721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:09.655826 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:36:09.658390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:36:09.658496 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:36:09.661144 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:36:09.661248 systemd[1]: Finished modprobe@loop.service. 
May 17 00:36:09.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.664818 systemd[1]: Finished systemd-sysext.service. May 17 00:36:09.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.668834 systemd[1]: Starting ensure-sysext.service... May 17 00:36:09.670712 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:36:09.670781 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:36:09.672052 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:36:09.682870 systemd[1]: Reloading. May 17 00:36:09.728229 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:36:09.737215 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2025-05-17T00:36:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:36:09.745338 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
May 17 00:36:09.747746 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2025-05-17T00:36:09Z" level=info msg="torcx already run" May 17 00:36:09.763703 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:36:09.837473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:36:09.837494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:36:09.853991 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:36:09.918000 audit: BPF prog-id=30 op=LOAD May 17 00:36:09.918000 audit: BPF prog-id=27 op=UNLOAD May 17 00:36:09.918000 audit: BPF prog-id=31 op=LOAD May 17 00:36:09.919000 audit: BPF prog-id=32 op=LOAD May 17 00:36:09.919000 audit: BPF prog-id=28 op=UNLOAD May 17 00:36:09.919000 audit: BPF prog-id=29 op=UNLOAD May 17 00:36:09.920000 audit: BPF prog-id=33 op=LOAD May 17 00:36:09.920000 audit: BPF prog-id=34 op=LOAD May 17 00:36:09.920000 audit: BPF prog-id=24 op=UNLOAD May 17 00:36:09.920000 audit: BPF prog-id=25 op=UNLOAD May 17 00:36:09.921000 audit: BPF prog-id=35 op=LOAD May 17 00:36:09.922000 audit: BPF prog-id=21 op=UNLOAD May 17 00:36:09.922000 audit: BPF prog-id=36 op=LOAD May 17 00:36:09.922000 audit: BPF prog-id=37 op=LOAD May 17 00:36:09.922000 audit: BPF prog-id=22 op=UNLOAD May 17 00:36:09.922000 audit: BPF prog-id=23 op=UNLOAD May 17 00:36:09.923000 audit: BPF prog-id=38 op=LOAD May 17 00:36:09.923000 audit: BPF prog-id=26 op=UNLOAD May 17 00:36:09.936680 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:36:09.936947 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:09.938315 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:09.940963 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:36:09.945103 systemd[1]: Starting modprobe@loop.service... May 17 00:36:09.946032 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:09.946188 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:09.946329 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:09.947791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:09.947963 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:36:09.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:09.949397 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:36:09.949504 systemd[1]: Finished modprobe@loop.service. May 17 00:36:09.953619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:36:09.953761 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:36:09.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.956472 systemd[1]: Finished ensure-sysext.service. May 17 00:36:09.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.958198 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:09.958611 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:36:09.959543 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:36:09.961674 systemd[1]: Starting modprobe@drm.service... May 17 00:36:09.963970 systemd[1]: Starting modprobe@loop.service... May 17 00:36:09.967292 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:36:09.967399 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:36:09.967495 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:36:09.967595 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:36:09.968183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:36:09.968389 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:36:09.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.969464 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:36:09.969561 systemd[1]: Finished modprobe@loop.service. May 17 00:36:09.969717 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:36:09.970699 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:36:09.970807 systemd[1]: Finished modprobe@drm.service. May 17 00:36:10.182985 systemd-fsck[1280]: fsck.fat 4.2 (2021-01-31) May 17 00:36:10.182985 systemd-fsck[1280]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:36:10.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.185045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:36:10.189582 systemd[1]: Mounting boot.mount... May 17 00:36:10.200831 systemd[1]: Mounted boot.mount. May 17 00:36:10.216239 systemd[1]: Finished systemd-boot-update.service. May 17 00:36:10.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.382381 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:36:10.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.387603 systemd[1]: Starting audit-rules.service... May 17 00:36:10.388825 kernel: kauditd_printk_skb: 123 callbacks suppressed May 17 00:36:10.388868 kernel: audit: type=1130 audit(1747442170.384:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:36:10.401262 systemd[1]: Starting clean-ca-certificates.service... May 17 00:36:10.404519 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:36:10.406000 audit: BPF prog-id=39 op=LOAD May 17 00:36:10.411413 kernel: audit: type=1334 audit(1747442170.406:209): prog-id=39 op=LOAD May 17 00:36:10.408927 systemd[1]: Starting systemd-resolved.service... May 17 00:36:10.412000 audit: BPF prog-id=40 op=LOAD May 17 00:36:10.415060 systemd[1]: Starting systemd-timesyncd.service... May 17 00:36:10.418925 kernel: audit: type=1334 audit(1747442170.412:210): prog-id=40 op=LOAD May 17 00:36:10.420523 systemd[1]: Starting systemd-update-utmp.service... May 17 00:36:10.430000 audit[1390]: SYSTEM_BOOT pid=1390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:36:10.432926 systemd[1]: Finished systemd-update-utmp.service. May 17 00:36:10.442867 kernel: audit: type=1127 audit(1747442170.430:211): pid=1390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:36:10.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.454388 kernel: audit: type=1130 audit(1747442170.442:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:10.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.485812 systemd[1]: Finished clean-ca-certificates.service. May 17 00:36:10.488674 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:36:10.500369 kernel: audit: type=1130 audit(1747442170.487:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.582094 systemd[1]: Started systemd-timesyncd.service. May 17 00:36:10.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.584705 systemd[1]: Reached target time-set.target. May 17 00:36:10.596415 kernel: audit: type=1130 audit(1747442170.583:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.607623 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:36:10.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:10.622375 kernel: audit: type=1130 audit(1747442170.608:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:10.645126 systemd-resolved[1388]: Positive Trust Anchors: May 17 00:36:10.645146 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:36:10.645185 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:36:10.783000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:36:10.789630 systemd-timesyncd[1389]: Contacted time server 178.62.68.79:123 (0.flatcar.pool.ntp.org). May 17 00:36:10.795933 augenrules[1405]: No rules May 17 00:36:10.789740 systemd-timesyncd[1389]: Initial clock synchronization to Sat 2025-05-17 00:36:10.791662 UTC. May 17 00:36:10.790211 systemd[1]: Finished audit-rules.service. 
May 17 00:36:10.803488 kernel: audit: type=1305 audit(1747442170.783:216): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:36:10.803528 kernel: audit: type=1300 audit(1747442170.783:216): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6b8ba8b0 a2=420 a3=0 items=0 ppid=1384 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:10.783000 audit[1405]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6b8ba8b0 a2=420 a3=0 items=0 ppid=1384 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:10.783000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:36:10.796769 systemd-resolved[1388]: Using system hostname 'ci-3510.3.7-n-ec5807f93e'. May 17 00:36:10.815519 systemd[1]: Started systemd-resolved.service. May 17 00:36:10.817776 systemd[1]: Reached target network.target. May 17 00:36:10.819627 systemd[1]: Reached target network-online.target. May 17 00:36:10.821701 systemd[1]: Reached target nss-lookup.target. May 17 00:36:17.389084 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:36:17.402647 systemd[1]: Finished ldconfig.service. May 17 00:36:17.406249 systemd[1]: Starting systemd-update-done.service... May 17 00:36:17.414792 systemd[1]: Finished systemd-update-done.service. May 17 00:36:17.417185 systemd[1]: Reached target sysinit.target. May 17 00:36:17.419159 systemd[1]: Started motdgen.path. May 17 00:36:17.420876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
May 17 00:36:17.423771 systemd[1]: Started logrotate.timer. May 17 00:36:17.425459 systemd[1]: Started mdadm.timer. May 17 00:36:17.426952 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:36:17.429060 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:36:17.429096 systemd[1]: Reached target paths.target. May 17 00:36:17.430705 systemd[1]: Reached target timers.target. May 17 00:36:17.433412 systemd[1]: Listening on dbus.socket. May 17 00:36:17.436252 systemd[1]: Starting docker.socket... May 17 00:36:17.440527 systemd[1]: Listening on sshd.socket. May 17 00:36:17.442725 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:17.443184 systemd[1]: Listening on docker.socket. May 17 00:36:17.445674 systemd[1]: Reached target sockets.target. May 17 00:36:17.447457 systemd[1]: Reached target basic.target. May 17 00:36:17.449425 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:36:17.449458 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:36:17.450486 systemd[1]: Starting containerd.service... May 17 00:36:17.454369 systemd[1]: Starting dbus.service... May 17 00:36:17.456746 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:36:17.459892 systemd[1]: Starting extend-filesystems.service... May 17 00:36:17.462493 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:36:17.478807 systemd[1]: Starting kubelet.service... May 17 00:36:17.481722 systemd[1]: Starting motdgen.service... May 17 00:36:17.484521 systemd[1]: Started nvidia.service. 
May 17 00:36:17.487941 systemd[1]: Starting prepare-helm.service... May 17 00:36:17.491055 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:36:17.495001 systemd[1]: Starting sshd-keygen.service... May 17 00:36:17.499952 systemd[1]: Starting systemd-logind.service... May 17 00:36:17.503501 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:36:17.503609 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:36:17.504135 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:36:17.505032 systemd[1]: Starting update-engine.service... May 17 00:36:17.508327 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:36:17.520406 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:36:17.520602 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:36:17.548645 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:36:17.548947 systemd[1]: Finished motdgen.service. 
May 17 00:36:17.557514 extend-filesystems[1416]: Found loop1 May 17 00:36:17.559779 extend-filesystems[1416]: Found sda May 17 00:36:17.559779 extend-filesystems[1416]: Found sda1 May 17 00:36:17.559779 extend-filesystems[1416]: Found sda2 May 17 00:36:17.559779 extend-filesystems[1416]: Found sda3 May 17 00:36:17.559779 extend-filesystems[1416]: Found usr May 17 00:36:17.559779 extend-filesystems[1416]: Found sda4 May 17 00:36:17.559779 extend-filesystems[1416]: Found sda6 May 17 00:36:17.559779 extend-filesystems[1416]: Found sda7 May 17 00:36:17.559779 extend-filesystems[1416]: Found sda9 May 17 00:36:17.559779 extend-filesystems[1416]: Checking size of /dev/sda9 May 17 00:36:17.597784 jq[1430]: true May 17 00:36:17.577451 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:36:17.598037 jq[1415]: false May 17 00:36:17.577670 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:36:17.606731 jq[1442]: true May 17 00:36:17.624861 env[1437]: time="2025-05-17T00:36:17.624815686Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:36:17.668743 tar[1433]: linux-amd64/LICENSE May 17 00:36:17.668743 tar[1433]: linux-amd64/helm May 17 00:36:17.677695 extend-filesystems[1416]: Old size kept for /dev/sda9 May 17 00:36:17.680334 extend-filesystems[1416]: Found sr0 May 17 00:36:17.682244 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:36:17.682458 systemd[1]: Finished extend-filesystems.service. May 17 00:36:17.704054 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:36:17.704262 systemd-logind[1427]: New seat seat0. May 17 00:36:17.770234 env[1437]: time="2025-05-17T00:36:17.770176869Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 17 00:36:17.770558 env[1437]: time="2025-05-17T00:36:17.770534017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:17.774073 env[1437]: time="2025-05-17T00:36:17.774036781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:36:17.774548 env[1437]: time="2025-05-17T00:36:17.774526146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:17.776448 env[1437]: time="2025-05-17T00:36:17.775067718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:36:17.776565 env[1437]: time="2025-05-17T00:36:17.776547814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:36:17.777236 env[1437]: time="2025-05-17T00:36:17.777210702Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:36:17.777496 env[1437]: time="2025-05-17T00:36:17.777472637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:36:17.779184 env[1437]: time="2025-05-17T00:36:17.779156260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:17.781373 dbus-daemon[1414]: [system] SELinux support is enabled May 17 00:36:17.781585 systemd[1]: Started dbus.service. 
May 17 00:36:17.786395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:36:17.791222 env[1437]: time="2025-05-17T00:36:17.787667990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:36:17.791222 env[1437]: time="2025-05-17T00:36:17.787846313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:36:17.791222 env[1437]: time="2025-05-17T00:36:17.787864516Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:36:17.791222 env[1437]: time="2025-05-17T00:36:17.787916423Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:36:17.791222 env[1437]: time="2025-05-17T00:36:17.787930624Z" level=info msg="metadata content store policy set" policy=shared May 17 00:36:17.786432 systemd[1]: Reached target system-config.target. May 17 00:36:17.788731 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:36:17.788754 systemd[1]: Reached target user-config.target. May 17 00:36:17.793755 systemd[1]: Started systemd-logind.service. May 17 00:36:17.793962 dbus-daemon[1414]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:36:17.812075 env[1437]: time="2025-05-17T00:36:17.812037622Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:36:17.812183 env[1437]: time="2025-05-17T00:36:17.812110832Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 17 00:36:17.812183 env[1437]: time="2025-05-17T00:36:17.812130335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:36:17.812262 env[1437]: time="2025-05-17T00:36:17.812191543Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812262 env[1437]: time="2025-05-17T00:36:17.812211946Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812335 env[1437]: time="2025-05-17T00:36:17.812289156Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812335 env[1437]: time="2025-05-17T00:36:17.812313059Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812431 env[1437]: time="2025-05-17T00:36:17.812334562Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812431 env[1437]: time="2025-05-17T00:36:17.812376067Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812431 env[1437]: time="2025-05-17T00:36:17.812396070Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812431 env[1437]: time="2025-05-17T00:36:17.812413972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:36:17.812560 env[1437]: time="2025-05-17T00:36:17.812445376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:36:17.812626 env[1437]: time="2025-05-17T00:36:17.812606598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 17 00:36:17.812763 env[1437]: time="2025-05-17T00:36:17.812732615Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:36:17.813210 env[1437]: time="2025-05-17T00:36:17.813187575Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:36:17.813266 env[1437]: time="2025-05-17T00:36:17.813228180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813266 env[1437]: time="2025-05-17T00:36:17.813261185Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:36:17.813422 env[1437]: time="2025-05-17T00:36:17.813343296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813469 env[1437]: time="2025-05-17T00:36:17.813441909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813469 env[1437]: time="2025-05-17T00:36:17.813462811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813543 env[1437]: time="2025-05-17T00:36:17.813479214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813543 env[1437]: time="2025-05-17T00:36:17.813522919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813615 env[1437]: time="2025-05-17T00:36:17.813541622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813615 env[1437]: time="2025-05-17T00:36:17.813558724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 17 00:36:17.813615 env[1437]: time="2025-05-17T00:36:17.813592629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:36:17.813725 env[1437]: time="2025-05-17T00:36:17.813613931Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:36:17.813776 env[1437]: time="2025-05-17T00:36:17.813755350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:36:17.815380 env[1437]: time="2025-05-17T00:36:17.815337760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:36:17.815454 env[1437]: time="2025-05-17T00:36:17.815390167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:36:17.815454 env[1437]: time="2025-05-17T00:36:17.815408069Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:36:17.815454 env[1437]: time="2025-05-17T00:36:17.815428072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:36:17.815454 env[1437]: time="2025-05-17T00:36:17.815443174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:36:17.815606 env[1437]: time="2025-05-17T00:36:17.815466977Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:36:17.815606 env[1437]: time="2025-05-17T00:36:17.815510083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:36:17.815865 env[1437]: time="2025-05-17T00:36:17.815798221Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.815883933Z" level=info msg="Connect containerd service" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.815927738Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.816629231Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.817231911Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.817283918Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.817341026Z" level=info msg="containerd successfully booted in 0.194415s" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.818678903Z" level=info msg="Start subscribing containerd event" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.818734411Z" level=info msg="Start recovering state" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.818793819Z" level=info msg="Start event monitor" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.818805520Z" level=info msg="Start snapshots syncer" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.818814421Z" level=info msg="Start cni network conf syncer for default" May 17 00:36:17.853678 env[1437]: time="2025-05-17T00:36:17.818821922Z" level=info msg="Start streaming server" May 17 00:36:17.817452 systemd[1]: Started containerd.service. 
May 17 00:36:17.858396 bash[1472]: Updated "/home/core/.ssh/authorized_keys" May 17 00:36:17.859379 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:36:17.943652 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:36:18.591562 update_engine[1429]: I0517 00:36:18.591190 1429 main.cc:92] Flatcar Update Engine starting May 17 00:36:18.669023 systemd[1]: Started update-engine.service. May 17 00:36:18.674012 systemd[1]: Started locksmithd.service. May 17 00:36:18.678561 update_engine[1429]: I0517 00:36:18.677243 1429 update_check_scheduler.cc:74] Next update check in 3m27s May 17 00:36:18.737078 tar[1433]: linux-amd64/README.md May 17 00:36:18.744222 systemd[1]: Finished prepare-helm.service. May 17 00:36:19.076456 systemd[1]: Started kubelet.service. May 17 00:36:19.705066 kubelet[1521]: E0517 00:36:19.705015 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:36:19.707169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:36:19.707320 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:36:19.707618 systemd[1]: kubelet.service: Consumed 1.184s CPU time. May 17 00:36:19.866552 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:36:19.886830 systemd[1]: Finished sshd-keygen.service. May 17 00:36:19.890941 systemd[1]: Starting issuegen.service... May 17 00:36:19.895984 systemd[1]: Started waagent.service. May 17 00:36:19.901082 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:36:19.901221 systemd[1]: Finished issuegen.service. May 17 00:36:19.904337 systemd[1]: Starting systemd-user-sessions.service... 
May 17 00:36:19.941385 systemd[1]: Finished systemd-user-sessions.service. May 17 00:36:19.945516 systemd[1]: Started getty@tty1.service. May 17 00:36:19.948890 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:36:19.951082 systemd[1]: Reached target getty.target. May 17 00:36:19.952964 systemd[1]: Reached target multi-user.target. May 17 00:36:19.956248 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:36:19.965959 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:36:19.966097 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:36:19.968330 systemd[1]: Startup finished in 1.059s (firmware) + 34.613s (loader) + 976ms (kernel) + 14.552s (initrd) + 28.464s (userspace) = 1min 19.666s. May 17 00:36:20.344267 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:36:20.546290 login[1544]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:36:20.547810 login[1545]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:36:20.604031 systemd[1]: Created slice user-500.slice. May 17 00:36:20.605623 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:36:20.609398 systemd-logind[1427]: New session 1 of user core. May 17 00:36:20.615292 systemd-logind[1427]: New session 2 of user core. May 17 00:36:20.619108 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:36:20.620828 systemd[1]: Starting user@500.service... May 17 00:36:20.639753 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:20.868207 systemd[1548]: Queued start job for default target default.target. May 17 00:36:20.868842 systemd[1548]: Reached target paths.target. May 17 00:36:20.868875 systemd[1548]: Reached target sockets.target. May 17 00:36:20.868891 systemd[1548]: Reached target timers.target. 
May 17 00:36:20.868906 systemd[1548]: Reached target basic.target. May 17 00:36:20.869032 systemd[1]: Started user@500.service. May 17 00:36:20.870297 systemd[1]: Started session-1.scope. May 17 00:36:20.871135 systemd[1]: Started session-2.scope. May 17 00:36:20.872266 systemd[1548]: Reached target default.target. May 17 00:36:20.872483 systemd[1548]: Startup finished in 226ms. May 17 00:36:26.978063 waagent[1539]: 2025-05-17T00:36:26.977943Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:36:26.998780 waagent[1539]: 2025-05-17T00:36:26.998691Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:36:27.000905 waagent[1539]: 2025-05-17T00:36:27.000838Z INFO Daemon Daemon Python: 3.9.16 May 17 00:36:27.003303 waagent[1539]: 2025-05-17T00:36:27.003230Z INFO Daemon Daemon Run daemon May 17 00:36:27.005886 waagent[1539]: 2025-05-17T00:36:27.005812Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:36:27.017975 waagent[1539]: 2025-05-17T00:36:27.017855Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
May 17 00:36:27.024249 waagent[1539]: 2025-05-17T00:36:27.024143Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.025323Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.025991Z INFO Daemon Daemon Using waagent for provisioning May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.027251Z INFO Daemon Daemon Activate resource disk May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.027896Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.035619Z INFO Daemon Daemon Found device: None May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.036534Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.037245Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.038814Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.039524Z INFO Daemon Daemon Running default provisioning handler May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.048791Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.052387Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.053377Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:36:27.062109 waagent[1539]: 2025-05-17T00:36:27.054062Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:36:27.135337 waagent[1539]: 2025-05-17T00:36:27.135168Z INFO Daemon Daemon Successfully mounted dvd May 17 00:36:27.221330 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:36:27.242163 waagent[1539]: 2025-05-17T00:36:27.241977Z INFO Daemon Daemon Detect protocol endpoint May 17 00:36:27.257267 waagent[1539]: 2025-05-17T00:36:27.243211Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:36:27.257267 waagent[1539]: 2025-05-17T00:36:27.244009Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 17 00:36:27.257267 waagent[1539]: 2025-05-17T00:36:27.244710Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:36:27.257267 waagent[1539]: 2025-05-17T00:36:27.246107Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:36:27.257267 waagent[1539]: 2025-05-17T00:36:27.246763Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:36:27.341116 waagent[1539]: 2025-05-17T00:36:27.341041Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:36:27.348131 waagent[1539]: 2025-05-17T00:36:27.342892Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:36:27.348131 waagent[1539]: 2025-05-17T00:36:27.343515Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:36:27.835431 waagent[1539]: 2025-05-17T00:36:27.835254Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:36:27.846358 waagent[1539]: 2025-05-17T00:36:27.846266Z INFO Daemon Daemon Forcing an update of the goal state.. 
May 17 00:36:27.849102 waagent[1539]: 2025-05-17T00:36:27.849032Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:36:27.922148 waagent[1539]: 2025-05-17T00:36:27.922021Z INFO Daemon Daemon Found private key matching thumbprint 4B3F247CC34DAF5D28A116294C4308EFC3A7DA78 May 17 00:36:27.927640 waagent[1539]: 2025-05-17T00:36:27.925626Z INFO Daemon Daemon Fetch goal state completed May 17 00:36:27.965451 waagent[1539]: 2025-05-17T00:36:27.965325Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 3887f3b6-44a2-47e8-a13b-995acca0cd69 New eTag: 15041307437186136133] May 17 00:36:27.969675 waagent[1539]: 2025-05-17T00:36:27.969593Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:36:27.985229 waagent[1539]: 2025-05-17T00:36:27.985076Z INFO Daemon Daemon Starting provisioning May 17 00:36:27.993417 waagent[1539]: 2025-05-17T00:36:27.988097Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:36:27.993417 waagent[1539]: 2025-05-17T00:36:27.989776Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-ec5807f93e] May 17 00:36:28.009488 waagent[1539]: 2025-05-17T00:36:28.009347Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-ec5807f93e] May 17 00:36:28.016960 waagent[1539]: 2025-05-17T00:36:28.011197Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:36:28.016960 waagent[1539]: 2025-05-17T00:36:28.012388Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:36:28.026486 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:36:28.026732 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:36:28.026814 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:36:28.027168 systemd[1]: Stopping systemd-networkd.service... May 17 00:36:28.030399 systemd-networkd[1198]: eth0: DHCPv6 lease lost May 17 00:36:28.031692 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 17 00:36:28.031849 systemd[1]: Stopped systemd-networkd.service. May 17 00:36:28.034144 systemd[1]: Starting systemd-networkd.service... May 17 00:36:28.065332 systemd-networkd[1590]: enP23362s1: Link UP May 17 00:36:28.065343 systemd-networkd[1590]: enP23362s1: Gained carrier May 17 00:36:28.066735 systemd-networkd[1590]: eth0: Link UP May 17 00:36:28.066744 systemd-networkd[1590]: eth0: Gained carrier May 17 00:36:28.067178 systemd-networkd[1590]: lo: Link UP May 17 00:36:28.067186 systemd-networkd[1590]: lo: Gained carrier May 17 00:36:28.067524 systemd-networkd[1590]: eth0: Gained IPv6LL May 17 00:36:28.067799 systemd-networkd[1590]: Enumeration completed May 17 00:36:28.067905 systemd[1]: Started systemd-networkd.service. May 17 00:36:28.070152 waagent[1539]: 2025-05-17T00:36:28.069993Z INFO Daemon Daemon Create user account if not exists May 17 00:36:28.070693 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:36:28.071867 waagent[1539]: 2025-05-17T00:36:28.071792Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:36:28.074511 waagent[1539]: 2025-05-17T00:36:28.074448Z INFO Daemon Daemon Configure sudoer May 17 00:36:28.075905 systemd-networkd[1590]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:36:28.077941 waagent[1539]: 2025-05-17T00:36:28.077846Z INFO Daemon Daemon Configure sshd May 17 00:36:28.079066 waagent[1539]: 2025-05-17T00:36:28.079009Z INFO Daemon Daemon Deploy ssh public key. May 17 00:36:28.113438 systemd-networkd[1590]: eth0: DHCPv4 address 10.200.4.16/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:36:28.116666 systemd[1]: Finished systemd-networkd-wait-online.service. 
May 17 00:36:29.198117 waagent[1539]: 2025-05-17T00:36:29.198003Z INFO Daemon Daemon Provisioning complete May 17 00:36:29.212008 waagent[1539]: 2025-05-17T00:36:29.211925Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:36:29.215380 waagent[1539]: 2025-05-17T00:36:29.215296Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 17 00:36:29.220785 waagent[1539]: 2025-05-17T00:36:29.220706Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:36:29.489246 waagent[1596]: 2025-05-17T00:36:29.489072Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:36:29.489985 waagent[1596]: 2025-05-17T00:36:29.489916Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:36:29.490134 waagent[1596]: 2025-05-17T00:36:29.490073Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:36:29.500996 waagent[1596]: 2025-05-17T00:36:29.500919Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
May 17 00:36:29.501160 waagent[1596]: 2025-05-17T00:36:29.501107Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:36:29.550954 waagent[1596]: 2025-05-17T00:36:29.550825Z INFO ExtHandler ExtHandler Found private key matching thumbprint 4B3F247CC34DAF5D28A116294C4308EFC3A7DA78 May 17 00:36:29.551263 waagent[1596]: 2025-05-17T00:36:29.551200Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:36:29.563960 waagent[1596]: 2025-05-17T00:36:29.563892Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 1fb9c2ca-02e3-46a0-9bfa-b12ca53e14ca New eTag: 15041307437186136133] May 17 00:36:29.564534 waagent[1596]: 2025-05-17T00:36:29.564474Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:36:29.633708 waagent[1596]: 2025-05-17T00:36:29.633542Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:36:29.658588 waagent[1596]: 2025-05-17T00:36:29.658485Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1596 May 17 00:36:29.662006 waagent[1596]: 2025-05-17T00:36:29.661932Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:36:29.663189 waagent[1596]: 2025-05-17T00:36:29.663129Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:36:29.777936 waagent[1596]: 2025-05-17T00:36:29.777814Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:36:29.778299 waagent[1596]: 2025-05-17T00:36:29.778233Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:36:29.787202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 17 00:36:29.787526 systemd[1]: Stopped kubelet.service.
May 17 00:36:29.787577 systemd[1]: kubelet.service: Consumed 1.184s CPU time.
May 17 00:36:29.789292 systemd[1]: Starting kubelet.service...
May 17 00:36:29.799939 waagent[1596]: 2025-05-17T00:36:29.795477Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:36:29.799939 waagent[1596]: 2025-05-17T00:36:29.796388Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:36:29.799939 waagent[1596]: 2025-05-17T00:36:29.799163Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
May 17 00:36:29.801830 waagent[1596]: 2025-05-17T00:36:29.801674Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:36:29.802257 waagent[1596]: 2025-05-17T00:36:29.802188Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:36:29.802469 waagent[1596]: 2025-05-17T00:36:29.802402Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:36:29.803190 waagent[1596]: 2025-05-17T00:36:29.803112Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:36:29.803591 waagent[1596]: 2025-05-17T00:36:29.803512Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:36:29.803591 waagent[1596]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:36:29.803591 waagent[1596]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:36:29.803591 waagent[1596]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:36:29.803591 waagent[1596]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:36:29.803591 waagent[1596]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:36:29.803591 waagent[1596]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:36:29.806988 waagent[1596]: 2025-05-17T00:36:29.806759Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:36:29.808007 waagent[1596]: 2025-05-17T00:36:29.807941Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:36:29.808678 waagent[1596]: 2025-05-17T00:36:29.808570Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:36:29.809436 waagent[1596]: 2025-05-17T00:36:29.809342Z INFO EnvHandler ExtHandler Configure routes
May 17 00:36:29.809607 waagent[1596]: 2025-05-17T00:36:29.809551Z INFO EnvHandler ExtHandler Gateway:None
May 17 00:36:29.809750 waagent[1596]: 2025-05-17T00:36:29.809702Z INFO EnvHandler ExtHandler Routes:None
May 17 00:36:29.811112 waagent[1596]: 2025-05-17T00:36:29.811052Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:36:29.811937 waagent[1596]: 2025-05-17T00:36:29.810943Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 17 00:36:29.812574 waagent[1596]: 2025-05-17T00:36:29.812499Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 17 00:36:29.817169 waagent[1596]: 2025-05-17T00:36:29.812380Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 17 00:36:29.817454 waagent[1596]: 2025-05-17T00:36:29.817382Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 17 00:36:29.832385 waagent[1596]: 2025-05-17T00:36:29.832295Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
May 17 00:36:29.833231 waagent[1596]: 2025-05-17T00:36:29.833159Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:36:29.834418 waagent[1596]: 2025-05-17T00:36:29.834324Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
May 17 00:36:29.856374 waagent[1596]: 2025-05-17T00:36:29.856281Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
May 17 00:36:29.888026 waagent[1596]: 2025-05-17T00:36:29.887917Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1590'
May 17 00:36:30.631007 systemd[1]: Started kubelet.service.
May 17 00:36:30.642075 waagent[1596]: 2025-05-17T00:36:30.641952Z INFO MonitorHandler ExtHandler Network interfaces:
May 17 00:36:30.642075 waagent[1596]: Executing ['ip', '-a', '-o', 'link']:
May 17 00:36:30.642075 waagent[1596]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 17 00:36:30.642075 waagent[1596]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:ba:dd brd ff:ff:ff:ff:ff:ff
May 17 00:36:30.642075 waagent[1596]: 3: enP23362s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:ba:dd brd ff:ff:ff:ff:ff:ff\ altname enP23362p0s2
May 17 00:36:30.642075 waagent[1596]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 17 00:36:30.642075 waagent[1596]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 17 00:36:30.642075 waagent[1596]: 2: eth0 inet 10.200.4.16/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
May 17 00:36:30.642075 waagent[1596]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 17 00:36:30.642075 waagent[1596]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
May 17 00:36:30.642075 waagent[1596]: 2: eth0 inet6 fe80::7e1e:52ff:fe04:badd/64 scope link \ valid_lft forever preferred_lft forever
May 17 00:36:30.697266 waagent[1596]: 2025-05-17T00:36:30.696329Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting
May 17 00:36:30.703915 kubelet[1625]: E0517 00:36:30.703876 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:36:30.707657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:36:30.707805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:36:31.225479 waagent[1539]: 2025-05-17T00:36:31.225287Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
May 17 00:36:31.230479 waagent[1539]: 2025-05-17T00:36:31.230416Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent
May 17 00:36:32.304457 waagent[1634]: 2025-05-17T00:36:32.304330Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1)
May 17 00:36:32.305743 waagent[1634]: 2025-05-17T00:36:32.305672Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
May 17 00:36:32.305891 waagent[1634]: 2025-05-17T00:36:32.305834Z INFO ExtHandler ExtHandler Python: 3.9.16
May 17 00:36:32.306040 waagent[1634]: 2025-05-17T00:36:32.305992Z INFO ExtHandler ExtHandler CPU Arch: x86_64
May 17 00:36:32.321451 waagent[1634]: 2025-05-17T00:36:32.321334Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:36:32.321855 waagent[1634]: 2025-05-17T00:36:32.321796Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:36:32.322015 waagent[1634]: 2025-05-17T00:36:32.321967Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:36:32.322235 waagent[1634]: 2025-05-17T00:36:32.322186Z INFO ExtHandler ExtHandler Initializing the goal state...
May 17 00:36:32.333919 waagent[1634]: 2025-05-17T00:36:32.333845Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 17 00:36:32.345870 waagent[1634]: 2025-05-17T00:36:32.345772Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166
May 17 00:36:32.347191 waagent[1634]: 2025-05-17T00:36:32.347112Z INFO ExtHandler
May 17 00:36:32.347417 waagent[1634]: 2025-05-17T00:36:32.347331Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bf4e5bc3-f3a0-46b6-9dfd-bfea197ade87 eTag: 15041307437186136133 source: Fabric]
May 17 00:36:32.348731 waagent[1634]: 2025-05-17T00:36:32.348641Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 17 00:36:32.350227 waagent[1634]: 2025-05-17T00:36:32.350149Z INFO ExtHandler
May 17 00:36:32.350421 waagent[1634]: 2025-05-17T00:36:32.350334Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 17 00:36:32.356899 waagent[1634]: 2025-05-17T00:36:32.356847Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 17 00:36:32.357324 waagent[1634]: 2025-05-17T00:36:32.357276Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:36:32.374064 waagent[1634]: 2025-05-17T00:36:32.373998Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
May 17 00:36:32.429117 waagent[1634]: 2025-05-17T00:36:32.428987Z INFO ExtHandler Downloaded certificate {'thumbprint': '4B3F247CC34DAF5D28A116294C4308EFC3A7DA78', 'hasPrivateKey': True}
May 17 00:36:32.430304 waagent[1634]: 2025-05-17T00:36:32.430238Z INFO ExtHandler Fetch goal state from WireServer completed
May 17 00:36:32.431133 waagent[1634]: 2025-05-17T00:36:32.431074Z INFO ExtHandler ExtHandler Goal state initialization completed.
May 17 00:36:32.450269 waagent[1634]: 2025-05-17T00:36:32.450170Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
May 17 00:36:32.458291 waagent[1634]: 2025-05-17T00:36:32.458198Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
May 17 00:36:32.461752 waagent[1634]: 2025-05-17T00:36:32.461656Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
May 17 00:36:32.461949 waagent[1634]: 2025-05-17T00:36:32.461897Z INFO ExtHandler ExtHandler Checking state of the firewall
May 17 00:36:32.568145 waagent[1634]: 2025-05-17T00:36:32.567964Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
May 17 00:36:32.568145 waagent[1634]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 17 00:36:32.568145 waagent[1634]: pkts bytes target prot opt in out source destination
May 17 00:36:32.568145 waagent[1634]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 17 00:36:32.568145 waagent[1634]: pkts bytes target prot opt in out source destination
May 17 00:36:32.568145 waagent[1634]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 17 00:36:32.568145 waagent[1634]: pkts bytes target prot opt in out source destination
May 17 00:36:32.568145 waagent[1634]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 17 00:36:32.568145 waagent[1634]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 17 00:36:32.568145 waagent[1634]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 17 00:36:32.569214 waagent[1634]: 2025-05-17T00:36:32.569142Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
May 17 00:36:32.571878 waagent[1634]: 2025-05-17T00:36:32.571777Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
May 17 00:36:32.572120 waagent[1634]: 2025-05-17T00:36:32.572067Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 17 00:36:32.572514 waagent[1634]: 2025-05-17T00:36:32.572457Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 17 00:36:32.580462 waagent[1634]: 2025-05-17T00:36:32.580407Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:36:32.580919 waagent[1634]: 2025-05-17T00:36:32.580860Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:36:32.588172 waagent[1634]: 2025-05-17T00:36:32.588103Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1634
May 17 00:36:32.591048 waagent[1634]: 2025-05-17T00:36:32.590986Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
May 17 00:36:32.591795 waagent[1634]: 2025-05-17T00:36:32.591734Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
May 17 00:36:32.592609 waagent[1634]: 2025-05-17T00:36:32.592552Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 17 00:36:32.595043 waagent[1634]: 2025-05-17T00:36:32.594980Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 17 00:36:32.596264 waagent[1634]: 2025-05-17T00:36:32.596207Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:36:32.596855 waagent[1634]: 2025-05-17T00:36:32.596800Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:36:32.597013 waagent[1634]: 2025-05-17T00:36:32.596964Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:36:32.597534 waagent[1634]: 2025-05-17T00:36:32.597478Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:36:32.597816 waagent[1634]: 2025-05-17T00:36:32.597761Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:36:32.597816 waagent[1634]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:36:32.597816 waagent[1634]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:36:32.597816 waagent[1634]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:36:32.597816 waagent[1634]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:36:32.597816 waagent[1634]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:36:32.597816 waagent[1634]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:36:32.600951 waagent[1634]: 2025-05-17T00:36:32.600851Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:36:32.601594 waagent[1634]: 2025-05-17T00:36:32.601529Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:36:32.602087 waagent[1634]: 2025-05-17T00:36:32.602021Z INFO EnvHandler ExtHandler Configure routes
May 17 00:36:32.602087 waagent[1634]: 2025-05-17T00:36:32.601266Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:36:32.602541 waagent[1634]: 2025-05-17T00:36:32.602486Z INFO EnvHandler ExtHandler Gateway:None
May 17 00:36:32.602688 waagent[1634]: 2025-05-17T00:36:32.602641Z INFO EnvHandler ExtHandler Routes:None
May 17 00:36:32.603579 waagent[1634]: 2025-05-17T00:36:32.603519Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 17 00:36:32.604380 waagent[1634]: 2025-05-17T00:36:32.604327Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:36:32.607163 waagent[1634]: 2025-05-17T00:36:32.607092Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 17 00:36:32.607513 waagent[1634]: 2025-05-17T00:36:32.606920Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 17 00:36:32.611856 waagent[1634]: 2025-05-17T00:36:32.611802Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 17 00:36:32.617903 waagent[1634]: 2025-05-17T00:36:32.617832Z INFO MonitorHandler ExtHandler Network interfaces:
May 17 00:36:32.617903 waagent[1634]: Executing ['ip', '-a', '-o', 'link']:
May 17 00:36:32.617903 waagent[1634]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 17 00:36:32.617903 waagent[1634]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:ba:dd brd ff:ff:ff:ff:ff:ff
May 17 00:36:32.617903 waagent[1634]: 3: enP23362s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:ba:dd brd ff:ff:ff:ff:ff:ff\ altname enP23362p0s2
May 17 00:36:32.617903 waagent[1634]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 17 00:36:32.617903 waagent[1634]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 17 00:36:32.617903 waagent[1634]: 2: eth0 inet 10.200.4.16/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
May 17 00:36:32.617903 waagent[1634]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 17 00:36:32.617903 waagent[1634]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
May 17 00:36:32.617903 waagent[1634]: 2: eth0 inet6 fe80::7e1e:52ff:fe04:badd/64 scope link \ valid_lft forever preferred_lft forever
May 17 00:36:32.630882 waagent[1634]: 2025-05-17T00:36:32.630805Z INFO ExtHandler ExtHandler Downloading agent manifest
May 17 00:36:32.646567 waagent[1634]: 2025-05-17T00:36:32.646506Z INFO ExtHandler ExtHandler
May 17 00:36:32.646735 waagent[1634]: 2025-05-17T00:36:32.646682Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 87249286-2b88-4a61-99d4-35238c2ad181 correlation 7dd3d72e-f1ec-41de-903a-8afee6575c1c created: 2025-05-17T00:34:49.461800Z]
May 17 00:36:32.649817 waagent[1634]: 2025-05-17T00:36:32.649761Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 17 00:36:32.653910 waagent[1634]: 2025-05-17T00:36:32.653855Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 7 ms]
May 17 00:36:32.674228 waagent[1634]: 2025-05-17T00:36:32.674161Z INFO ExtHandler ExtHandler Looking for existing remote access users.
May 17 00:36:32.676551 waagent[1634]: 2025-05-17T00:36:32.676489Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 04766642-3845-4E25-8B36-496F611EAEC7;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
May 17 00:36:32.744643 waagent[1634]: 2025-05-17T00:36:32.744561Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
May 17 00:36:32.759402 waagent[1634]: 2025-05-17T00:36:32.759313Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
May 17 00:36:37.152585 systemd[1]: Created slice system-sshd.slice.
May 17 00:36:37.154893 systemd[1]: Started sshd@0-10.200.4.16:22-10.200.16.10:41050.service.
May 17 00:36:37.916601 sshd[1676]: Accepted publickey for core from 10.200.16.10 port 41050 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:36:37.918077 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:36:37.922021 systemd-logind[1427]: New session 3 of user core.
May 17 00:36:37.923621 systemd[1]: Started session-3.scope.
May 17 00:36:38.442333 systemd[1]: Started sshd@1-10.200.4.16:22-10.200.16.10:41052.service.
May 17 00:36:39.030091 sshd[1681]: Accepted publickey for core from 10.200.16.10 port 41052 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:36:39.031788 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:36:39.037723 systemd[1]: Started session-4.scope.
May 17 00:36:39.038431 systemd-logind[1427]: New session 4 of user core.
May 17 00:36:39.451224 sshd[1681]: pam_unix(sshd:session): session closed for user core
May 17 00:36:39.454608 systemd[1]: sshd@1-10.200.4.16:22-10.200.16.10:41052.service: Deactivated successfully.
May 17 00:36:39.455653 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:36:39.456424 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit.
May 17 00:36:39.457319 systemd-logind[1427]: Removed session 4.
May 17 00:36:39.548847 systemd[1]: Started sshd@2-10.200.4.16:22-10.200.16.10:34940.service.
May 17 00:36:40.132761 sshd[1687]: Accepted publickey for core from 10.200.16.10 port 34940 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:36:40.134462 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:36:40.139281 systemd[1]: Started session-5.scope.
May 17 00:36:40.139893 systemd-logind[1427]: New session 5 of user core.
May 17 00:36:40.554885 sshd[1687]: pam_unix(sshd:session): session closed for user core
May 17 00:36:40.558549 systemd[1]: sshd@2-10.200.4.16:22-10.200.16.10:34940.service: Deactivated successfully.
May 17 00:36:40.559592 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:36:40.560401 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit.
May 17 00:36:40.561290 systemd-logind[1427]: Removed session 5.
May 17 00:36:40.652332 systemd[1]: Started sshd@3-10.200.4.16:22-10.200.16.10:34952.service.
May 17 00:36:40.814626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:36:40.814861 systemd[1]: Stopped kubelet.service.
May 17 00:36:40.816631 systemd[1]: Starting kubelet.service...
May 17 00:36:40.912061 systemd[1]: Started kubelet.service.
May 17 00:36:41.238708 sshd[1693]: Accepted publickey for core from 10.200.16.10 port 34952 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:36:41.240293 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:36:41.245376 systemd[1]: Started session-6.scope.
May 17 00:36:41.245988 systemd-logind[1427]: New session 6 of user core.
May 17 00:36:41.635206 kubelet[1699]: E0517 00:36:41.635152 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:36:41.636793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:36:41.636967 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:36:41.666795 sshd[1693]: pam_unix(sshd:session): session closed for user core
May 17 00:36:41.669447 systemd[1]: sshd@3-10.200.4.16:22-10.200.16.10:34952.service: Deactivated successfully.
May 17 00:36:41.670239 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:36:41.670858 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit.
May 17 00:36:41.671632 systemd-logind[1427]: Removed session 6.
May 17 00:36:41.764866 systemd[1]: Started sshd@4-10.200.4.16:22-10.200.16.10:34968.service.
May 17 00:36:42.349938 sshd[1708]: Accepted publickey for core from 10.200.16.10 port 34968 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:36:42.351677 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:36:42.357442 systemd[1]: Started session-7.scope.
May 17 00:36:42.358004 systemd-logind[1427]: New session 7 of user core.
May 17 00:36:43.063522 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:36:43.063908 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:36:43.105156 systemd[1]: Starting docker.service...
May 17 00:36:43.142727 env[1721]: time="2025-05-17T00:36:43.142676501Z" level=info msg="Starting up"
May 17 00:36:43.143941 env[1721]: time="2025-05-17T00:36:43.143914332Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:36:43.143941 env[1721]: time="2025-05-17T00:36:43.143932332Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:36:43.143941 env[1721]: time="2025-05-17T00:36:43.143951033Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:36:43.144140 env[1721]: time="2025-05-17T00:36:43.143962633Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:36:43.145769 env[1721]: time="2025-05-17T00:36:43.145749477Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:36:43.145878 env[1721]: time="2025-05-17T00:36:43.145864880Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:36:43.145966 env[1721]: time="2025-05-17T00:36:43.145951382Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:36:43.146027 env[1721]: time="2025-05-17T00:36:43.146016984Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:36:43.152470 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3267665117-merged.mount: Deactivated successfully.
May 17 00:36:43.201835 env[1721]: time="2025-05-17T00:36:43.201338454Z" level=info msg="Loading containers: start."
May 17 00:36:43.341378 kernel: Initializing XFRM netlink socket
May 17 00:36:43.365592 env[1721]: time="2025-05-17T00:36:43.365551823Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:36:43.518041 systemd-networkd[1590]: docker0: Link UP
May 17 00:36:43.543838 env[1721]: time="2025-05-17T00:36:43.543800539Z" level=info msg="Loading containers: done."
May 17 00:36:43.555505 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3941652789-merged.mount: Deactivated successfully.
May 17 00:36:43.571264 env[1721]: time="2025-05-17T00:36:43.571220118Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:36:43.571454 env[1721]: time="2025-05-17T00:36:43.571431124Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:36:43.571565 env[1721]: time="2025-05-17T00:36:43.571542626Z" level=info msg="Daemon has completed initialization"
May 17 00:36:43.600062 systemd[1]: Started docker.service.
May 17 00:36:43.610613 env[1721]: time="2025-05-17T00:36:43.610451190Z" level=info msg="API listen on /run/docker.sock"
May 17 00:36:45.379884 env[1437]: time="2025-05-17T00:36:45.379828162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 17 00:36:46.172030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572270230.mount: Deactivated successfully.
May 17 00:36:47.966276 env[1437]: time="2025-05-17T00:36:47.966218401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:47.973437 env[1437]: time="2025-05-17T00:36:47.973391238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:47.979556 env[1437]: time="2025-05-17T00:36:47.979523455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:47.983689 env[1437]: time="2025-05-17T00:36:47.983659435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:47.984306 env[1437]: time="2025-05-17T00:36:47.984274746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 17 00:36:47.985077 env[1437]: time="2025-05-17T00:36:47.985051161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 17 00:36:49.805269 env[1437]: time="2025-05-17T00:36:49.805151647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:49.811516 env[1437]: time="2025-05-17T00:36:49.811475453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:49.815851 env[1437]: time="2025-05-17T00:36:49.815760225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:49.821779 env[1437]: time="2025-05-17T00:36:49.821715926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:49.822684 env[1437]: time="2025-05-17T00:36:49.822645141Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 17 00:36:49.823490 env[1437]: time="2025-05-17T00:36:49.823461655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 17 00:36:51.431503 env[1437]: time="2025-05-17T00:36:51.431450082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:51.435842 env[1437]: time="2025-05-17T00:36:51.435801546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:51.440940 env[1437]: time="2025-05-17T00:36:51.440858321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:51.444574 env[1437]: time="2025-05-17T00:36:51.444539175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:36:51.445220 env[1437]: time="2025-05-17T00:36:51.445186585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 17 00:36:51.445967 env[1437]: time="2025-05-17T00:36:51.445941396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 17 00:36:51.814285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 17 00:36:51.814568 systemd[1]: Stopped kubelet.service.
May 17 00:36:51.816329 systemd[1]: Starting kubelet.service...
May 17 00:36:51.915034 systemd[1]: Started kubelet.service.
May 17 00:36:52.604737 kubelet[1841]: E0517 00:36:52.604683 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:36:52.606171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:36:52.606288 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:36:53.700627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896924646.mount: Deactivated successfully.
May 17 00:36:54.345262 env[1437]: time="2025-05-17T00:36:54.345202162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:54.351943 env[1437]: time="2025-05-17T00:36:54.351891744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:54.354898 env[1437]: time="2025-05-17T00:36:54.354863480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:54.359637 env[1437]: time="2025-05-17T00:36:54.359597437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:54.360011 env[1437]: time="2025-05-17T00:36:54.359978942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:36:54.361312 env[1437]: time="2025-05-17T00:36:54.361285958Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:36:54.964474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159871156.mount: Deactivated successfully. May 17 00:36:55.748974 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB May 17 00:36:56.402346 env[1437]: time="2025-05-17T00:36:56.402294869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:56.409923 env[1437]: time="2025-05-17T00:36:56.409879450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:56.413014 env[1437]: time="2025-05-17T00:36:56.412980583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:56.416888 env[1437]: time="2025-05-17T00:36:56.416854524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:56.417621 env[1437]: time="2025-05-17T00:36:56.417587032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:36:56.418366 env[1437]: time="2025-05-17T00:36:56.418330140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:36:56.950287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237355894.mount: Deactivated successfully. 
May 17 00:36:57.012686 env[1437]: time="2025-05-17T00:36:57.012637901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:57.021063 env[1437]: time="2025-05-17T00:36:57.021023186Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:57.025509 env[1437]: time="2025-05-17T00:36:57.025478630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:57.030140 env[1437]: time="2025-05-17T00:36:57.030111777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:36:57.030669 env[1437]: time="2025-05-17T00:36:57.030640282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:36:57.031215 env[1437]: time="2025-05-17T00:36:57.031189588Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:36:57.713222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048915532.mount: Deactivated successfully. 
May 17 00:37:00.294409 env[1437]: time="2025-05-17T00:37:00.294339881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.301387 env[1437]: time="2025-05-17T00:37:00.301326639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.305611 env[1437]: time="2025-05-17T00:37:00.305571474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.310563 env[1437]: time="2025-05-17T00:37:00.310534315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:00.311304 env[1437]: time="2025-05-17T00:37:00.311272722Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:37:02.814324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:37:02.814607 systemd[1]: Stopped kubelet.service. May 17 00:37:02.819500 systemd[1]: Starting kubelet.service... May 17 00:37:02.951012 systemd[1]: Started kubelet.service. 
May 17 00:37:03.015178 kubelet[1869]: E0517 00:37:03.015122 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:37:03.016853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:37:03.017014 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:37:03.509155 systemd[1]: Stopped kubelet.service. May 17 00:37:03.511729 systemd[1]: Starting kubelet.service... May 17 00:37:03.548765 systemd[1]: Reloading. May 17 00:37:03.659976 /usr/lib/systemd/system-generators/torcx-generator[1904]: time="2025-05-17T00:37:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:37:03.660015 /usr/lib/systemd/system-generators/torcx-generator[1904]: time="2025-05-17T00:37:03Z" level=info msg="torcx already run" May 17 00:37:03.754532 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:37:03.754553 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:37:03.770869 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:37:03.871506 systemd[1]: Started kubelet.service. May 17 00:37:03.874698 systemd[1]: Stopping kubelet.service... 
May 17 00:37:03.875128 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:37:03.875347 systemd[1]: Stopped kubelet.service. May 17 00:37:03.877044 systemd[1]: Starting kubelet.service... May 17 00:37:03.988475 update_engine[1429]: I0517 00:37:03.988428 1429 update_attempter.cc:509] Updating boot flags... May 17 00:37:05.245164 systemd[1]: Started kubelet.service. May 17 00:37:05.290944 kubelet[2037]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:37:05.290944 kubelet[2037]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:37:05.290944 kubelet[2037]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:37:05.322082 kubelet[2037]: I0517 00:37:05.291009 2037 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:37:05.551270 kubelet[2037]: I0517 00:37:05.551142 2037 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:37:05.551554 kubelet[2037]: I0517 00:37:05.551538 2037 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:37:05.552278 kubelet[2037]: I0517 00:37:05.552252 2037 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:37:05.577801 kubelet[2037]: E0517 00:37:05.577758 2037 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:05.579006 kubelet[2037]: I0517 00:37:05.578975 2037 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:37:05.584718 kubelet[2037]: E0517 00:37:05.584680 2037 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:37:05.584718 kubelet[2037]: I0517 00:37:05.584704 2037 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:37:05.588094 kubelet[2037]: I0517 00:37:05.588072 2037 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:37:05.589829 kubelet[2037]: I0517 00:37:05.589785 2037 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:37:05.590007 kubelet[2037]: I0517 00:37:05.589826 2037 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-ec5807f93e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:37:05.590169 kubelet[2037]: I0517 00:37:05.590018 2037 topology_manager.go:138] "Creating topology manager 
with none policy" May 17 00:37:05.590169 kubelet[2037]: I0517 00:37:05.590033 2037 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:37:05.590169 kubelet[2037]: I0517 00:37:05.590157 2037 state_mem.go:36] "Initialized new in-memory state store" May 17 00:37:05.593246 kubelet[2037]: I0517 00:37:05.593226 2037 kubelet.go:446] "Attempting to sync node with API server" May 17 00:37:05.593336 kubelet[2037]: I0517 00:37:05.593259 2037 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:37:05.593336 kubelet[2037]: I0517 00:37:05.593283 2037 kubelet.go:352] "Adding apiserver pod source" May 17 00:37:05.593336 kubelet[2037]: I0517 00:37:05.593295 2037 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:37:05.601698 kubelet[2037]: W0517 00:37:05.601577 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ec5807f93e&limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:05.601698 kubelet[2037]: E0517 00:37:05.601666 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ec5807f93e&limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:05.601838 kubelet[2037]: W0517 00:37:05.601757 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:05.601838 kubelet[2037]: E0517 00:37:05.601801 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:05.601928 kubelet[2037]: I0517 00:37:05.601880 2037 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:37:05.602395 kubelet[2037]: I0517 00:37:05.602375 2037 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:37:05.602472 kubelet[2037]: W0517 00:37:05.602440 2037 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:37:05.608442 kubelet[2037]: I0517 00:37:05.608406 2037 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:37:05.608442 kubelet[2037]: I0517 00:37:05.608443 2037 server.go:1287] "Started kubelet" May 17 00:37:05.626373 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:37:05.626519 kubelet[2037]: I0517 00:37:05.626500 2037 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:37:05.629149 kubelet[2037]: I0517 00:37:05.628733 2037 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:37:05.629864 kubelet[2037]: I0517 00:37:05.629831 2037 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:37:05.631297 kubelet[2037]: I0517 00:37:05.631233 2037 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:37:05.631613 kubelet[2037]: I0517 00:37:05.631592 2037 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:37:05.634414 kubelet[2037]: I0517 00:37:05.634389 2037 server.go:479] "Adding debug handlers to kubelet server" May 17 00:37:05.635482 kubelet[2037]: E0517 00:37:05.631785 2037 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-ec5807f93e.1840297222e87084 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-ec5807f93e,UID:ci-3510.3.7-n-ec5807f93e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-ec5807f93e,},FirstTimestamp:2025-05-17 00:37:05.608421508 +0000 UTC m=+0.357622343,LastTimestamp:2025-05-17 00:37:05.608421508 +0000 UTC m=+0.357622343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-ec5807f93e,}" May 17 00:37:05.636448 kubelet[2037]: I0517 00:37:05.636147 2037 volume_manager.go:297] "Starting 
Kubelet Volume Manager" May 17 00:37:05.636448 kubelet[2037]: E0517 00:37:05.636394 2037 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" May 17 00:37:05.636588 kubelet[2037]: I0517 00:37:05.636515 2037 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:37:05.636588 kubelet[2037]: I0517 00:37:05.636564 2037 reconciler.go:26] "Reconciler: start to sync state" May 17 00:37:05.637046 kubelet[2037]: W0517 00:37:05.636924 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:05.637046 kubelet[2037]: E0517 00:37:05.636985 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:05.637180 kubelet[2037]: E0517 00:37:05.637073 2037 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ec5807f93e?timeout=10s\": dial tcp 10.200.4.16:6443: connect: connection refused" interval="200ms" May 17 00:37:05.637325 kubelet[2037]: I0517 00:37:05.637305 2037 factory.go:221] Registration of the systemd container factory successfully May 17 00:37:05.637444 kubelet[2037]: I0517 00:37:05.637420 2037 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:37:05.639174 kubelet[2037]: I0517 00:37:05.639154 2037 factory.go:221] Registration of the 
containerd container factory successfully May 17 00:37:05.659290 kubelet[2037]: E0517 00:37:05.659260 2037 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:37:05.683383 kubelet[2037]: I0517 00:37:05.683347 2037 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:37:05.683496 kubelet[2037]: I0517 00:37:05.683488 2037 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:37:05.683548 kubelet[2037]: I0517 00:37:05.683543 2037 state_mem.go:36] "Initialized new in-memory state store" May 17 00:37:05.688338 kubelet[2037]: I0517 00:37:05.688322 2037 policy_none.go:49] "None policy: Start" May 17 00:37:05.688444 kubelet[2037]: I0517 00:37:05.688388 2037 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:37:05.688444 kubelet[2037]: I0517 00:37:05.688408 2037 state_mem.go:35] "Initializing new in-memory state store" May 17 00:37:05.696379 systemd[1]: Created slice kubepods.slice. May 17 00:37:05.700848 kubelet[2037]: I0517 00:37:05.700726 2037 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:37:05.702789 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:37:05.705286 kubelet[2037]: I0517 00:37:05.705260 2037 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:37:05.705286 kubelet[2037]: I0517 00:37:05.705288 2037 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:37:05.705463 kubelet[2037]: I0517 00:37:05.705310 2037 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:37:05.705463 kubelet[2037]: I0517 00:37:05.705318 2037 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:37:05.705463 kubelet[2037]: E0517 00:37:05.705385 2037 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:37:05.708373 kubelet[2037]: W0517 00:37:05.708305 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:05.708978 kubelet[2037]: E0517 00:37:05.708951 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:05.709328 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:37:05.716973 kubelet[2037]: I0517 00:37:05.716942 2037 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:37:05.717094 kubelet[2037]: I0517 00:37:05.717075 2037 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:37:05.717154 kubelet[2037]: I0517 00:37:05.717098 2037 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:37:05.717782 kubelet[2037]: I0517 00:37:05.717770 2037 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:37:05.718947 kubelet[2037]: E0517 00:37:05.718925 2037 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:37:05.719031 kubelet[2037]: E0517 00:37:05.718965 2037 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-ec5807f93e\" not found" May 17 00:37:05.817071 systemd[1]: Created slice kubepods-burstable-pode54628016f0c5afbe38b7c68be68cef8.slice. May 17 00:37:05.819411 kubelet[2037]: I0517 00:37:05.819376 2037 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.819760 kubelet[2037]: E0517 00:37:05.819734 2037 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.16:6443/api/v1/nodes\": dial tcp 10.200.4.16:6443: connect: connection refused" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.825181 kubelet[2037]: E0517 00:37:05.824954 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.827506 systemd[1]: Created slice kubepods-burstable-pod231d0d299a1cf1bd8dbc8b9a3f6a4af6.slice. May 17 00:37:05.835531 kubelet[2037]: E0517 00:37:05.835319 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.837622 systemd[1]: Created slice kubepods-burstable-pod959ab538a776da28c3faefe7aac974ac.slice. 
May 17 00:37:05.838831 kubelet[2037]: I0517 00:37:05.838809 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.838930 kubelet[2037]: I0517 00:37:05.838837 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.838930 kubelet[2037]: I0517 00:37:05.838863 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.838930 kubelet[2037]: I0517 00:37:05.838885 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e54628016f0c5afbe38b7c68be68cef8-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" (UID: \"e54628016f0c5afbe38b7c68be68cef8\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.838930 kubelet[2037]: I0517 00:37:05.838912 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e54628016f0c5afbe38b7c68be68cef8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" (UID: \"e54628016f0c5afbe38b7c68be68cef8\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.839103 kubelet[2037]: I0517 00:37:05.838943 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.839103 kubelet[2037]: I0517 00:37:05.838968 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.839103 kubelet[2037]: I0517 00:37:05.838990 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/959ab538a776da28c3faefe7aac974ac-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-ec5807f93e\" (UID: \"959ab538a776da28c3faefe7aac974ac\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.839103 kubelet[2037]: I0517 00:37:05.839022 2037 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e54628016f0c5afbe38b7c68be68cef8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" (UID: \"e54628016f0c5afbe38b7c68be68cef8\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:05.839789 kubelet[2037]: E0517 00:37:05.839763 2037 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ec5807f93e?timeout=10s\": dial tcp 10.200.4.16:6443: connect: connection refused" interval="400ms" May 17 00:37:05.840852 kubelet[2037]: E0517 00:37:05.840831 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:06.022120 kubelet[2037]: I0517 00:37:06.022072 2037 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:06.022525 kubelet[2037]: E0517 00:37:06.022491 2037 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.16:6443/api/v1/nodes\": dial tcp 10.200.4.16:6443: connect: connection refused" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:06.126966 env[1437]: time="2025-05-17T00:37:06.126828573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-ec5807f93e,Uid:e54628016f0c5afbe38b7c68be68cef8,Namespace:kube-system,Attempt:0,}" May 17 00:37:06.136952 env[1437]: time="2025-05-17T00:37:06.136905130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-ec5807f93e,Uid:231d0d299a1cf1bd8dbc8b9a3f6a4af6,Namespace:kube-system,Attempt:0,}" May 17 00:37:06.142306 env[1437]: time="2025-05-17T00:37:06.142263960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-ec5807f93e,Uid:959ab538a776da28c3faefe7aac974ac,Namespace:kube-system,Attempt:0,}" May 17 00:37:06.240668 kubelet[2037]: E0517 00:37:06.240620 2037 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ec5807f93e?timeout=10s\": dial tcp 10.200.4.16:6443: connect: connection 
refused" interval="800ms" May 17 00:37:06.424515 kubelet[2037]: I0517 00:37:06.424183 2037 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:06.424975 kubelet[2037]: E0517 00:37:06.424769 2037 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.16:6443/api/v1/nodes\": dial tcp 10.200.4.16:6443: connect: connection refused" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:06.504826 kubelet[2037]: W0517 00:37:06.504770 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:06.504978 kubelet[2037]: E0517 00:37:06.504836 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:06.685020 kubelet[2037]: W0517 00:37:06.684905 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:06.685020 kubelet[2037]: E0517 00:37:06.684953 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:06.910724 kubelet[2037]: W0517 00:37:06.910672 2037 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:06.910724 kubelet[2037]: E0517 00:37:06.910730 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:06.920485 kubelet[2037]: W0517 00:37:06.920430 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ec5807f93e&limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:06.920607 kubelet[2037]: E0517 00:37:06.920497 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ec5807f93e&limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:07.041266 kubelet[2037]: E0517 00:37:07.041141 2037 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ec5807f93e?timeout=10s\": dial tcp 10.200.4.16:6443: connect: connection refused" interval="1.6s" May 17 00:37:07.227277 kubelet[2037]: I0517 00:37:07.227246 2037 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:07.227668 kubelet[2037]: E0517 00:37:07.227635 2037 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.4.16:6443/api/v1/nodes\": dial tcp 10.200.4.16:6443: connect: connection refused" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:07.666092 kubelet[2037]: E0517 00:37:07.666034 2037 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:08.642490 kubelet[2037]: E0517 00:37:08.642440 2037 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ec5807f93e?timeout=10s\": dial tcp 10.200.4.16:6443: connect: connection refused" interval="3.2s" May 17 00:37:08.715600 kubelet[2037]: W0517 00:37:08.715527 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ec5807f93e&limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:08.716029 kubelet[2037]: E0517 00:37:08.715606 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ec5807f93e&limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:08.830277 kubelet[2037]: I0517 00:37:08.830227 2037 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:08.830745 kubelet[2037]: E0517 00:37:08.830707 2037 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.16:6443/api/v1/nodes\": 
dial tcp 10.200.4.16:6443: connect: connection refused" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:09.030890 kubelet[2037]: W0517 00:37:09.030733 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:09.030890 kubelet[2037]: E0517 00:37:09.030813 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:09.398923 kubelet[2037]: W0517 00:37:09.398855 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:09.399119 kubelet[2037]: E0517 00:37:09.398936 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:09.442205 kubelet[2037]: W0517 00:37:09.442135 2037 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.16:6443: connect: connection refused May 17 00:37:09.442385 kubelet[2037]: E0517 00:37:09.442213 2037 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://10.200.4.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.16:6443: connect: connection refused" logger="UnhandledError" May 17 00:37:10.945144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995535664.mount: Deactivated successfully. May 17 00:37:10.979280 env[1437]: time="2025-05-17T00:37:10.979192572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:10.981445 env[1437]: time="2025-05-17T00:37:10.981399782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:10.993720 env[1437]: time="2025-05-17T00:37:10.993686035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:10.998231 env[1437]: time="2025-05-17T00:37:10.998194855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.008537 env[1437]: time="2025-05-17T00:37:11.008500197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.014937 env[1437]: time="2025-05-17T00:37:11.014890623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.018875 env[1437]: time="2025-05-17T00:37:11.018841539Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.025247 env[1437]: time="2025-05-17T00:37:11.025200765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.029257 env[1437]: time="2025-05-17T00:37:11.029225082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.038170 env[1437]: time="2025-05-17T00:37:11.038137718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.048173 env[1437]: time="2025-05-17T00:37:11.048141859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.067909 env[1437]: time="2025-05-17T00:37:11.067843239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:11.107546 env[1437]: time="2025-05-17T00:37:11.107462100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:11.107546 env[1437]: time="2025-05-17T00:37:11.107511700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:11.107546 env[1437]: time="2025-05-17T00:37:11.107526500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:11.108030 env[1437]: time="2025-05-17T00:37:11.107988902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/690e504cc3b6b48310beeac18d3d51ebdc836b8f69db80c27090e6147dc9e1b5 pid=2076 runtime=io.containerd.runc.v2 May 17 00:37:11.130020 systemd[1]: Started cri-containerd-690e504cc3b6b48310beeac18d3d51ebdc836b8f69db80c27090e6147dc9e1b5.scope. May 17 00:37:11.149310 env[1437]: time="2025-05-17T00:37:11.148939568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:11.149310 env[1437]: time="2025-05-17T00:37:11.148984369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:11.149310 env[1437]: time="2025-05-17T00:37:11.148999769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:11.149641 env[1437]: time="2025-05-17T00:37:11.149335470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3274bdc8e0b50f673b9367afc21a3f72f6b7840d6fa1785453631e85970ae357 pid=2104 runtime=io.containerd.runc.v2 May 17 00:37:11.170720 systemd[1]: Started cri-containerd-3274bdc8e0b50f673b9367afc21a3f72f6b7840d6fa1785453631e85970ae357.scope. May 17 00:37:11.174396 env[1437]: time="2025-05-17T00:37:11.174304572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:11.174652 env[1437]: time="2025-05-17T00:37:11.174602473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:11.174805 env[1437]: time="2025-05-17T00:37:11.174779874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:11.175202 env[1437]: time="2025-05-17T00:37:11.175157075Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54d0073dd5d590903c463a7d1316ba8d718d18a9f14e925560c4fef422534344 pid=2139 runtime=io.containerd.runc.v2 May 17 00:37:11.192583 systemd[1]: Started cri-containerd-54d0073dd5d590903c463a7d1316ba8d718d18a9f14e925560c4fef422534344.scope. May 17 00:37:11.223238 env[1437]: time="2025-05-17T00:37:11.223121570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-ec5807f93e,Uid:959ab538a776da28c3faefe7aac974ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"690e504cc3b6b48310beeac18d3d51ebdc836b8f69db80c27090e6147dc9e1b5\"" May 17 00:37:11.230726 env[1437]: time="2025-05-17T00:37:11.230683201Z" level=info msg="CreateContainer within sandbox \"690e504cc3b6b48310beeac18d3d51ebdc836b8f69db80c27090e6147dc9e1b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:37:11.271609 env[1437]: time="2025-05-17T00:37:11.271562167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-ec5807f93e,Uid:e54628016f0c5afbe38b7c68be68cef8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3274bdc8e0b50f673b9367afc21a3f72f6b7840d6fa1785453631e85970ae357\"" May 17 00:37:11.274317 env[1437]: time="2025-05-17T00:37:11.274275178Z" level=info msg="CreateContainer within sandbox \"3274bdc8e0b50f673b9367afc21a3f72f6b7840d6fa1785453631e85970ae357\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:37:11.282317 env[1437]: time="2025-05-17T00:37:11.281575008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-ec5807f93e,Uid:231d0d299a1cf1bd8dbc8b9a3f6a4af6,Namespace:kube-system,Attempt:0,} returns sandbox id \"54d0073dd5d590903c463a7d1316ba8d718d18a9f14e925560c4fef422534344\"" May 17 00:37:11.283227 env[1437]: time="2025-05-17T00:37:11.283182714Z" level=info msg="CreateContainer within sandbox \"690e504cc3b6b48310beeac18d3d51ebdc836b8f69db80c27090e6147dc9e1b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ff57b4c14f4f7875032127d89cd47335c342ce0ce519f52942ae6d1eaf1418a\"" May 17 00:37:11.284425 env[1437]: time="2025-05-17T00:37:11.284398319Z" level=info msg="StartContainer for \"2ff57b4c14f4f7875032127d89cd47335c342ce0ce519f52942ae6d1eaf1418a\"" May 17 00:37:11.285412 env[1437]: time="2025-05-17T00:37:11.285384423Z" level=info msg="CreateContainer within sandbox \"54d0073dd5d590903c463a7d1316ba8d718d18a9f14e925560c4fef422534344\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:37:11.302644 systemd[1]: Started cri-containerd-2ff57b4c14f4f7875032127d89cd47335c342ce0ce519f52942ae6d1eaf1418a.scope. 
May 17 00:37:11.329882 env[1437]: time="2025-05-17T00:37:11.329841304Z" level=info msg="CreateContainer within sandbox \"3274bdc8e0b50f673b9367afc21a3f72f6b7840d6fa1785453631e85970ae357\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"418e216bede93414b7cf517e4824e8182cd2f6cc9aace0de47ce4e938d5f8602\"" May 17 00:37:11.330521 env[1437]: time="2025-05-17T00:37:11.330493107Z" level=info msg="StartContainer for \"418e216bede93414b7cf517e4824e8182cd2f6cc9aace0de47ce4e938d5f8602\"" May 17 00:37:11.336704 env[1437]: time="2025-05-17T00:37:11.336669732Z" level=info msg="CreateContainer within sandbox \"54d0073dd5d590903c463a7d1316ba8d718d18a9f14e925560c4fef422534344\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4800eb07ad06e36d6a77bdcfb8dd190becc69e6e87dc6985cfb7202dcc1ff8d1\"" May 17 00:37:11.337326 env[1437]: time="2025-05-17T00:37:11.337302034Z" level=info msg="StartContainer for \"4800eb07ad06e36d6a77bdcfb8dd190becc69e6e87dc6985cfb7202dcc1ff8d1\"" May 17 00:37:11.360859 systemd[1]: Started cri-containerd-418e216bede93414b7cf517e4824e8182cd2f6cc9aace0de47ce4e938d5f8602.scope. May 17 00:37:11.384077 systemd[1]: Started cri-containerd-4800eb07ad06e36d6a77bdcfb8dd190becc69e6e87dc6985cfb7202dcc1ff8d1.scope. 
May 17 00:37:11.387991 env[1437]: time="2025-05-17T00:37:11.387832940Z" level=info msg="StartContainer for \"2ff57b4c14f4f7875032127d89cd47335c342ce0ce519f52942ae6d1eaf1418a\" returns successfully" May 17 00:37:11.461454 env[1437]: time="2025-05-17T00:37:11.461410939Z" level=info msg="StartContainer for \"418e216bede93414b7cf517e4824e8182cd2f6cc9aace0de47ce4e938d5f8602\" returns successfully" May 17 00:37:11.491298 env[1437]: time="2025-05-17T00:37:11.491180260Z" level=info msg="StartContainer for \"4800eb07ad06e36d6a77bdcfb8dd190becc69e6e87dc6985cfb7202dcc1ff8d1\" returns successfully" May 17 00:37:11.720666 kubelet[2037]: E0517 00:37:11.720637 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:11.723524 kubelet[2037]: E0517 00:37:11.723502 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:11.727200 kubelet[2037]: E0517 00:37:11.727183 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:11.945964 systemd[1]: run-containerd-runc-k8s.io-690e504cc3b6b48310beeac18d3d51ebdc836b8f69db80c27090e6147dc9e1b5-runc.bZgFll.mount: Deactivated successfully. 
May 17 00:37:12.032566 kubelet[2037]: I0517 00:37:12.032539 2037 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:12.728880 kubelet[2037]: E0517 00:37:12.728828 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:12.729313 kubelet[2037]: E0517 00:37:12.729268 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.730277 kubelet[2037]: E0517 00:37:13.730240 2037 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.736866 kubelet[2037]: E0517 00:37:13.736834 2037 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-ec5807f93e\" not found" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.820255 kubelet[2037]: I0517 00:37:13.820219 2037 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.836905 kubelet[2037]: I0517 00:37:13.836878 2037 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.855461 kubelet[2037]: E0517 00:37:13.855349 2037 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.7-n-ec5807f93e.1840297222e87084 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-ec5807f93e,UID:ci-3510.3.7-n-ec5807f93e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-ec5807f93e,},FirstTimestamp:2025-05-17 00:37:05.608421508 +0000 UTC m=+0.357622343,LastTimestamp:2025-05-17 00:37:05.608421508 +0000 UTC m=+0.357622343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-ec5807f93e,}" May 17 00:37:13.901186 kubelet[2037]: E0517 00:37:13.901148 2037 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.901497 kubelet[2037]: I0517 00:37:13.901478 2037 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.901704 kubelet[2037]: I0517 00:37:13.901468 2037 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.903525 kubelet[2037]: E0517 00:37:13.903490 2037 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-ec5807f93e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.903665 kubelet[2037]: I0517 00:37:13.903649 2037 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.903840 kubelet[2037]: E0517 00:37:13.903767 2037 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:13.906737 kubelet[2037]: E0517 00:37:13.906126 2037 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:14.605648 kubelet[2037]: I0517 00:37:14.605606 2037 apiserver.go:52] "Watching apiserver" May 17 00:37:14.637043 kubelet[2037]: I0517 00:37:14.637002 2037 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:37:15.755346 systemd[1]: Reloading. May 17 00:37:15.847888 /usr/lib/systemd/system-generators/torcx-generator[2335]: time="2025-05-17T00:37:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:37:15.847920 /usr/lib/systemd/system-generators/torcx-generator[2335]: time="2025-05-17T00:37:15Z" level=info msg="torcx already run" May 17 00:37:15.933219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:37:15.933239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:37:15.949899 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:37:16.090410 systemd[1]: Stopping kubelet.service... May 17 00:37:16.104779 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:37:16.105007 systemd[1]: Stopped kubelet.service. May 17 00:37:16.107326 systemd[1]: Starting kubelet.service... May 17 00:37:16.204956 systemd[1]: Started kubelet.service. 
May 17 00:37:16.879758 kubelet[2402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:37:16.879758 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:37:16.879758 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:37:16.879758 kubelet[2402]: I0517 00:37:16.845297 2402 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:37:16.879758 kubelet[2402]: I0517 00:37:16.851829 2402 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:37:16.879758 kubelet[2402]: I0517 00:37:16.851848 2402 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:37:16.879758 kubelet[2402]: I0517 00:37:16.852092 2402 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:37:16.915940 kubelet[2402]: I0517 00:37:16.915894 2402 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:37:16.918873 kubelet[2402]: I0517 00:37:16.918839 2402 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:37:16.922518 kubelet[2402]: E0517 00:37:16.922487 2402 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:37:16.922518 kubelet[2402]: I0517 00:37:16.922514 2402 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:37:16.927050 kubelet[2402]: I0517 00:37:16.927017 2402 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:37:16.927501 kubelet[2402]: I0517 00:37:16.927475 2402 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:37:16.928023 kubelet[2402]: I0517 00:37:16.927664 2402 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.7-n-ec5807f93e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:37:16.928221 kubelet[2402]: I0517 00:37:16.928208 2402 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:37:16.928316 kubelet[2402]: I0517 00:37:16.928307 2402 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:37:16.928460 kubelet[2402]: I0517 00:37:16.928443 2402 state_mem.go:36] "Initialized new in-memory state store" May 17 00:37:16.928715 kubelet[2402]: I0517 00:37:16.928704 2402 
kubelet.go:446] "Attempting to sync node with API server" May 17 00:37:16.928817 kubelet[2402]: I0517 00:37:16.928808 2402 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:37:16.928911 kubelet[2402]: I0517 00:37:16.928903 2402 kubelet.go:352] "Adding apiserver pod source" May 17 00:37:16.928989 kubelet[2402]: I0517 00:37:16.928981 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:37:16.934683 kubelet[2402]: I0517 00:37:16.931192 2402 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:37:16.934683 kubelet[2402]: I0517 00:37:16.931663 2402 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:37:16.934683 kubelet[2402]: I0517 00:37:16.932135 2402 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:37:16.934683 kubelet[2402]: I0517 00:37:16.932167 2402 server.go:1287] "Started kubelet" May 17 00:37:16.934683 kubelet[2402]: I0517 00:37:16.934194 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:37:16.946464 kubelet[2402]: E0517 00:37:16.946438 2402 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:37:16.947483 kubelet[2402]: I0517 00:37:16.947453 2402 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:37:16.948620 kubelet[2402]: I0517 00:37:16.948602 2402 server.go:479] "Adding debug handlers to kubelet server" May 17 00:37:16.949902 kubelet[2402]: I0517 00:37:16.949853 2402 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:37:16.950223 kubelet[2402]: I0517 00:37:16.950200 2402 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:37:16.950516 kubelet[2402]: I0517 00:37:16.950497 2402 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:37:16.951152 kubelet[2402]: I0517 00:37:16.951133 2402 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:37:16.954784 kubelet[2402]: I0517 00:37:16.954763 2402 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:37:16.954901 kubelet[2402]: I0517 00:37:16.954888 2402 reconciler.go:26] "Reconciler: start to sync state" May 17 00:37:16.962832 kubelet[2402]: I0517 00:37:16.962670 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:37:16.963937 kubelet[2402]: I0517 00:37:16.963919 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:37:16.964028 kubelet[2402]: I0517 00:37:16.963941 2402 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:37:16.964028 kubelet[2402]: I0517 00:37:16.963961 2402 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:37:16.964028 kubelet[2402]: I0517 00:37:16.963969 2402 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:37:16.964028 kubelet[2402]: E0517 00:37:16.964022 2402 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:37:16.964910 kubelet[2402]: I0517 00:37:16.964892 2402 factory.go:221] Registration of the containerd container factory successfully May 17 00:37:16.965023 kubelet[2402]: I0517 00:37:16.965011 2402 factory.go:221] Registration of the systemd container factory successfully May 17 00:37:16.965194 kubelet[2402]: I0517 00:37:16.965173 2402 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:37:16.984782 sudo[2430]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:37:16.985068 sudo[2430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:37:17.033746 kubelet[2402]: I0517 00:37:17.033712 2402 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:37:17.033746 kubelet[2402]: I0517 00:37:17.033739 2402 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:37:17.033962 kubelet[2402]: I0517 00:37:17.033760 2402 state_mem.go:36] "Initialized new in-memory state store" May 17 00:37:17.033962 kubelet[2402]: I0517 00:37:17.033921 2402 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:37:17.033962 kubelet[2402]: I0517 00:37:17.033945 2402 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:37:17.034095 kubelet[2402]: I0517 00:37:17.033970 2402 policy_none.go:49] "None policy: Start" May 17 00:37:17.034095 kubelet[2402]: I0517 00:37:17.033982 2402 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:37:17.034095 kubelet[2402]: 
I0517 00:37:17.033997 2402 state_mem.go:35] "Initializing new in-memory state store" May 17 00:37:17.034216 kubelet[2402]: I0517 00:37:17.034130 2402 state_mem.go:75] "Updated machine memory state" May 17 00:37:17.041592 kubelet[2402]: I0517 00:37:17.041567 2402 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:37:17.041753 kubelet[2402]: I0517 00:37:17.041738 2402 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:37:17.041827 kubelet[2402]: I0517 00:37:17.041757 2402 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:37:17.042589 kubelet[2402]: I0517 00:37:17.042567 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:37:17.050520 kubelet[2402]: E0517 00:37:17.050497 2402 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:37:17.067330 kubelet[2402]: I0517 00:37:17.067292 2402 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.069609 kubelet[2402]: I0517 00:37:17.067720 2402 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.070452 kubelet[2402]: I0517 00:37:17.067858 2402 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.079099 kubelet[2402]: W0517 00:37:17.079075 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:37:17.083274 kubelet[2402]: W0517 00:37:17.083256 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] May 17 00:37:17.083910 kubelet[2402]: W0517 00:37:17.083895 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:37:17.154013 kubelet[2402]: I0517 00:37:17.153910 2402 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.163484 kubelet[2402]: I0517 00:37:17.163460 2402 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.163706 kubelet[2402]: I0517 00:37:17.163685 2402 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256304 kubelet[2402]: I0517 00:37:17.256262 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256514 kubelet[2402]: I0517 00:37:17.256440 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/959ab538a776da28c3faefe7aac974ac-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-ec5807f93e\" (UID: \"959ab538a776da28c3faefe7aac974ac\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256514 kubelet[2402]: I0517 00:37:17.256477 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e54628016f0c5afbe38b7c68be68cef8-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" (UID: \"e54628016f0c5afbe38b7c68be68cef8\") " 
pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256621 kubelet[2402]: I0517 00:37:17.256531 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256621 kubelet[2402]: I0517 00:37:17.256562 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256621 kubelet[2402]: I0517 00:37:17.256611 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256767 kubelet[2402]: I0517 00:37:17.256635 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/231d0d299a1cf1bd8dbc8b9a3f6a4af6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-ec5807f93e\" (UID: \"231d0d299a1cf1bd8dbc8b9a3f6a4af6\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256767 kubelet[2402]: I0517 00:37:17.256684 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e54628016f0c5afbe38b7c68be68cef8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" (UID: \"e54628016f0c5afbe38b7c68be68cef8\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.256767 kubelet[2402]: I0517 00:37:17.256712 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e54628016f0c5afbe38b7c68be68cef8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" (UID: \"e54628016f0c5afbe38b7c68be68cef8\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:17.569827 sudo[2430]: pam_unix(sudo:session): session closed for user root May 17 00:37:17.940680 kubelet[2402]: I0517 00:37:17.940564 2402 apiserver.go:52] "Watching apiserver" May 17 00:37:17.955283 kubelet[2402]: I0517 00:37:17.955243 2402 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:37:18.016912 kubelet[2402]: I0517 00:37:18.016874 2402 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:18.039943 kubelet[2402]: W0517 00:37:18.039913 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:37:18.040205 kubelet[2402]: E0517 00:37:18.040184 2402 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-ec5807f93e\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" May 17 00:37:18.062296 kubelet[2402]: I0517 00:37:18.062229 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ec5807f93e" podStartSLOduration=1.06221055 podStartE2EDuration="1.06221055s" podCreationTimestamp="2025-05-17 00:37:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:18.060730917 +0000 UTC m=+1.848383424" watchObservedRunningTime="2025-05-17 00:37:18.06221055 +0000 UTC m=+1.849863157" May 17 00:37:18.087063 kubelet[2402]: I0517 00:37:18.086992 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ec5807f93e" podStartSLOduration=1.086973223 podStartE2EDuration="1.086973223s" podCreationTimestamp="2025-05-17 00:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:18.086314253 +0000 UTC m=+1.873966860" watchObservedRunningTime="2025-05-17 00:37:18.086973223 +0000 UTC m=+1.874625730" May 17 00:37:18.087309 kubelet[2402]: I0517 00:37:18.087148 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-ec5807f93e" podStartSLOduration=1.0871364159999999 podStartE2EDuration="1.087136416s" podCreationTimestamp="2025-05-17 00:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:18.074722681 +0000 UTC m=+1.862375288" watchObservedRunningTime="2025-05-17 00:37:18.087136416 +0000 UTC m=+1.874788923" May 17 00:37:19.186550 sudo[1711]: pam_unix(sudo:session): session closed for user root May 17 00:37:19.293991 sshd[1708]: pam_unix(sshd:session): session closed for user core May 17 00:37:19.297264 systemd[1]: sshd@4-10.200.4.16:22-10.200.16.10:34968.service: Deactivated successfully. May 17 00:37:19.298266 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:37:19.298457 systemd[1]: session-7.scope: Consumed 4.482s CPU time. May 17 00:37:19.299080 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit. 
May 17 00:37:19.300067 systemd-logind[1427]: Removed session 7. May 17 00:37:20.673521 kubelet[2402]: I0517 00:37:20.673486 2402 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:37:20.673976 env[1437]: time="2025-05-17T00:37:20.673854705Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:37:20.674299 kubelet[2402]: I0517 00:37:20.674071 2402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:37:21.631773 systemd[1]: Created slice kubepods-burstable-pod43ba3dcf_0a86_47c1_b9cc_1f43dea36111.slice. May 17 00:37:21.641495 systemd[1]: Created slice kubepods-besteffort-pod3589f946_d377_4a3e_b923_981b1cb1ce17.slice. May 17 00:37:21.643774 kubelet[2402]: W0517 00:37:21.643743 2402 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-n-ec5807f93e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object May 17 00:37:21.644010 kubelet[2402]: E0517 00:37:21.643974 2402 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" logger="UnhandledError" May 17 00:37:21.686194 kubelet[2402]: I0517 00:37:21.686156 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-net\") pod \"cilium-6czmx\" (UID: 
\"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.686968 kubelet[2402]: I0517 00:37:21.686949 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hubble-tls\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687058 kubelet[2402]: I0517 00:37:21.687047 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3589f946-d377-4a3e-b923-981b1cb1ce17-kube-proxy\") pod \"kube-proxy-nbqmn\" (UID: \"3589f946-d377-4a3e-b923-981b1cb1ce17\") " pod="kube-system/kube-proxy-nbqmn" May 17 00:37:21.687133 kubelet[2402]: I0517 00:37:21.687122 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3589f946-d377-4a3e-b923-981b1cb1ce17-lib-modules\") pod \"kube-proxy-nbqmn\" (UID: \"3589f946-d377-4a3e-b923-981b1cb1ce17\") " pod="kube-system/kube-proxy-nbqmn" May 17 00:37:21.687207 kubelet[2402]: I0517 00:37:21.687195 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-lib-modules\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687290 kubelet[2402]: I0517 00:37:21.687279 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-run\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687380 kubelet[2402]: I0517 00:37:21.687366 2402 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-bpf-maps\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687474 kubelet[2402]: I0517 00:37:21.687462 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-clustermesh-secrets\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687553 kubelet[2402]: I0517 00:37:21.687539 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2qc\" (UniqueName: \"kubernetes.io/projected/3589f946-d377-4a3e-b923-981b1cb1ce17-kube-api-access-bp2qc\") pod \"kube-proxy-nbqmn\" (UID: \"3589f946-d377-4a3e-b923-981b1cb1ce17\") " pod="kube-system/kube-proxy-nbqmn" May 17 00:37:21.687629 kubelet[2402]: I0517 00:37:21.687618 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-xtables-lock\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687824 kubelet[2402]: I0517 00:37:21.687810 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-cgroup\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687900 kubelet[2402]: I0517 00:37:21.687888 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-config-path\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.687974 kubelet[2402]: I0517 00:37:21.687963 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7r4\" (UniqueName: \"kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-kube-api-access-td7r4\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.688051 kubelet[2402]: I0517 00:37:21.688040 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3589f946-d377-4a3e-b923-981b1cb1ce17-xtables-lock\") pod \"kube-proxy-nbqmn\" (UID: \"3589f946-d377-4a3e-b923-981b1cb1ce17\") " pod="kube-system/kube-proxy-nbqmn" May 17 00:37:21.688123 kubelet[2402]: I0517 00:37:21.688113 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hostproc\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.688237 kubelet[2402]: I0517 00:37:21.688227 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-etc-cni-netd\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.688325 kubelet[2402]: I0517 00:37:21.688315 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cni-path\") pod 
\"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.688454 kubelet[2402]: I0517 00:37:21.688431 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-kernel\") pod \"cilium-6czmx\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " pod="kube-system/cilium-6czmx" May 17 00:37:21.739667 systemd[1]: Created slice kubepods-besteffort-podfda90ecb_4af4_4f12_be29_d46469590d8a.slice. May 17 00:37:21.789622 kubelet[2402]: I0517 00:37:21.789541 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fda90ecb-4af4-4f12-be29-d46469590d8a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2x4b9\" (UID: \"fda90ecb-4af4-4f12-be29-d46469590d8a\") " pod="kube-system/cilium-operator-6c4d7847fc-2x4b9" May 17 00:37:21.790519 kubelet[2402]: I0517 00:37:21.790343 2402 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:37:21.790760 kubelet[2402]: I0517 00:37:21.790719 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x552r\" (UniqueName: \"kubernetes.io/projected/fda90ecb-4af4-4f12-be29-d46469590d8a-kube-api-access-x552r\") pod \"cilium-operator-6c4d7847fc-2x4b9\" (UID: \"fda90ecb-4af4-4f12-be29-d46469590d8a\") " pod="kube-system/cilium-operator-6c4d7847fc-2x4b9" May 17 00:37:21.938376 env[1437]: time="2025-05-17T00:37:21.937857994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6czmx,Uid:43ba3dcf-0a86-47c1-b9cc-1f43dea36111,Namespace:kube-system,Attempt:0,}" May 17 00:37:21.967600 env[1437]: time="2025-05-17T00:37:21.967382157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:21.967600 env[1437]: time="2025-05-17T00:37:21.967431655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:21.967600 env[1437]: time="2025-05-17T00:37:21.967447655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:21.967952 env[1437]: time="2025-05-17T00:37:21.967905235Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24 pid=2484 runtime=io.containerd.runc.v2 May 17 00:37:21.980173 systemd[1]: Started cri-containerd-2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24.scope. 
May 17 00:37:22.004071 env[1437]: time="2025-05-17T00:37:22.004029526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6czmx,Uid:43ba3dcf-0a86-47c1-b9cc-1f43dea36111,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\"" May 17 00:37:22.006245 env[1437]: time="2025-05-17T00:37:22.006127840Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:37:22.044077 env[1437]: time="2025-05-17T00:37:22.044033896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2x4b9,Uid:fda90ecb-4af4-4f12-be29-d46469590d8a,Namespace:kube-system,Attempt:0,}" May 17 00:37:22.089454 env[1437]: time="2025-05-17T00:37:22.089384149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:22.089454 env[1437]: time="2025-05-17T00:37:22.089422047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:22.089668 env[1437]: time="2025-05-17T00:37:22.089435347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:22.089968 env[1437]: time="2025-05-17T00:37:22.089871329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad pid=2524 runtime=io.containerd.runc.v2 May 17 00:37:22.104408 systemd[1]: Started cri-containerd-82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad.scope. 
May 17 00:37:22.146369 env[1437]: time="2025-05-17T00:37:22.145516762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2x4b9,Uid:fda90ecb-4af4-4f12-be29-d46469590d8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\"" May 17 00:37:22.850996 env[1437]: time="2025-05-17T00:37:22.850926226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbqmn,Uid:3589f946-d377-4a3e-b923-981b1cb1ce17,Namespace:kube-system,Attempt:0,}" May 17 00:37:22.895827 env[1437]: time="2025-05-17T00:37:22.895760500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:22.895827 env[1437]: time="2025-05-17T00:37:22.895796598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:22.896029 env[1437]: time="2025-05-17T00:37:22.895810198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:22.896270 env[1437]: time="2025-05-17T00:37:22.896213381Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64073d9b5e232bde76e98551c60d401ebe082cc71b9c4e9f8880ad94c8956063 pid=2564 runtime=io.containerd.runc.v2 May 17 00:37:22.925125 systemd[1]: run-containerd-runc-k8s.io-64073d9b5e232bde76e98551c60d401ebe082cc71b9c4e9f8880ad94c8956063-runc.RUHscj.mount: Deactivated successfully. May 17 00:37:22.930970 systemd[1]: Started cri-containerd-64073d9b5e232bde76e98551c60d401ebe082cc71b9c4e9f8880ad94c8956063.scope. 
May 17 00:37:22.952293 env[1437]: time="2025-05-17T00:37:22.952242599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbqmn,Uid:3589f946-d377-4a3e-b923-981b1cb1ce17,Namespace:kube-system,Attempt:0,} returns sandbox id \"64073d9b5e232bde76e98551c60d401ebe082cc71b9c4e9f8880ad94c8956063\"" May 17 00:37:22.955669 env[1437]: time="2025-05-17T00:37:22.955632961Z" level=info msg="CreateContainer within sandbox \"64073d9b5e232bde76e98551c60d401ebe082cc71b9c4e9f8880ad94c8956063\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:37:23.003479 env[1437]: time="2025-05-17T00:37:23.003437516Z" level=info msg="CreateContainer within sandbox \"64073d9b5e232bde76e98551c60d401ebe082cc71b9c4e9f8880ad94c8956063\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29d52f470cd067704add4bb324ab1f9276ea8fb9933ebc4ea8428088f070bee7\"" May 17 00:37:23.005857 env[1437]: time="2025-05-17T00:37:23.004063691Z" level=info msg="StartContainer for \"29d52f470cd067704add4bb324ab1f9276ea8fb9933ebc4ea8428088f070bee7\"" May 17 00:37:23.022051 systemd[1]: Started cri-containerd-29d52f470cd067704add4bb324ab1f9276ea8fb9933ebc4ea8428088f070bee7.scope. 
May 17 00:37:23.064276 env[1437]: time="2025-05-17T00:37:23.064239706Z" level=info msg="StartContainer for \"29d52f470cd067704add4bb324ab1f9276ea8fb9933ebc4ea8428088f070bee7\" returns successfully" May 17 00:37:24.095720 kubelet[2402]: I0517 00:37:24.095414 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbqmn" podStartSLOduration=3.095391737 podStartE2EDuration="3.095391737s" podCreationTimestamp="2025-05-17 00:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:24.094928555 +0000 UTC m=+7.882581162" watchObservedRunningTime="2025-05-17 00:37:24.095391737 +0000 UTC m=+7.883044244" May 17 00:37:27.884461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41237430.mount: Deactivated successfully. May 17 00:37:30.640734 env[1437]: time="2025-05-17T00:37:30.640682038Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.645926 env[1437]: time="2025-05-17T00:37:30.645815070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.650068 env[1437]: time="2025-05-17T00:37:30.650026331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:30.650820 env[1437]: time="2025-05-17T00:37:30.650778707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:37:30.652999 env[1437]: time="2025-05-17T00:37:30.652971935Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:37:30.654455 env[1437]: time="2025-05-17T00:37:30.654423887Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:37:30.684080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142513081.mount: Deactivated successfully. May 17 00:37:30.697996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63786766.mount: Deactivated successfully. May 17 00:37:30.704599 env[1437]: time="2025-05-17T00:37:30.704528442Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\"" May 17 00:37:30.706913 env[1437]: time="2025-05-17T00:37:30.705254318Z" level=info msg="StartContainer for \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\"" May 17 00:37:30.726056 systemd[1]: Started cri-containerd-5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c.scope. May 17 00:37:30.767759 env[1437]: time="2025-05-17T00:37:30.767704568Z" level=info msg="StartContainer for \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\" returns successfully" May 17 00:37:30.774588 systemd[1]: cri-containerd-5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c.scope: Deactivated successfully. May 17 00:37:31.681043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c-rootfs.mount: Deactivated successfully. 
May 17 00:37:34.724456 env[1437]: time="2025-05-17T00:37:34.724402037Z" level=info msg="shim disconnected" id=5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c May 17 00:37:34.724456 env[1437]: time="2025-05-17T00:37:34.724448236Z" level=warning msg="cleaning up after shim disconnected" id=5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c namespace=k8s.io May 17 00:37:34.724456 env[1437]: time="2025-05-17T00:37:34.724462336Z" level=info msg="cleaning up dead shim" May 17 00:37:34.733734 env[1437]: time="2025-05-17T00:37:34.733688163Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2815 runtime=io.containerd.runc.v2\n" May 17 00:37:35.111836 env[1437]: time="2025-05-17T00:37:35.111596270Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:37:35.151391 env[1437]: time="2025-05-17T00:37:35.151325225Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\"" May 17 00:37:35.153366 env[1437]: time="2025-05-17T00:37:35.152020705Z" level=info msg="StartContainer for \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\"" May 17 00:37:35.180327 systemd[1]: Started cri-containerd-74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733.scope. May 17 00:37:35.234320 env[1437]: time="2025-05-17T00:37:35.232524885Z" level=info msg="StartContainer for \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\" returns successfully" May 17 00:37:35.241014 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:37:35.241313 systemd[1]: Stopped systemd-sysctl.service. 
May 17 00:37:35.241523 systemd[1]: Stopping systemd-sysctl.service... May 17 00:37:35.244924 systemd[1]: Starting systemd-sysctl.service... May 17 00:37:35.245300 systemd[1]: cri-containerd-74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733.scope: Deactivated successfully. May 17 00:37:35.260451 systemd[1]: Finished systemd-sysctl.service. May 17 00:37:35.342011 env[1437]: time="2025-05-17T00:37:35.341958631Z" level=info msg="shim disconnected" id=74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733 May 17 00:37:35.342280 env[1437]: time="2025-05-17T00:37:35.342261322Z" level=warning msg="cleaning up after shim disconnected" id=74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733 namespace=k8s.io May 17 00:37:35.342369 env[1437]: time="2025-05-17T00:37:35.342340320Z" level=info msg="cleaning up dead shim" May 17 00:37:35.350708 env[1437]: time="2025-05-17T00:37:35.350653480Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2880 runtime=io.containerd.runc.v2\n" May 17 00:37:36.118347 env[1437]: time="2025-05-17T00:37:36.118191044Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:37:36.137621 systemd[1]: run-containerd-runc-k8s.io-74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733-runc.6m5SFZ.mount: Deactivated successfully. May 17 00:37:36.137756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733-rootfs.mount: Deactivated successfully. May 17 00:37:36.174544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524949100.mount: Deactivated successfully. 
May 17 00:37:36.200083 env[1437]: time="2025-05-17T00:37:36.200030645Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\"" May 17 00:37:36.201921 env[1437]: time="2025-05-17T00:37:36.200763225Z" level=info msg="StartContainer for \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\"" May 17 00:37:36.212022 env[1437]: time="2025-05-17T00:37:36.211533522Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:36.217176 env[1437]: time="2025-05-17T00:37:36.217135165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:36.220298 env[1437]: time="2025-05-17T00:37:36.220261677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:37:36.220634 env[1437]: time="2025-05-17T00:37:36.220596168Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:37:36.224005 env[1437]: time="2025-05-17T00:37:36.223672681Z" level=info msg="CreateContainer within sandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 
00:37:36.231417 systemd[1]: Started cri-containerd-a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a.scope. May 17 00:37:36.265037 env[1437]: time="2025-05-17T00:37:36.264984421Z" level=info msg="CreateContainer within sandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\"" May 17 00:37:36.269190 env[1437]: time="2025-05-17T00:37:36.268070734Z" level=info msg="StartContainer for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\"" May 17 00:37:36.268303 systemd[1]: cri-containerd-a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a.scope: Deactivated successfully. May 17 00:37:36.269594 env[1437]: time="2025-05-17T00:37:36.269564692Z" level=info msg="StartContainer for \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\" returns successfully" May 17 00:37:36.293234 systemd[1]: Started cri-containerd-bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242.scope. 
May 17 00:37:36.821801 env[1437]: time="2025-05-17T00:37:36.821736280Z" level=info msg="StartContainer for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" returns successfully" May 17 00:37:36.823439 env[1437]: time="2025-05-17T00:37:36.823392834Z" level=info msg="shim disconnected" id=a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a May 17 00:37:36.823614 env[1437]: time="2025-05-17T00:37:36.823592928Z" level=warning msg="cleaning up after shim disconnected" id=a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a namespace=k8s.io May 17 00:37:36.823707 env[1437]: time="2025-05-17T00:37:36.823691025Z" level=info msg="cleaning up dead shim" May 17 00:37:36.838521 env[1437]: time="2025-05-17T00:37:36.838474910Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2981 runtime=io.containerd.runc.v2\n" May 17 00:37:37.127936 env[1437]: time="2025-05-17T00:37:37.127818871Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:37:37.138718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a-rootfs.mount: Deactivated successfully. 
May 17 00:37:37.174790 env[1437]: time="2025-05-17T00:37:37.174737986Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\"" May 17 00:37:37.175425 env[1437]: time="2025-05-17T00:37:37.175389868Z" level=info msg="StartContainer for \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\"" May 17 00:37:37.223210 systemd[1]: Started cri-containerd-4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b.scope. May 17 00:37:37.337572 env[1437]: time="2025-05-17T00:37:37.337516528Z" level=info msg="StartContainer for \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\" returns successfully" May 17 00:37:37.341970 systemd[1]: cri-containerd-4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b.scope: Deactivated successfully. May 17 00:37:37.375250 env[1437]: time="2025-05-17T00:37:37.375194397Z" level=info msg="shim disconnected" id=4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b May 17 00:37:37.375250 env[1437]: time="2025-05-17T00:37:37.375250695Z" level=warning msg="cleaning up after shim disconnected" id=4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b namespace=k8s.io May 17 00:37:37.375559 env[1437]: time="2025-05-17T00:37:37.375261495Z" level=info msg="cleaning up dead shim" May 17 00:37:37.399786 env[1437]: time="2025-05-17T00:37:37.399661027Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3037 runtime=io.containerd.runc.v2\n" May 17 00:37:38.138238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b-rootfs.mount: Deactivated successfully. 
May 17 00:37:38.150533 env[1437]: time="2025-05-17T00:37:38.148777812Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:37:38.168936 kubelet[2402]: I0517 00:37:38.168811 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2x4b9" podStartSLOduration=3.09328237 podStartE2EDuration="17.168792378s" podCreationTimestamp="2025-05-17 00:37:21 +0000 UTC" firstStartedPulling="2025-05-17 00:37:22.146701214 +0000 UTC m=+5.934353721" lastFinishedPulling="2025-05-17 00:37:36.222211222 +0000 UTC m=+20.009863729" observedRunningTime="2025-05-17 00:37:37.247835684 +0000 UTC m=+21.035488291" watchObservedRunningTime="2025-05-17 00:37:38.168792378 +0000 UTC m=+21.956444985" May 17 00:37:38.178393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount803257740.mount: Deactivated successfully. May 17 00:37:38.195126 env[1437]: time="2025-05-17T00:37:38.195052477Z" level=info msg="CreateContainer within sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\"" May 17 00:37:38.196858 env[1437]: time="2025-05-17T00:37:38.195858855Z" level=info msg="StartContainer for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\"" May 17 00:37:38.233185 systemd[1]: Started cri-containerd-830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9.scope. 
May 17 00:37:38.272676 env[1437]: time="2025-05-17T00:37:38.272628905Z" level=info msg="StartContainer for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" returns successfully" May 17 00:37:38.410318 kubelet[2402]: I0517 00:37:38.410212 2402 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:37:38.459655 systemd[1]: Created slice kubepods-burstable-podcf11f481_326c_4fbf_b692_c7117209f827.slice. May 17 00:37:38.468148 systemd[1]: Created slice kubepods-burstable-pod4aa58f58_57eb_48bf_91db_c92e6f376b49.slice. May 17 00:37:38.474706 kubelet[2402]: I0517 00:37:38.474654 2402 status_manager.go:890] "Failed to get status for pod" podUID="cf11f481-326c-4fbf-b692-c7117209f827" pod="kube-system/coredns-668d6bf9bc-zjw5h" err="pods \"coredns-668d6bf9bc-zjw5h\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" May 17 00:37:38.474918 kubelet[2402]: W0517 00:37:38.474896 2402 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.7-n-ec5807f93e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object May 17 00:37:38.475018 kubelet[2402]: E0517 00:37:38.474936 2402 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" logger="UnhandledError" May 17 00:37:38.519590 kubelet[2402]: I0517 00:37:38.519518 2402 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf11f481-326c-4fbf-b692-c7117209f827-config-volume\") pod \"coredns-668d6bf9bc-zjw5h\" (UID: \"cf11f481-326c-4fbf-b692-c7117209f827\") " pod="kube-system/coredns-668d6bf9bc-zjw5h" May 17 00:37:38.519785 kubelet[2402]: I0517 00:37:38.519607 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq5m7\" (UniqueName: \"kubernetes.io/projected/cf11f481-326c-4fbf-b692-c7117209f827-kube-api-access-kq5m7\") pod \"coredns-668d6bf9bc-zjw5h\" (UID: \"cf11f481-326c-4fbf-b692-c7117209f827\") " pod="kube-system/coredns-668d6bf9bc-zjw5h" May 17 00:37:38.519785 kubelet[2402]: I0517 00:37:38.519656 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4aa58f58-57eb-48bf-91db-c92e6f376b49-config-volume\") pod \"coredns-668d6bf9bc-qdbnz\" (UID: \"4aa58f58-57eb-48bf-91db-c92e6f376b49\") " pod="kube-system/coredns-668d6bf9bc-qdbnz" May 17 00:37:38.519785 kubelet[2402]: I0517 00:37:38.519676 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht9lg\" (UniqueName: \"kubernetes.io/projected/4aa58f58-57eb-48bf-91db-c92e6f376b49-kube-api-access-ht9lg\") pod \"coredns-668d6bf9bc-qdbnz\" (UID: \"4aa58f58-57eb-48bf-91db-c92e6f376b49\") " pod="kube-system/coredns-668d6bf9bc-qdbnz" May 17 00:37:39.166163 kubelet[2402]: I0517 00:37:39.166097 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6czmx" podStartSLOduration=9.519761649 podStartE2EDuration="18.166075558s" podCreationTimestamp="2025-05-17 00:37:21 +0000 UTC" firstStartedPulling="2025-05-17 00:37:22.005736256 +0000 UTC m=+5.793388763" lastFinishedPulling="2025-05-17 00:37:30.652050165 +0000 UTC m=+14.439702672" 
observedRunningTime="2025-05-17 00:37:39.16559887 +0000 UTC m=+22.953251377" watchObservedRunningTime="2025-05-17 00:37:39.166075558 +0000 UTC m=+22.953728165" May 17 00:37:39.620316 kubelet[2402]: E0517 00:37:39.620262 2402 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 17 00:37:39.620868 kubelet[2402]: E0517 00:37:39.620413 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cf11f481-326c-4fbf-b692-c7117209f827-config-volume podName:cf11f481-326c-4fbf-b692-c7117209f827 nodeName:}" failed. No retries permitted until 2025-05-17 00:37:40.120384728 +0000 UTC m=+23.908037235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cf11f481-326c-4fbf-b692-c7117209f827-config-volume") pod "coredns-668d6bf9bc-zjw5h" (UID: "cf11f481-326c-4fbf-b692-c7117209f827") : failed to sync configmap cache: timed out waiting for the condition May 17 00:37:39.620868 kubelet[2402]: E0517 00:37:39.620740 2402 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 17 00:37:39.620868 kubelet[2402]: E0517 00:37:39.620794 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4aa58f58-57eb-48bf-91db-c92e6f376b49-config-volume podName:4aa58f58-57eb-48bf-91db-c92e6f376b49 nodeName:}" failed. No retries permitted until 2025-05-17 00:37:40.120777917 +0000 UTC m=+23.908430424 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4aa58f58-57eb-48bf-91db-c92e6f376b49-config-volume") pod "coredns-668d6bf9bc-qdbnz" (UID: "4aa58f58-57eb-48bf-91db-c92e6f376b49") : failed to sync configmap cache: timed out waiting for the condition May 17 00:37:40.276206 env[1437]: time="2025-05-17T00:37:40.276131529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qdbnz,Uid:4aa58f58-57eb-48bf-91db-c92e6f376b49,Namespace:kube-system,Attempt:0,}" May 17 00:37:40.277002 env[1437]: time="2025-05-17T00:37:40.276962708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjw5h,Uid:cf11f481-326c-4fbf-b692-c7117209f827,Namespace:kube-system,Attempt:0,}" May 17 00:37:40.600683 systemd-networkd[1590]: cilium_host: Link UP May 17 00:37:40.609714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 17 00:37:40.609863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:37:40.602898 systemd-networkd[1590]: cilium_net: Link UP May 17 00:37:40.606092 systemd-networkd[1590]: cilium_net: Gained carrier May 17 00:37:40.612685 systemd-networkd[1590]: cilium_host: Gained carrier May 17 00:37:40.854910 systemd-networkd[1590]: cilium_vxlan: Link UP May 17 00:37:40.854922 systemd-networkd[1590]: cilium_vxlan: Gained carrier May 17 00:37:41.072533 systemd-networkd[1590]: cilium_net: Gained IPv6LL May 17 00:37:41.110384 kernel: NET: Registered PF_ALG protocol family May 17 00:37:41.520648 systemd-networkd[1590]: cilium_host: Gained IPv6LL May 17 00:37:41.946933 systemd-networkd[1590]: lxc_health: Link UP May 17 00:37:41.978866 systemd-networkd[1590]: lxc_health: Gained carrier May 17 00:37:41.979476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:37:42.352594 systemd-networkd[1590]: cilium_vxlan: Gained IPv6LL May 17 00:37:42.360874 systemd-networkd[1590]: lxcdec7b240ee2a: Link UP May 17 00:37:42.367384 
kernel: eth0: renamed from tmp2025c May 17 00:37:42.375519 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdec7b240ee2a: link becomes ready May 17 00:37:42.375306 systemd-networkd[1590]: lxcdec7b240ee2a: Gained carrier May 17 00:37:42.381503 systemd-networkd[1590]: lxc6af8f9dc7f06: Link UP May 17 00:37:42.388446 kernel: eth0: renamed from tmp62ada May 17 00:37:42.402134 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6af8f9dc7f06: link becomes ready May 17 00:37:42.398657 systemd-networkd[1590]: lxc6af8f9dc7f06: Gained carrier May 17 00:37:43.440608 systemd-networkd[1590]: lxc_health: Gained IPv6LL May 17 00:37:43.569547 systemd-networkd[1590]: lxc6af8f9dc7f06: Gained IPv6LL May 17 00:37:43.760645 systemd-networkd[1590]: lxcdec7b240ee2a: Gained IPv6LL May 17 00:37:46.059639 env[1437]: time="2025-05-17T00:37:46.059566092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:46.060133 env[1437]: time="2025-05-17T00:37:46.060100280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:46.060263 env[1437]: time="2025-05-17T00:37:46.060240177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:46.072783 env[1437]: time="2025-05-17T00:37:46.062508527Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62adaa31e738810fb2ff9a5a5bbcacbff7fe2ac767e31cf9fbc37c7b652c01b0 pid=3580 runtime=io.containerd.runc.v2 May 17 00:37:46.091683 systemd[1]: Started cri-containerd-62adaa31e738810fb2ff9a5a5bbcacbff7fe2ac767e31cf9fbc37c7b652c01b0.scope. May 17 00:37:46.112097 systemd[1]: run-containerd-runc-k8s.io-62adaa31e738810fb2ff9a5a5bbcacbff7fe2ac767e31cf9fbc37c7b652c01b0-runc.xeYWm6.mount: Deactivated successfully. 
May 17 00:37:46.119087 env[1437]: time="2025-05-17T00:37:46.119013787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:37:46.119255 env[1437]: time="2025-05-17T00:37:46.119099185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:37:46.119255 env[1437]: time="2025-05-17T00:37:46.119128385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:37:46.119491 env[1437]: time="2025-05-17T00:37:46.119434878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2025c239a4df978acdd51c387f28a0a7735d6bbb2ad1d841d6d4a772968391dd pid=3607 runtime=io.containerd.runc.v2 May 17 00:37:46.146641 systemd[1]: Started cri-containerd-2025c239a4df978acdd51c387f28a0a7735d6bbb2ad1d841d6d4a772968391dd.scope. 
May 17 00:37:46.204142 env[1437]: time="2025-05-17T00:37:46.204100120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjw5h,Uid:cf11f481-326c-4fbf-b692-c7117209f827,Namespace:kube-system,Attempt:0,} returns sandbox id \"62adaa31e738810fb2ff9a5a5bbcacbff7fe2ac767e31cf9fbc37c7b652c01b0\"" May 17 00:37:46.209658 env[1437]: time="2025-05-17T00:37:46.209624699Z" level=info msg="CreateContainer within sandbox \"62adaa31e738810fb2ff9a5a5bbcacbff7fe2ac767e31cf9fbc37c7b652c01b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:37:46.250457 env[1437]: time="2025-05-17T00:37:46.250408404Z" level=info msg="CreateContainer within sandbox \"62adaa31e738810fb2ff9a5a5bbcacbff7fe2ac767e31cf9fbc37c7b652c01b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"836af956613cd71e0f6ead0ee4d4d38e5cd5761b2064745bbebf84aa50a581ee\"" May 17 00:37:46.253253 env[1437]: time="2025-05-17T00:37:46.251495680Z" level=info msg="StartContainer for \"836af956613cd71e0f6ead0ee4d4d38e5cd5761b2064745bbebf84aa50a581ee\"" May 17 00:37:46.261076 env[1437]: time="2025-05-17T00:37:46.260801976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qdbnz,Uid:4aa58f58-57eb-48bf-91db-c92e6f376b49,Namespace:kube-system,Attempt:0,} returns sandbox id \"2025c239a4df978acdd51c387f28a0a7735d6bbb2ad1d841d6d4a772968391dd\"" May 17 00:37:46.266888 env[1437]: time="2025-05-17T00:37:46.266848943Z" level=info msg="CreateContainer within sandbox \"2025c239a4df978acdd51c387f28a0a7735d6bbb2ad1d841d6d4a772968391dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:37:46.295737 systemd[1]: Started cri-containerd-836af956613cd71e0f6ead0ee4d4d38e5cd5761b2064745bbebf84aa50a581ee.scope. 
May 17 00:37:46.314639 env[1437]: time="2025-05-17T00:37:46.314595495Z" level=info msg="CreateContainer within sandbox \"2025c239a4df978acdd51c387f28a0a7735d6bbb2ad1d841d6d4a772968391dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4669025ef990e06c86f13cd53619143212289f82699520ce75333c2e25118214\"" May 17 00:37:46.315740 env[1437]: time="2025-05-17T00:37:46.315705171Z" level=info msg="StartContainer for \"4669025ef990e06c86f13cd53619143212289f82699520ce75333c2e25118214\"" May 17 00:37:46.344795 env[1437]: time="2025-05-17T00:37:46.344747533Z" level=info msg="StartContainer for \"836af956613cd71e0f6ead0ee4d4d38e5cd5761b2064745bbebf84aa50a581ee\" returns successfully" May 17 00:37:46.354180 systemd[1]: Started cri-containerd-4669025ef990e06c86f13cd53619143212289f82699520ce75333c2e25118214.scope. May 17 00:37:46.400920 env[1437]: time="2025-05-17T00:37:46.400866402Z" level=info msg="StartContainer for \"4669025ef990e06c86f13cd53619143212289f82699520ce75333c2e25118214\" returns successfully" May 17 00:37:47.192893 kubelet[2402]: I0517 00:37:47.192836 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zjw5h" podStartSLOduration=26.19281712 podStartE2EDuration="26.19281712s" podCreationTimestamp="2025-05-17 00:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:37:47.192463127 +0000 UTC m=+30.980115734" watchObservedRunningTime="2025-05-17 00:37:47.19281712 +0000 UTC m=+30.980469727" May 17 00:37:47.193560 kubelet[2402]: I0517 00:37:47.193517 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qdbnz" podStartSLOduration=26.193501805 podStartE2EDuration="26.193501805s" podCreationTimestamp="2025-05-17 00:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-17 00:37:47.181490762 +0000 UTC m=+30.969143269" watchObservedRunningTime="2025-05-17 00:37:47.193501805 +0000 UTC m=+30.981154412" May 17 00:39:46.068884 update_engine[1429]: I0517 00:39:46.068830 1429 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:39:46.068884 update_engine[1429]: I0517 00:39:46.068876 1429 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:39:46.069691 update_engine[1429]: I0517 00:39:46.069061 1429 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:39:46.069856 update_engine[1429]: I0517 00:39:46.069823 1429 omaha_request_params.cc:62] Current group set to lts May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070024 1429 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070040 1429 update_attempter.cc:643] Scheduling an action processor start. 
May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070061 1429 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070101 1429 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070176 1429 omaha_request_action.cc:270] Posting an Omaha request to disabled May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070184 1429 omaha_request_action.cc:271] Request: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: May 17 00:39:46.070462 update_engine[1429]: I0517 00:39:46.070192 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:39:46.072224 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:39:46.155434 update_engine[1429]: I0517 00:39:46.155395 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:39:46.155708 update_engine[1429]: I0517 00:39:46.155681 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:39:46.173532 update_engine[1429]: E0517 00:39:46.173491 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:39:46.173722 update_engine[1429]: I0517 00:39:46.173653 1429 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:39:53.080384 systemd[1]: Started sshd@5-10.200.4.16:22-10.200.16.10:39822.service. 
May 17 00:39:53.669038 sshd[3750]: Accepted publickey for core from 10.200.16.10 port 39822 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:39:53.670683 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:53.675596 systemd[1]: Started session-8.scope. May 17 00:39:53.676191 systemd-logind[1427]: New session 8 of user core. May 17 00:39:54.156462 sshd[3750]: pam_unix(sshd:session): session closed for user core May 17 00:39:54.159599 systemd[1]: sshd@5-10.200.4.16:22-10.200.16.10:39822.service: Deactivated successfully. May 17 00:39:54.160551 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:39:54.161275 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. May 17 00:39:54.162232 systemd-logind[1427]: Removed session 8. May 17 00:39:55.976415 update_engine[1429]: I0517 00:39:55.976264 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:39:55.977009 update_engine[1429]: I0517 00:39:55.976846 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:39:55.977121 update_engine[1429]: I0517 00:39:55.977092 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:39:55.999653 update_engine[1429]: E0517 00:39:55.999616 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:39:55.999812 update_engine[1429]: I0517 00:39:55.999745 1429 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:39:59.255990 systemd[1]: Started sshd@6-10.200.4.16:22-10.200.16.10:53938.service. May 17 00:39:59.839799 sshd[3764]: Accepted publickey for core from 10.200.16.10 port 53938 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:39:59.841216 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:59.846467 systemd[1]: Started session-9.scope. 
May 17 00:39:59.846923 systemd-logind[1427]: New session 9 of user core. May 17 00:40:00.329973 sshd[3764]: pam_unix(sshd:session): session closed for user core May 17 00:40:00.332715 systemd[1]: sshd@6-10.200.4.16:22-10.200.16.10:53938.service: Deactivated successfully. May 17 00:40:00.333693 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:40:00.334395 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. May 17 00:40:00.335184 systemd-logind[1427]: Removed session 9. May 17 00:40:05.433709 systemd[1]: Started sshd@7-10.200.4.16:22-10.200.16.10:53946.service. May 17 00:40:05.980412 update_engine[1429]: I0517 00:40:05.980335 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:40:05.980865 update_engine[1429]: I0517 00:40:05.980660 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:40:05.980933 update_engine[1429]: I0517 00:40:05.980913 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:40:05.994141 update_engine[1429]: E0517 00:40:05.994103 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:40:05.994277 update_engine[1429]: I0517 00:40:05.994212 1429 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:40:06.026698 sshd[3776]: Accepted publickey for core from 10.200.16.10 port 53946 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:06.028438 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:06.034306 systemd[1]: Started session-10.scope. May 17 00:40:06.035214 systemd-logind[1427]: New session 10 of user core. May 17 00:40:06.509593 sshd[3776]: pam_unix(sshd:session): session closed for user core May 17 00:40:06.512787 systemd[1]: sshd@7-10.200.4.16:22-10.200.16.10:53946.service: Deactivated successfully. May 17 00:40:06.513657 systemd[1]: session-10.scope: Deactivated successfully. 
May 17 00:40:06.514534 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. May 17 00:40:06.515326 systemd-logind[1427]: Removed session 10. May 17 00:40:11.608622 systemd[1]: Started sshd@8-10.200.4.16:22-10.200.16.10:50076.service. May 17 00:40:12.190829 sshd[3788]: Accepted publickey for core from 10.200.16.10 port 50076 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:12.192462 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:12.197987 systemd[1]: Started session-11.scope. May 17 00:40:12.198709 systemd-logind[1427]: New session 11 of user core. May 17 00:40:12.682302 sshd[3788]: pam_unix(sshd:session): session closed for user core May 17 00:40:12.688133 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. May 17 00:40:12.689730 systemd[1]: sshd@8-10.200.4.16:22-10.200.16.10:50076.service: Deactivated successfully. May 17 00:40:12.690909 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:40:12.691616 systemd-logind[1427]: Removed session 11. May 17 00:40:15.978009 update_engine[1429]: I0517 00:40:15.977949 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:40:15.978528 update_engine[1429]: I0517 00:40:15.978276 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:40:15.978644 update_engine[1429]: I0517 00:40:15.978577 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:40:15.997456 update_engine[1429]: E0517 00:40:15.997409 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:40:15.997618 update_engine[1429]: I0517 00:40:15.997520 1429 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:40:15.997618 update_engine[1429]: I0517 00:40:15.997531 1429 omaha_request_action.cc:621] Omaha request response: May 17 00:40:15.997712 update_engine[1429]: E0517 00:40:15.997623 1429 omaha_request_action.cc:640] Omaha request network transfer failed. May 17 00:40:15.997712 update_engine[1429]: I0517 00:40:15.997656 1429 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 00:40:15.997712 update_engine[1429]: I0517 00:40:15.997661 1429 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:40:15.997712 update_engine[1429]: I0517 00:40:15.997666 1429 update_attempter.cc:306] Processing Done. May 17 00:40:15.997712 update_engine[1429]: E0517 00:40:15.997680 1429 update_attempter.cc:619] Update failed. May 17 00:40:15.997712 update_engine[1429]: I0517 00:40:15.997686 1429 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:40:15.997712 update_engine[1429]: I0517 00:40:15.997691 1429 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:40:15.997712 update_engine[1429]: I0517 00:40:15.997696 1429 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 17 00:40:15.998003 update_engine[1429]: I0517 00:40:15.997781 1429 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:40:15.998003 update_engine[1429]: I0517 00:40:15.997804 1429 omaha_request_action.cc:270] Posting an Omaha request to disabled May 17 00:40:15.998003 update_engine[1429]: I0517 00:40:15.997810 1429 omaha_request_action.cc:271] Request: May 17 00:40:15.998003 update_engine[1429]: May 17 00:40:15.998003 update_engine[1429]: May 17 00:40:15.998003 update_engine[1429]: May 17 00:40:15.998003 update_engine[1429]: May 17 00:40:15.998003 update_engine[1429]: May 17 00:40:15.998003 update_engine[1429]: May 17 00:40:15.998003 update_engine[1429]: I0517 00:40:15.997817 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:40:15.998003 update_engine[1429]: I0517 00:40:15.997994 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:40:15.998412 update_engine[1429]: I0517 00:40:15.998158 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:40:15.998553 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:40:16.006168 update_engine[1429]: E0517 00:40:16.006139 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006232 1429 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006242 1429 omaha_request_action.cc:621] Omaha request response: May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006249 1429 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006254 1429 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006258 1429 update_attempter.cc:306] Processing Done. May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006263 1429 update_attempter.cc:310] Error event sent. May 17 00:40:16.006277 update_engine[1429]: I0517 00:40:16.006273 1429 update_check_scheduler.cc:74] Next update check in 49m23s May 17 00:40:16.006657 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:40:17.782569 systemd[1]: Started sshd@9-10.200.4.16:22-10.200.16.10:50084.service. May 17 00:40:18.379685 sshd[3803]: Accepted publickey for core from 10.200.16.10 port 50084 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:18.381453 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:18.385412 systemd-logind[1427]: New session 12 of user core. May 17 00:40:18.386506 systemd[1]: Started session-12.scope. 
May 17 00:40:18.861702 sshd[3803]: pam_unix(sshd:session): session closed for user core May 17 00:40:18.865551 systemd[1]: sshd@9-10.200.4.16:22-10.200.16.10:50084.service: Deactivated successfully. May 17 00:40:18.866731 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:40:18.867638 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. May 17 00:40:18.868643 systemd-logind[1427]: Removed session 12. May 17 00:40:18.960911 systemd[1]: Started sshd@10-10.200.4.16:22-10.200.16.10:57570.service. May 17 00:40:19.547080 sshd[3817]: Accepted publickey for core from 10.200.16.10 port 57570 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:19.548677 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:19.554076 systemd[1]: Started session-13.scope. May 17 00:40:19.554719 systemd-logind[1427]: New session 13 of user core. May 17 00:40:20.063250 sshd[3817]: pam_unix(sshd:session): session closed for user core May 17 00:40:20.066600 systemd[1]: sshd@10-10.200.4.16:22-10.200.16.10:57570.service: Deactivated successfully. May 17 00:40:20.067744 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:40:20.068664 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. May 17 00:40:20.069735 systemd-logind[1427]: Removed session 13. May 17 00:40:20.164337 systemd[1]: Started sshd@11-10.200.4.16:22-10.200.16.10:57586.service. May 17 00:40:20.750565 sshd[3827]: Accepted publickey for core from 10.200.16.10 port 57586 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:20.752027 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:20.756862 systemd-logind[1427]: New session 14 of user core. May 17 00:40:20.757403 systemd[1]: Started session-14.scope. 
May 17 00:40:21.249540 sshd[3827]: pam_unix(sshd:session): session closed for user core May 17 00:40:21.252726 systemd[1]: sshd@11-10.200.4.16:22-10.200.16.10:57586.service: Deactivated successfully. May 17 00:40:21.253661 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:40:21.254302 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. May 17 00:40:21.255065 systemd-logind[1427]: Removed session 14. May 17 00:40:26.355494 systemd[1]: Started sshd@12-10.200.4.16:22-10.200.16.10:57596.service. May 17 00:40:26.942379 sshd[3842]: Accepted publickey for core from 10.200.16.10 port 57596 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:26.944108 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:26.950096 systemd[1]: Started session-15.scope. May 17 00:40:26.950779 systemd-logind[1427]: New session 15 of user core. May 17 00:40:27.424649 sshd[3842]: pam_unix(sshd:session): session closed for user core May 17 00:40:27.427648 systemd[1]: sshd@12-10.200.4.16:22-10.200.16.10:57596.service: Deactivated successfully. May 17 00:40:27.428601 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:40:27.429344 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. May 17 00:40:27.430230 systemd-logind[1427]: Removed session 15. May 17 00:40:27.525088 systemd[1]: Started sshd@13-10.200.4.16:22-10.200.16.10:57608.service. May 17 00:40:28.116860 sshd[3854]: Accepted publickey for core from 10.200.16.10 port 57608 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:28.118446 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:28.123349 systemd[1]: Started session-16.scope. May 17 00:40:28.123968 systemd-logind[1427]: New session 16 of user core. 
May 17 00:40:28.617607 sshd[3854]: pam_unix(sshd:session): session closed for user core May 17 00:40:28.620489 systemd[1]: sshd@13-10.200.4.16:22-10.200.16.10:57608.service: Deactivated successfully. May 17 00:40:28.621675 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. May 17 00:40:28.621768 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:40:28.623508 systemd-logind[1427]: Removed session 16. May 17 00:40:28.716279 systemd[1]: Started sshd@14-10.200.4.16:22-10.200.16.10:46388.service. May 17 00:40:29.302979 sshd[3863]: Accepted publickey for core from 10.200.16.10 port 46388 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:29.304539 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:29.309415 systemd-logind[1427]: New session 17 of user core. May 17 00:40:29.309920 systemd[1]: Started session-17.scope. May 17 00:40:30.589141 sshd[3863]: pam_unix(sshd:session): session closed for user core May 17 00:40:30.592429 systemd[1]: sshd@14-10.200.4.16:22-10.200.16.10:46388.service: Deactivated successfully. May 17 00:40:30.593507 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:40:30.594208 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. May 17 00:40:30.595148 systemd-logind[1427]: Removed session 17. May 17 00:40:30.687858 systemd[1]: Started sshd@15-10.200.4.16:22-10.200.16.10:46402.service. May 17 00:40:31.276703 sshd[3881]: Accepted publickey for core from 10.200.16.10 port 46402 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:31.278276 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:31.283190 systemd[1]: Started session-18.scope. May 17 00:40:31.283837 systemd-logind[1427]: New session 18 of user core. 
May 17 00:40:31.861013 sshd[3881]: pam_unix(sshd:session): session closed for user core May 17 00:40:31.864187 systemd[1]: sshd@15-10.200.4.16:22-10.200.16.10:46402.service: Deactivated successfully. May 17 00:40:31.865036 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:40:31.866033 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. May 17 00:40:31.866895 systemd-logind[1427]: Removed session 18. May 17 00:40:31.960631 systemd[1]: Started sshd@16-10.200.4.16:22-10.200.16.10:46416.service. May 17 00:40:32.547711 sshd[3890]: Accepted publickey for core from 10.200.16.10 port 46416 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:32.549433 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:32.555422 systemd-logind[1427]: New session 19 of user core. May 17 00:40:32.556038 systemd[1]: Started session-19.scope. May 17 00:40:33.021776 sshd[3890]: pam_unix(sshd:session): session closed for user core May 17 00:40:33.025073 systemd[1]: sshd@16-10.200.4.16:22-10.200.16.10:46416.service: Deactivated successfully. May 17 00:40:33.026066 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:40:33.027493 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. May 17 00:40:33.028834 systemd-logind[1427]: Removed session 19. May 17 00:40:38.122444 systemd[1]: Started sshd@17-10.200.4.16:22-10.200.16.10:46422.service. May 17 00:40:38.712495 sshd[3904]: Accepted publickey for core from 10.200.16.10 port 46422 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:38.714088 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:38.719017 systemd-logind[1427]: New session 20 of user core. May 17 00:40:38.719550 systemd[1]: Started session-20.scope. 
May 17 00:40:39.191006 sshd[3904]: pam_unix(sshd:session): session closed for user core May 17 00:40:39.194322 systemd[1]: sshd@17-10.200.4.16:22-10.200.16.10:46422.service: Deactivated successfully. May 17 00:40:39.195498 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:40:39.196235 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. May 17 00:40:39.197119 systemd-logind[1427]: Removed session 20. May 17 00:40:44.292686 systemd[1]: Started sshd@18-10.200.4.16:22-10.200.16.10:56624.service. May 17 00:40:44.892432 sshd[3916]: Accepted publickey for core from 10.200.16.10 port 56624 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:44.894231 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:44.899978 systemd[1]: Started session-21.scope. May 17 00:40:44.900613 systemd-logind[1427]: New session 21 of user core. May 17 00:40:45.374271 sshd[3916]: pam_unix(sshd:session): session closed for user core May 17 00:40:45.377929 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. May 17 00:40:45.378193 systemd[1]: sshd@18-10.200.4.16:22-10.200.16.10:56624.service: Deactivated successfully. May 17 00:40:45.379316 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:40:45.380399 systemd-logind[1427]: Removed session 21. May 17 00:40:50.474421 systemd[1]: Started sshd@19-10.200.4.16:22-10.200.16.10:58370.service. May 17 00:40:51.065030 sshd[3929]: Accepted publickey for core from 10.200.16.10 port 58370 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:51.066480 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:51.071180 systemd-logind[1427]: New session 22 of user core. May 17 00:40:51.071851 systemd[1]: Started session-22.scope. 
May 17 00:40:51.548125 sshd[3929]: pam_unix(sshd:session): session closed for user core May 17 00:40:51.551962 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit. May 17 00:40:51.552222 systemd[1]: sshd@19-10.200.4.16:22-10.200.16.10:58370.service: Deactivated successfully. May 17 00:40:51.553162 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:40:51.554035 systemd-logind[1427]: Removed session 22. May 17 00:40:51.646196 systemd[1]: Started sshd@20-10.200.4.16:22-10.200.16.10:58380.service. May 17 00:40:52.230913 sshd[3941]: Accepted publickey for core from 10.200.16.10 port 58380 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:52.232450 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:52.237705 systemd[1]: Started session-23.scope. May 17 00:40:52.238149 systemd-logind[1427]: New session 23 of user core. May 17 00:40:53.865230 env[1437]: time="2025-05-17T00:40:53.864829612Z" level=info msg="StopContainer for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" with timeout 30 (s)" May 17 00:40:53.866926 env[1437]: time="2025-05-17T00:40:53.866885847Z" level=info msg="Stop container \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" with signal terminated" May 17 00:40:53.874764 systemd[1]: run-containerd-runc-k8s.io-830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9-runc.catvqp.mount: Deactivated successfully. May 17 00:40:53.891529 systemd[1]: cri-containerd-bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242.scope: Deactivated successfully. 
May 17 00:40:53.896203 env[1437]: time="2025-05-17T00:40:53.896144049Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:40:53.905625 env[1437]: time="2025-05-17T00:40:53.905585411Z" level=info msg="StopContainer for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" with timeout 2 (s)" May 17 00:40:53.906043 env[1437]: time="2025-05-17T00:40:53.906007918Z" level=info msg="Stop container \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" with signal terminated" May 17 00:40:53.918103 systemd-networkd[1590]: lxc_health: Link DOWN May 17 00:40:53.918114 systemd-networkd[1590]: lxc_health: Lost carrier May 17 00:40:53.922754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242-rootfs.mount: Deactivated successfully. May 17 00:40:53.939682 systemd[1]: cri-containerd-830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9.scope: Deactivated successfully. May 17 00:40:53.939979 systemd[1]: cri-containerd-830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9.scope: Consumed 7.307s CPU time. May 17 00:40:53.960529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9-rootfs.mount: Deactivated successfully. 
May 17 00:40:53.971229 env[1437]: time="2025-05-17T00:40:53.971113834Z" level=info msg="shim disconnected" id=bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242 May 17 00:40:53.971458 env[1437]: time="2025-05-17T00:40:53.971230836Z" level=warning msg="cleaning up after shim disconnected" id=bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242 namespace=k8s.io May 17 00:40:53.971458 env[1437]: time="2025-05-17T00:40:53.971251936Z" level=info msg="cleaning up dead shim" May 17 00:40:53.976305 env[1437]: time="2025-05-17T00:40:53.976261922Z" level=info msg="shim disconnected" id=830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9 May 17 00:40:53.976465 env[1437]: time="2025-05-17T00:40:53.976445326Z" level=warning msg="cleaning up after shim disconnected" id=830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9 namespace=k8s.io May 17 00:40:53.976543 env[1437]: time="2025-05-17T00:40:53.976530927Z" level=info msg="cleaning up dead shim" May 17 00:40:53.982053 env[1437]: time="2025-05-17T00:40:53.982015721Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" May 17 00:40:53.986028 env[1437]: time="2025-05-17T00:40:53.985990689Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4017 runtime=io.containerd.runc.v2\n" May 17 00:40:53.986476 env[1437]: time="2025-05-17T00:40:53.986443297Z" level=info msg="StopContainer for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" returns successfully" May 17 00:40:53.987181 env[1437]: time="2025-05-17T00:40:53.987136309Z" level=info msg="StopPodSandbox for \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\"" May 17 00:40:53.987324 env[1437]: time="2025-05-17T00:40:53.987308812Z" level=info msg="Container to stop 
\"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:53.994432 systemd[1]: cri-containerd-82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad.scope: Deactivated successfully. May 17 00:40:53.996115 env[1437]: time="2025-05-17T00:40:53.995342149Z" level=info msg="StopContainer for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" returns successfully" May 17 00:40:53.996115 env[1437]: time="2025-05-17T00:40:53.996017161Z" level=info msg="StopPodSandbox for \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\"" May 17 00:40:53.996115 env[1437]: time="2025-05-17T00:40:53.996100062Z" level=info msg="Container to stop \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:53.997840 env[1437]: time="2025-05-17T00:40:53.996118663Z" level=info msg="Container to stop \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:53.997840 env[1437]: time="2025-05-17T00:40:53.996134963Z" level=info msg="Container to stop \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:53.997840 env[1437]: time="2025-05-17T00:40:53.996206964Z" level=info msg="Container to stop \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:53.997840 env[1437]: time="2025-05-17T00:40:53.996558270Z" level=info msg="Container to stop \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:54.004910 systemd[1]: 
cri-containerd-2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24.scope: Deactivated successfully. May 17 00:40:54.041077 env[1437]: time="2025-05-17T00:40:54.041020823Z" level=info msg="shim disconnected" id=82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad May 17 00:40:54.041463 env[1437]: time="2025-05-17T00:40:54.041435630Z" level=warning msg="cleaning up after shim disconnected" id=82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad namespace=k8s.io May 17 00:40:54.041610 env[1437]: time="2025-05-17T00:40:54.041589733Z" level=info msg="cleaning up dead shim" May 17 00:40:54.042081 env[1437]: time="2025-05-17T00:40:54.041393530Z" level=info msg="shim disconnected" id=2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24 May 17 00:40:54.042181 env[1437]: time="2025-05-17T00:40:54.042088241Z" level=warning msg="cleaning up after shim disconnected" id=2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24 namespace=k8s.io May 17 00:40:54.042181 env[1437]: time="2025-05-17T00:40:54.042102742Z" level=info msg="cleaning up dead shim" May 17 00:40:54.053041 env[1437]: time="2025-05-17T00:40:54.053009426Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4073 runtime=io.containerd.runc.v2\n" May 17 00:40:54.054565 env[1437]: time="2025-05-17T00:40:54.054527252Z" level=info msg="TearDown network for sandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" successfully" May 17 00:40:54.054649 env[1437]: time="2025-05-17T00:40:54.054564552Z" level=info msg="StopPodSandbox for \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" returns successfully" May 17 00:40:54.056702 env[1437]: time="2025-05-17T00:40:54.056327382Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4074 runtime=io.containerd.runc.v2\n" May 17 
00:40:54.057194 env[1437]: time="2025-05-17T00:40:54.057074495Z" level=info msg="TearDown network for sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" successfully" May 17 00:40:54.057194 env[1437]: time="2025-05-17T00:40:54.057105995Z" level=info msg="StopPodSandbox for \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" returns successfully" May 17 00:40:54.158054 kubelet[2402]: I0517 00:40:54.157928 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-bpf-maps\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.158054 kubelet[2402]: I0517 00:40:54.158013 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-kernel\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.158054 kubelet[2402]: I0517 00:40:54.158048 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fda90ecb-4af4-4f12-be29-d46469590d8a-cilium-config-path\") pod \"fda90ecb-4af4-4f12-be29-d46469590d8a\" (UID: \"fda90ecb-4af4-4f12-be29-d46469590d8a\") " May 17 00:40:54.158851 kubelet[2402]: I0517 00:40:54.158096 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:54.158851 kubelet[2402]: I0517 00:40:54.158164 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-clustermesh-secrets\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.160948 kubelet[2402]: I0517 00:40:54.160912 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:54.162107 kubelet[2402]: I0517 00:40:54.160980 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:54.162107 kubelet[2402]: I0517 00:40:54.162057 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda90ecb-4af4-4f12-be29-d46469590d8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fda90ecb-4af4-4f12-be29-d46469590d8a" (UID: "fda90ecb-4af4-4f12-be29-d46469590d8a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:40:54.162281 kubelet[2402]: I0517 00:40:54.160950 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-etc-cni-netd\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162281 kubelet[2402]: I0517 00:40:54.162154 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-lib-modules\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162281 kubelet[2402]: I0517 00:40:54.162181 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-cgroup\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162281 kubelet[2402]: I0517 00:40:54.162205 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hostproc\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162281 kubelet[2402]: I0517 00:40:54.162230 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-xtables-lock\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162281 kubelet[2402]: I0517 00:40:54.162261 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x552r\" (UniqueName: 
\"kubernetes.io/projected/fda90ecb-4af4-4f12-be29-d46469590d8a-kube-api-access-x552r\") pod \"fda90ecb-4af4-4f12-be29-d46469590d8a\" (UID: \"fda90ecb-4af4-4f12-be29-d46469590d8a\") " May 17 00:40:54.162595 kubelet[2402]: I0517 00:40:54.162293 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-config-path\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162595 kubelet[2402]: I0517 00:40:54.162320 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7r4\" (UniqueName: \"kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-kube-api-access-td7r4\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162595 kubelet[2402]: I0517 00:40:54.162347 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-net\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162595 kubelet[2402]: I0517 00:40:54.162411 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-run\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162595 kubelet[2402]: I0517 00:40:54.162435 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cni-path\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") " May 17 00:40:54.162595 kubelet[2402]: I0517 
00:40:54.162484 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hubble-tls\") pod \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\" (UID: \"43ba3dcf-0a86-47c1-b9cc-1f43dea36111\") "
May 17 00:40:54.162889 kubelet[2402]: I0517 00:40:54.162561 2402 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-bpf-maps\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.162889 kubelet[2402]: I0517 00:40:54.162580 2402 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.162889 kubelet[2402]: I0517 00:40:54.162598 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fda90ecb-4af4-4f12-be29-d46469590d8a-cilium-config-path\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.162889 kubelet[2402]: I0517 00:40:54.162611 2402 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-etc-cni-netd\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.166658 kubelet[2402]: I0517 00:40:54.166606 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:40:54.167274 kubelet[2402]: I0517 00:40:54.167245 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.167491 kubelet[2402]: I0517 00:40:54.167469 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.167643 kubelet[2402]: I0517 00:40:54.167623 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cni-path" (OuterVolumeSpecName: "cni-path") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.167836 kubelet[2402]: I0517 00:40:54.167817 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:40:54.167972 kubelet[2402]: I0517 00:40:54.167951 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.168102 kubelet[2402]: I0517 00:40:54.168085 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.168223 kubelet[2402]: I0517 00:40:54.168208 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hostproc" (OuterVolumeSpecName: "hostproc") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.168399 kubelet[2402]: I0517 00:40:54.168383 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:40:54.170811 kubelet[2402]: I0517 00:40:54.170775 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:40:54.171976 kubelet[2402]: I0517 00:40:54.171939 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-kube-api-access-td7r4" (OuterVolumeSpecName: "kube-api-access-td7r4") pod "43ba3dcf-0a86-47c1-b9cc-1f43dea36111" (UID: "43ba3dcf-0a86-47c1-b9cc-1f43dea36111"). InnerVolumeSpecName "kube-api-access-td7r4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:40:54.172602 kubelet[2402]: I0517 00:40:54.172565 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda90ecb-4af4-4f12-be29-d46469590d8a-kube-api-access-x552r" (OuterVolumeSpecName: "kube-api-access-x552r") pod "fda90ecb-4af4-4f12-be29-d46469590d8a" (UID: "fda90ecb-4af4-4f12-be29-d46469590d8a"). InnerVolumeSpecName "kube-api-access-x552r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:40:54.262862 kubelet[2402]: I0517 00:40:54.262798 2402 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-host-proc-sys-net\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.262862 kubelet[2402]: I0517 00:40:54.262840 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-config-path\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.262922 2402 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-td7r4\" (UniqueName: \"kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-kube-api-access-td7r4\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.262944 2402 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hubble-tls\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.262958 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-run\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.262969 2402 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cni-path\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.262981 2402 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-lib-modules\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.262993 2402 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-clustermesh-secrets\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.263005 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-cilium-cgroup\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263153 kubelet[2402]: I0517 00:40:54.263017 2402 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-hostproc\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263426 kubelet[2402]: I0517 00:40:54.263028 2402 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43ba3dcf-0a86-47c1-b9cc-1f43dea36111-xtables-lock\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.263426 kubelet[2402]: I0517 00:40:54.263040 2402 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x552r\" (UniqueName: \"kubernetes.io/projected/fda90ecb-4af4-4f12-be29-d46469590d8a-kube-api-access-x552r\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\""
May 17 00:40:54.546957 kubelet[2402]: I0517 00:40:54.546840 2402 scope.go:117] "RemoveContainer" containerID="bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242"
May 17 00:40:54.550418 env[1437]: time="2025-05-17T00:40:54.550375736Z" level=info msg="RemoveContainer for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\""
May 17 00:40:54.554451 systemd[1]: Removed slice kubepods-besteffort-podfda90ecb_4af4_4f12_be29_d46469590d8a.slice.
May 17 00:40:54.561920 systemd[1]: Removed slice kubepods-burstable-pod43ba3dcf_0a86_47c1_b9cc_1f43dea36111.slice.
May 17 00:40:54.562033 systemd[1]: kubepods-burstable-pod43ba3dcf_0a86_47c1_b9cc_1f43dea36111.slice: Consumed 7.420s CPU time.
May 17 00:40:54.563936 env[1437]: time="2025-05-17T00:40:54.563040250Z" level=info msg="RemoveContainer for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" returns successfully"
May 17 00:40:54.566450 kubelet[2402]: I0517 00:40:54.564625 2402 scope.go:117] "RemoveContainer" containerID="bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242"
May 17 00:40:54.566450 kubelet[2402]: E0517 00:40:54.565152 2402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\": not found" containerID="bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242"
May 17 00:40:54.566450 kubelet[2402]: I0517 00:40:54.565185 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242"} err="failed to get container status \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\": not found"
May 17 00:40:54.566450 kubelet[2402]: I0517 00:40:54.565272 2402 scope.go:117] "RemoveContainer" containerID="830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9"
May 17 00:40:54.566717 env[1437]: time="2025-05-17T00:40:54.564869881Z" level=error msg="ContainerStatus for \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc8ecef22d6a594c861bc30509352602edf5038e0653d425717ed1db1b7f242\": not found"
May 17 00:40:54.566787 env[1437]: time="2025-05-17T00:40:54.566705012Z" level=info msg="RemoveContainer for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\""
May 17 00:40:54.575115 env[1437]: time="2025-05-17T00:40:54.575082253Z" level=info msg="RemoveContainer for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" returns successfully"
May 17 00:40:54.575256 kubelet[2402]: I0517 00:40:54.575236 2402 scope.go:117] "RemoveContainer" containerID="4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b"
May 17 00:40:54.578088 env[1437]: time="2025-05-17T00:40:54.578017603Z" level=info msg="RemoveContainer for \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\""
May 17 00:40:54.587268 env[1437]: time="2025-05-17T00:40:54.587133357Z" level=info msg="RemoveContainer for \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\" returns successfully"
May 17 00:40:54.588967 kubelet[2402]: I0517 00:40:54.588943 2402 scope.go:117] "RemoveContainer" containerID="a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a"
May 17 00:40:54.590735 env[1437]: time="2025-05-17T00:40:54.590437113Z" level=info msg="RemoveContainer for \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\""
May 17 00:40:54.604082 env[1437]: time="2025-05-17T00:40:54.604037643Z" level=info msg="RemoveContainer for \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\" returns successfully"
May 17 00:40:54.604286 kubelet[2402]: I0517 00:40:54.604260 2402 scope.go:117] "RemoveContainer" containerID="74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733"
May 17 00:40:54.605326 env[1437]: time="2025-05-17T00:40:54.605295164Z" level=info msg="RemoveContainer for \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\""
May 17 00:40:54.612892 env[1437]: time="2025-05-17T00:40:54.612854592Z" level=info msg="RemoveContainer for \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\" returns successfully"
May 17 00:40:54.613070 kubelet[2402]: I0517 00:40:54.613046 2402 scope.go:117] "RemoveContainer" containerID="5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c"
May 17 00:40:54.614083 env[1437]: time="2025-05-17T00:40:54.614053812Z" level=info msg="RemoveContainer for \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\""
May 17 00:40:54.625112 env[1437]: time="2025-05-17T00:40:54.625068399Z" level=info msg="RemoveContainer for \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\" returns successfully"
May 17 00:40:54.625476 kubelet[2402]: I0517 00:40:54.625461 2402 scope.go:117] "RemoveContainer" containerID="830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9"
May 17 00:40:54.625915 env[1437]: time="2025-05-17T00:40:54.625853412Z" level=error msg="ContainerStatus for \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\": not found"
May 17 00:40:54.626619 kubelet[2402]: E0517 00:40:54.626576 2402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\": not found" containerID="830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9"
May 17 00:40:54.627255 kubelet[2402]: I0517 00:40:54.627180 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9"} err="failed to get container status \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"830f79060d08c81698feda3df68431c6219fa5609b898902672784e72b0b30f9\": not found"
May 17 00:40:54.627414 kubelet[2402]: I0517 00:40:54.627398 2402 scope.go:117] "RemoveContainer" containerID="4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b"
May 17 00:40:54.627811 env[1437]: time="2025-05-17T00:40:54.627750544Z" level=error msg="ContainerStatus for \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\": not found"
May 17 00:40:54.628165 kubelet[2402]: E0517 00:40:54.628095 2402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\": not found" containerID="4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b"
May 17 00:40:54.628165 kubelet[2402]: I0517 00:40:54.628141 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b"} err="failed to get container status \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4284c0adfab1f5cf1e05265babbfbee80dcff129f14b1fb3e17836deb705e95b\": not found"
May 17 00:40:54.628165 kubelet[2402]: I0517 00:40:54.628165 2402 scope.go:117] "RemoveContainer" containerID="a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a"
May 17 00:40:54.628644 env[1437]: time="2025-05-17T00:40:54.628596858Z" level=error msg="ContainerStatus for \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\": not found"
May 17 00:40:54.628811 kubelet[2402]: E0517 00:40:54.628791 2402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\": not found" containerID="a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a"
May 17 00:40:54.628897 kubelet[2402]: I0517 00:40:54.628817 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a"} err="failed to get container status \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a882bffc866f8dcf2c1a54592da29260d6ab714aa8d80000a4ed95212dc1381a\": not found"
May 17 00:40:54.628897 kubelet[2402]: I0517 00:40:54.628838 2402 scope.go:117] "RemoveContainer" containerID="74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733"
May 17 00:40:54.629127 env[1437]: time="2025-05-17T00:40:54.629081566Z" level=error msg="ContainerStatus for \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\": not found"
May 17 00:40:54.629294 kubelet[2402]: E0517 00:40:54.629272 2402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\": not found" containerID="74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733"
May 17 00:40:54.629380 kubelet[2402]: I0517 00:40:54.629298 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733"} err="failed to get container status \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\": rpc error: code = NotFound desc = an error occurred when try to find container \"74de53384d24e3935b80a21f35009e3ff38fc1cd84878e841f78f44d62f4c733\": not found"
May 17 00:40:54.629380 kubelet[2402]: I0517 00:40:54.629334 2402 scope.go:117] "RemoveContainer" containerID="5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c"
May 17 00:40:54.629576 env[1437]: time="2025-05-17T00:40:54.629529974Z" level=error msg="ContainerStatus for \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\": not found"
May 17 00:40:54.629724 kubelet[2402]: E0517 00:40:54.629703 2402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\": not found" containerID="5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c"
May 17 00:40:54.629806 kubelet[2402]: I0517 00:40:54.629728 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c"} err="failed to get container status \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f82e942e6f9ccef595702db5f80400d4ce4dcd366b7b9edf30fa6da8ec5ad5c\": not found"
May 17 00:40:54.859481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad-rootfs.mount: Deactivated successfully.
May 17 00:40:54.859600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad-shm.mount: Deactivated successfully.
May 17 00:40:54.859686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24-rootfs.mount: Deactivated successfully.
May 17 00:40:54.859764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24-shm.mount: Deactivated successfully.
May 17 00:40:54.859835 systemd[1]: var-lib-kubelet-pods-fda90ecb\x2d4af4\x2d4f12\x2dbe29\x2dd46469590d8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx552r.mount: Deactivated successfully.
May 17 00:40:54.859910 systemd[1]: var-lib-kubelet-pods-43ba3dcf\x2d0a86\x2d47c1\x2db9cc\x2d1f43dea36111-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtd7r4.mount: Deactivated successfully.
May 17 00:40:54.859988 systemd[1]: var-lib-kubelet-pods-43ba3dcf\x2d0a86\x2d47c1\x2db9cc\x2d1f43dea36111-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:40:54.860227 systemd[1]: var-lib-kubelet-pods-43ba3dcf\x2d0a86\x2d47c1\x2db9cc\x2d1f43dea36111-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:40:54.967808 kubelet[2402]: I0517 00:40:54.967711 2402 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ba3dcf-0a86-47c1-b9cc-1f43dea36111" path="/var/lib/kubelet/pods/43ba3dcf-0a86-47c1-b9cc-1f43dea36111/volumes"
May 17 00:40:54.968783 kubelet[2402]: I0517 00:40:54.968674 2402 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda90ecb-4af4-4f12-be29-d46469590d8a" path="/var/lib/kubelet/pods/fda90ecb-4af4-4f12-be29-d46469590d8a/volumes"
May 17 00:40:55.904770 sshd[3941]: pam_unix(sshd:session): session closed for user core
May 17 00:40:55.908101 systemd[1]: sshd@20-10.200.4.16:22-10.200.16.10:58380.service: Deactivated successfully.
May 17 00:40:55.909286 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:40:55.909319 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit.
May 17 00:40:55.910577 systemd-logind[1427]: Removed session 23.
May 17 00:40:56.015056 systemd[1]: Started sshd@21-10.200.4.16:22-10.200.16.10:58390.service.
May 17 00:40:56.604950 sshd[4107]: Accepted publickey for core from 10.200.16.10 port 58390 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:40:56.606576 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:40:56.611899 systemd-logind[1427]: New session 24 of user core.
May 17 00:40:56.612404 systemd[1]: Started session-24.scope.
May 17 00:40:57.104434 kubelet[2402]: E0517 00:40:57.104399 2402 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:40:57.313850 kubelet[2402]: I0517 00:40:57.313806 2402 memory_manager.go:355] "RemoveStaleState removing state" podUID="43ba3dcf-0a86-47c1-b9cc-1f43dea36111" containerName="cilium-agent"
May 17 00:40:57.313850 kubelet[2402]: I0517 00:40:57.313853 2402 memory_manager.go:355] "RemoveStaleState removing state" podUID="fda90ecb-4af4-4f12-be29-d46469590d8a" containerName="cilium-operator"
May 17 00:40:57.320947 systemd[1]: Created slice kubepods-burstable-podb5f3b6f5_c238_4be1_9a21_a417acbaa62b.slice.
May 17 00:40:57.324643 kubelet[2402]: I0517 00:40:57.324609 2402 status_manager.go:890] "Failed to get status for pod" podUID="b5f3b6f5-c238-4be1-9a21-a417acbaa62b" pod="kube-system/cilium-rvfdz" err="pods \"cilium-rvfdz\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object"
May 17 00:40:57.324856 kubelet[2402]: W0517 00:40:57.324835 2402 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-n-ec5807f93e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object
May 17 00:40:57.325141 kubelet[2402]: E0517 00:40:57.325118 2402 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" logger="UnhandledError"
May 17 00:40:57.325247 kubelet[2402]: W0517 00:40:57.325007 2402 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.7-n-ec5807f93e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object
May 17 00:40:57.325388 kubelet[2402]: E0517 00:40:57.325335 2402 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" logger="UnhandledError"
May 17 00:40:57.325484 kubelet[2402]: W0517 00:40:57.325052 2402 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-n-ec5807f93e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object
May 17 00:40:57.325588 kubelet[2402]: E0517 00:40:57.325563 2402 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" logger="UnhandledError"
May 17 00:40:57.325663 kubelet[2402]: W0517 00:40:57.325080 2402 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-n-ec5807f93e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object
May 17 00:40:57.325745 kubelet[2402]: E0517 00:40:57.325729 2402 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.7-n-ec5807f93e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-ec5807f93e' and this object" logger="UnhandledError"
May 17 00:40:57.369954 sshd[4107]: pam_unix(sshd:session): session closed for user core
May 17 00:40:57.374138 systemd-logind[1427]: Session 24 logged out. Waiting for processes to exit.
May 17 00:40:57.375058 systemd[1]: sshd@21-10.200.4.16:22-10.200.16.10:58390.service: Deactivated successfully.
May 17 00:40:57.375964 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:40:57.377305 systemd-logind[1427]: Removed session 24.
May 17 00:40:57.468735 systemd[1]: Started sshd@22-10.200.4.16:22-10.200.16.10:58400.service.
May 17 00:40:57.481536 kubelet[2402]: I0517 00:40:57.481502 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-etc-cni-netd\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481689 kubelet[2402]: I0517 00:40:57.481544 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-bpf-maps\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481689 kubelet[2402]: I0517 00:40:57.481588 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-kernel\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481689 kubelet[2402]: I0517 00:40:57.481610 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-cgroup\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481689 kubelet[2402]: I0517 00:40:57.481629 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-xtables-lock\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481689 kubelet[2402]: I0517 00:40:57.481649 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-run\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481689 kubelet[2402]: I0517 00:40:57.481667 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hostproc\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481963 kubelet[2402]: I0517 00:40:57.481707 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hubble-tls\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481963 kubelet[2402]: I0517 00:40:57.481732 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-net\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481963 kubelet[2402]: I0517 00:40:57.481760 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-clustermesh-secrets\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481963 kubelet[2402]: I0517 00:40:57.481783 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-config-path\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.481963 kubelet[2402]: I0517 00:40:57.481822 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-ipsec-secrets\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.482159 kubelet[2402]: I0517 00:40:57.481847 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4pj\" (UniqueName: \"kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-kube-api-access-hs4pj\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.482159 kubelet[2402]: I0517 00:40:57.481874 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cni-path\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:57.482159 kubelet[2402]: I0517 00:40:57.481894 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-lib-modules\") pod \"cilium-rvfdz\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " pod="kube-system/cilium-rvfdz"
May 17 00:40:58.062030 sshd[4117]: Accepted publickey for core from 10.200.16.10 port 58400 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw
May 17 00:40:58.063680 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:40:58.071772 systemd[1]: Started session-25.scope.
May 17 00:40:58.072381 systemd-logind[1427]: New session 25 of user core.
May 17 00:40:58.505550 kubelet[2402]: E0517 00:40:58.505491 2402 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-rvfdz" podUID="b5f3b6f5-c238-4be1-9a21-a417acbaa62b"
May 17 00:40:58.582839 kubelet[2402]: E0517 00:40:58.582804 2402 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
May 17 00:40:58.583120 kubelet[2402]: E0517 00:40:58.583090 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-ipsec-secrets podName:b5f3b6f5-c238-4be1-9a21-a417acbaa62b nodeName:}" failed. No retries permitted until 2025-05-17 00:40:59.083070827 +0000 UTC m=+222.870723334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-ipsec-secrets") pod "cilium-rvfdz" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b") : failed to sync secret cache: timed out waiting for the condition
May 17 00:40:58.589225 sshd[4117]: pam_unix(sshd:session): session closed for user core
May 17 00:40:58.592884 systemd[1]: sshd@22-10.200.4.16:22-10.200.16.10:58400.service: Deactivated successfully.
May 17 00:40:58.593713 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:40:58.594201 systemd-logind[1427]: Session 25 logged out. Waiting for processes to exit.
May 17 00:40:58.595103 systemd-logind[1427]: Removed session 25.
May 17 00:40:58.689781 systemd[1]: Started sshd@23-10.200.4.16:22-10.200.16.10:58128.service.
May 17 00:40:58.691912 kubelet[2402]: I0517 00:40:58.691883 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-kernel\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") "
May 17 00:40:58.692096 kubelet[2402]: I0517 00:40:58.692080 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-xtables-lock\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") "
May 17 00:40:58.692228 kubelet[2402]: I0517 00:40:58.692212 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-config-path\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") "
May 17 00:40:58.692335 kubelet[2402]: I0517 00:40:58.692322 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs4pj\" (UniqueName: \"kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-kube-api-access-hs4pj\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") "
May 17 00:40:58.692460 kubelet[2402]: I0517 00:40:58.692445 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-cgroup\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") "
May 17 00:40:58.692573 kubelet[2402]: I0517 00:40:58.692557 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName:
\"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cni-path\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.692667 kubelet[2402]: I0517 00:40:58.692655 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-net\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.692808 kubelet[2402]: I0517 00:40:58.692792 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-lib-modules\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.692919 kubelet[2402]: I0517 00:40:58.692905 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hostproc\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.693027 kubelet[2402]: I0517 00:40:58.693015 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hubble-tls\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.693893 kubelet[2402]: I0517 00:40:58.693108 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-etc-cni-netd\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.694057 kubelet[2402]: I0517 00:40:58.694039 2402 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-bpf-maps\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.694267 kubelet[2402]: I0517 00:40:58.694204 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-run\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.694676 kubelet[2402]: I0517 00:40:58.694410 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-clustermesh-secrets\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:58.696696 kubelet[2402]: I0517 00:40:58.696661 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.696807 kubelet[2402]: I0517 00:40:58.696702 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.696807 kubelet[2402]: I0517 00:40:58.696725 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.698455 kubelet[2402]: I0517 00:40:58.698430 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:40:58.699810 kubelet[2402]: I0517 00:40:58.699787 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.699950 kubelet[2402]: I0517 00:40:58.699936 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.700298 kubelet[2402]: I0517 00:40:58.700280 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.700434 kubelet[2402]: I0517 00:40:58.700419 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.700553 kubelet[2402]: I0517 00:40:58.700539 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.700658 kubelet[2402]: I0517 00:40:58.700643 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.700754 kubelet[2402]: I0517 00:40:58.700741 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:40:58.708559 systemd[1]: var-lib-kubelet-pods-b5f3b6f5\x2dc238\x2d4be1\x2d9a21\x2da417acbaa62b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhs4pj.mount: Deactivated successfully. May 17 00:40:58.714733 kubelet[2402]: I0517 00:40:58.708567 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:40:58.713521 systemd[1]: var-lib-kubelet-pods-b5f3b6f5\x2dc238\x2d4be1\x2d9a21\x2da417acbaa62b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:40:58.715117 kubelet[2402]: I0517 00:40:58.715092 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:40:58.717643 kubelet[2402]: I0517 00:40:58.717616 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-kube-api-access-hs4pj" (OuterVolumeSpecName: "kube-api-access-hs4pj") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "kube-api-access-hs4pj". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:40:58.717921 systemd[1]: var-lib-kubelet-pods-b5f3b6f5\x2dc238\x2d4be1\x2d9a21\x2da417acbaa62b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:40:58.796742 kubelet[2402]: I0517 00:40:58.796334 2402 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.796742 kubelet[2402]: I0517 00:40:58.796456 2402 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-xtables-lock\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.796742 kubelet[2402]: I0517 00:40:58.796496 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-config-path\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.796742 kubelet[2402]: I0517 00:40:58.796713 2402 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hs4pj\" (UniqueName: \"kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-kube-api-access-hs4pj\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.796779 2402 reconciler_common.go:299] "Volume detached for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-cgroup\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.796817 2402 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cni-path\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.796849 2402 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-host-proc-sys-net\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.796893 2402 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-lib-modules\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.796927 2402 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hostproc\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.796960 2402 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-hubble-tls\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.797040 2402 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-etc-cni-netd\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.797842 kubelet[2402]: I0517 00:40:58.797078 2402 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-bpf-maps\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.798254 kubelet[2402]: I0517 00:40:58.797113 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-run\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.798254 kubelet[2402]: I0517 00:40:58.797146 2402 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-clustermesh-secrets\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:58.969873 systemd[1]: Removed slice kubepods-burstable-podb5f3b6f5_c238_4be1_9a21_a417acbaa62b.slice. May 17 00:40:59.296071 sshd[4132]: Accepted publickey for core from 10.200.16.10 port 58128 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:40:59.297590 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:59.303379 kubelet[2402]: I0517 00:40:59.301182 2402 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-ipsec-secrets\") pod \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\" (UID: \"b5f3b6f5-c238-4be1-9a21-a417acbaa62b\") " May 17 00:40:59.302847 systemd[1]: Started session-26.scope. May 17 00:40:59.303809 systemd-logind[1427]: New session 26 of user core. May 17 00:40:59.307770 systemd[1]: var-lib-kubelet-pods-b5f3b6f5\x2dc238\x2d4be1\x2d9a21\x2da417acbaa62b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 17 00:40:59.308619 kubelet[2402]: I0517 00:40:59.308321 2402 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b5f3b6f5-c238-4be1-9a21-a417acbaa62b" (UID: "b5f3b6f5-c238-4be1-9a21-a417acbaa62b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:40:59.401782 kubelet[2402]: I0517 00:40:59.401732 2402 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5f3b6f5-c238-4be1-9a21-a417acbaa62b-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-ec5807f93e\" DevicePath \"\"" May 17 00:40:59.615738 systemd[1]: Created slice kubepods-burstable-pod9d569a60_b864_4319_bf1c_197a4a749429.slice. May 17 00:40:59.704443 kubelet[2402]: I0517 00:40:59.704397 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-bpf-maps\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.704900 kubelet[2402]: I0517 00:40:59.704458 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-host-proc-sys-kernel\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.704900 kubelet[2402]: I0517 00:40:59.704481 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-etc-cni-netd\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 
00:40:59.704900 kubelet[2402]: I0517 00:40:59.704500 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d569a60-b864-4319-bf1c-197a4a749429-clustermesh-secrets\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.704900 kubelet[2402]: I0517 00:40:59.704537 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-cilium-cgroup\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.704900 kubelet[2402]: I0517 00:40:59.704560 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-cilium-run\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705080 kubelet[2402]: I0517 00:40:59.704584 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgc7b\" (UniqueName: \"kubernetes.io/projected/9d569a60-b864-4319-bf1c-197a4a749429-kube-api-access-qgc7b\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705080 kubelet[2402]: I0517 00:40:59.704625 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-lib-modules\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705080 kubelet[2402]: I0517 00:40:59.704646 2402 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d569a60-b864-4319-bf1c-197a4a749429-cilium-ipsec-secrets\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705080 kubelet[2402]: I0517 00:40:59.704666 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-cni-path\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705080 kubelet[2402]: I0517 00:40:59.704702 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-xtables-lock\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705080 kubelet[2402]: I0517 00:40:59.704727 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-host-proc-sys-net\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705224 kubelet[2402]: I0517 00:40:59.704750 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d569a60-b864-4319-bf1c-197a4a749429-hostproc\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705224 kubelet[2402]: I0517 00:40:59.704784 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/9d569a60-b864-4319-bf1c-197a4a749429-hubble-tls\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.705224 kubelet[2402]: I0517 00:40:59.704806 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d569a60-b864-4319-bf1c-197a4a749429-cilium-config-path\") pod \"cilium-pckgs\" (UID: \"9d569a60-b864-4319-bf1c-197a4a749429\") " pod="kube-system/cilium-pckgs" May 17 00:40:59.924542 env[1437]: time="2025-05-17T00:40:59.924423096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pckgs,Uid:9d569a60-b864-4319-bf1c-197a4a749429,Namespace:kube-system,Attempt:0,}" May 17 00:40:59.959034 env[1437]: time="2025-05-17T00:40:59.958960441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:59.959216 env[1437]: time="2025-05-17T00:40:59.959189945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:59.959316 env[1437]: time="2025-05-17T00:40:59.959290747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:59.959769 env[1437]: time="2025-05-17T00:40:59.959722453Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426 pid=4158 runtime=io.containerd.runc.v2 May 17 00:40:59.979860 systemd[1]: Started cri-containerd-8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426.scope. 
May 17 00:41:00.012034 env[1437]: time="2025-05-17T00:41:00.011605770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pckgs,Uid:9d569a60-b864-4319-bf1c-197a4a749429,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\"" May 17 00:41:00.014832 env[1437]: time="2025-05-17T00:41:00.014625417Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:41:00.048141 env[1437]: time="2025-05-17T00:41:00.048093539Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54\"" May 17 00:41:00.048833 env[1437]: time="2025-05-17T00:41:00.048795550Z" level=info msg="StartContainer for \"52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54\"" May 17 00:41:00.065065 systemd[1]: Started cri-containerd-52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54.scope. May 17 00:41:00.099903 env[1437]: time="2025-05-17T00:41:00.099852845Z" level=info msg="StartContainer for \"52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54\" returns successfully" May 17 00:41:00.107485 systemd[1]: cri-containerd-52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54.scope: Deactivated successfully. 
May 17 00:41:00.195734 env[1437]: time="2025-05-17T00:41:00.195135229Z" level=info msg="shim disconnected" id=52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54 May 17 00:41:00.195734 env[1437]: time="2025-05-17T00:41:00.195189030Z" level=warning msg="cleaning up after shim disconnected" id=52e2cea0f2702af8c83582055a4e36fa0eeaeca6464c2796298d83d3a1ff5c54 namespace=k8s.io May 17 00:41:00.195734 env[1437]: time="2025-05-17T00:41:00.195200830Z" level=info msg="cleaning up dead shim" May 17 00:41:00.203341 env[1437]: time="2025-05-17T00:41:00.203300456Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4240 runtime=io.containerd.runc.v2\n" May 17 00:41:00.569723 env[1437]: time="2025-05-17T00:41:00.569679463Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:41:00.598609 env[1437]: time="2025-05-17T00:41:00.598562413Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8\"" May 17 00:41:00.599236 env[1437]: time="2025-05-17T00:41:00.599204123Z" level=info msg="StartContainer for \"693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8\"" May 17 00:41:00.616336 systemd[1]: Started cri-containerd-693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8.scope. May 17 00:41:00.649046 env[1437]: time="2025-05-17T00:41:00.648999898Z" level=info msg="StartContainer for \"693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8\" returns successfully" May 17 00:41:00.653591 systemd[1]: cri-containerd-693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8.scope: Deactivated successfully. 
May 17 00:41:00.684172 env[1437]: time="2025-05-17T00:41:00.684119945Z" level=info msg="shim disconnected" id=693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8 May 17 00:41:00.684414 env[1437]: time="2025-05-17T00:41:00.684228147Z" level=warning msg="cleaning up after shim disconnected" id=693731ce2caf94e40995474a9845224dcf8e87e4a21b6a6a8fc3843be4d193f8 namespace=k8s.io May 17 00:41:00.684414 env[1437]: time="2025-05-17T00:41:00.684242947Z" level=info msg="cleaning up dead shim" May 17 00:41:00.693094 env[1437]: time="2025-05-17T00:41:00.693057584Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4303 runtime=io.containerd.runc.v2\n" May 17 00:41:00.967211 kubelet[2402]: I0517 00:41:00.967084 2402 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f3b6f5-c238-4be1-9a21-a417acbaa62b" path="/var/lib/kubelet/pods/b5f3b6f5-c238-4be1-9a21-a417acbaa62b/volumes" May 17 00:41:01.543871 kubelet[2402]: I0517 00:41:01.543792 2402 setters.go:602] "Node became not ready" node="ci-3510.3.7-n-ec5807f93e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:41:01Z","lastTransitionTime":"2025-05-17T00:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:41:01.573129 env[1437]: time="2025-05-17T00:41:01.573076671Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:41:01.601209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1858278231.mount: Deactivated successfully. 
May 17 00:41:01.607880 env[1437]: time="2025-05-17T00:41:01.607841705Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912\"" May 17 00:41:01.608568 env[1437]: time="2025-05-17T00:41:01.608532016Z" level=info msg="StartContainer for \"e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912\"" May 17 00:41:01.633478 systemd[1]: Started cri-containerd-e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912.scope. May 17 00:41:01.669483 systemd[1]: cri-containerd-e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912.scope: Deactivated successfully. May 17 00:41:01.670505 env[1437]: time="2025-05-17T00:41:01.670462868Z" level=info msg="StartContainer for \"e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912\" returns successfully" May 17 00:41:01.701597 env[1437]: time="2025-05-17T00:41:01.701545745Z" level=info msg="shim disconnected" id=e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912 May 17 00:41:01.701597 env[1437]: time="2025-05-17T00:41:01.701596846Z" level=warning msg="cleaning up after shim disconnected" id=e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912 namespace=k8s.io May 17 00:41:01.701833 env[1437]: time="2025-05-17T00:41:01.701608346Z" level=info msg="cleaning up dead shim" May 17 00:41:01.708760 env[1437]: time="2025-05-17T00:41:01.708720455Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4363 runtime=io.containerd.runc.v2\n" May 17 00:41:01.818128 systemd[1]: run-containerd-runc-k8s.io-e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912-runc.lBXwSA.mount: Deactivated successfully. 
May 17 00:41:01.818265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e272050d3413dab33707168500db8ada15fa79c33d39d5c20febf1d6b85b9912-rootfs.mount: Deactivated successfully.
May 17 00:41:02.106256 kubelet[2402]: E0517 00:41:02.106144 2402 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:41:02.577952 env[1437]: time="2025-05-17T00:41:02.577911092Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:41:02.601833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096774493.mount: Deactivated successfully.
May 17 00:41:02.616304 env[1437]: time="2025-05-17T00:41:02.616256873Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877\""
May 17 00:41:02.616918 env[1437]: time="2025-05-17T00:41:02.616882282Z" level=info msg="StartContainer for \"f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877\""
May 17 00:41:02.636477 systemd[1]: Started cri-containerd-f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877.scope.
May 17 00:41:02.673286 systemd[1]: cri-containerd-f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877.scope: Deactivated successfully.
May 17 00:41:02.675073 env[1437]: time="2025-05-17T00:41:02.675000863Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d569a60_b864_4319_bf1c_197a4a749429.slice/cri-containerd-f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877.scope/memory.events\": no such file or directory"
May 17 00:41:02.679053 env[1437]: time="2025-05-17T00:41:02.679017124Z" level=info msg="StartContainer for \"f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877\" returns successfully"
May 17 00:41:02.709691 env[1437]: time="2025-05-17T00:41:02.709642389Z" level=info msg="shim disconnected" id=f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877
May 17 00:41:02.709691 env[1437]: time="2025-05-17T00:41:02.709689989Z" level=warning msg="cleaning up after shim disconnected" id=f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877 namespace=k8s.io
May 17 00:41:02.709948 env[1437]: time="2025-05-17T00:41:02.709700589Z" level=info msg="cleaning up dead shim"
May 17 00:41:02.717466 env[1437]: time="2025-05-17T00:41:02.717425507Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4420 runtime=io.containerd.runc.v2\n"
May 17 00:41:02.818172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9210c564115b66d10cc8f779f1642e15711233d8ca4762c83ddbeeed9616877-rootfs.mount: Deactivated successfully.
May 17 00:41:03.584476 env[1437]: time="2025-05-17T00:41:03.584331529Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:41:03.627566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686407813.mount: Deactivated successfully.
May 17 00:41:03.647544 env[1437]: time="2025-05-17T00:41:03.647491873Z" level=info msg="CreateContainer within sandbox \"8aa95f39cd85db7e0f68ebcaebb132ebe5f4cb2d42a36887bf0ad921ad8d5426\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6\""
May 17 00:41:03.648168 env[1437]: time="2025-05-17T00:41:03.648131983Z" level=info msg="StartContainer for \"3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6\""
May 17 00:41:03.683567 systemd[1]: Started cri-containerd-3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6.scope.
May 17 00:41:03.739314 env[1437]: time="2025-05-17T00:41:03.739268546Z" level=info msg="StartContainer for \"3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6\" returns successfully"
May 17 00:41:04.120388 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:41:05.851805 systemd[1]: run-containerd-runc-k8s.io-3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6-runc.seurXZ.mount: Deactivated successfully.
May 17 00:41:06.935490 systemd-networkd[1590]: lxc_health: Link UP
May 17 00:41:06.960404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:41:06.963884 systemd-networkd[1590]: lxc_health: Gained carrier
May 17 00:41:08.001603 kubelet[2402]: I0517 00:41:08.001534 2402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pckgs" podStartSLOduration=9.001499413 podStartE2EDuration="9.001499413s" podCreationTimestamp="2025-05-17 00:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:04.60485917 +0000 UTC m=+228.392511677" watchObservedRunningTime="2025-05-17 00:41:08.001499413 +0000 UTC m=+231.789151920"
May 17 00:41:08.032496 systemd[1]: run-containerd-runc-k8s.io-3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6-runc.ZyTYpf.mount: Deactivated successfully.
May 17 00:41:08.752502 systemd-networkd[1590]: lxc_health: Gained IPv6LL
May 17 00:41:10.267265 systemd[1]: run-containerd-runc-k8s.io-3664bc1ac3e645969824edb640377754c42229fbd09cbdd7aef601ddb341eaf6-runc.eDFxIy.mount: Deactivated successfully.
May 17 00:41:14.793667 sshd[4132]: pam_unix(sshd:session): session closed for user core
May 17 00:41:14.797193 systemd[1]: sshd@23-10.200.4.16:22-10.200.16.10:58128.service: Deactivated successfully.
May 17 00:41:14.798142 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:41:14.798844 systemd-logind[1427]: Session 26 logged out. Waiting for processes to exit.
May 17 00:41:14.799723 systemd-logind[1427]: Removed session 26.
May 17 00:41:16.955934 env[1437]: time="2025-05-17T00:41:16.955879256Z" level=info msg="StopPodSandbox for \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\""
May 17 00:41:16.956444 env[1437]: time="2025-05-17T00:41:16.955993157Z" level=info msg="TearDown network for sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" successfully"
May 17 00:41:16.956444 env[1437]: time="2025-05-17T00:41:16.956043058Z" level=info msg="StopPodSandbox for \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" returns successfully"
May 17 00:41:16.956804 env[1437]: time="2025-05-17T00:41:16.956770467Z" level=info msg="RemovePodSandbox for \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\""
May 17 00:41:16.956913 env[1437]: time="2025-05-17T00:41:16.956804267Z" level=info msg="Forcibly stopping sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\""
May 17 00:41:16.956913 env[1437]: time="2025-05-17T00:41:16.956889869Z" level=info msg="TearDown network for sandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" successfully"
May 17 00:41:16.965691 env[1437]: time="2025-05-17T00:41:16.965626378Z" level=info msg="RemovePodSandbox \"2bcfec0a252f804ef63f57bbcfe3bdfaa90c9105f997dd7aaa756c743c4a4f24\" returns successfully"
May 17 00:41:16.966549 env[1437]: time="2025-05-17T00:41:16.966517190Z" level=info msg="StopPodSandbox for \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\""
May 17 00:41:16.966644 env[1437]: time="2025-05-17T00:41:16.966604391Z" level=info msg="TearDown network for sandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" successfully"
May 17 00:41:16.966692 env[1437]: time="2025-05-17T00:41:16.966644691Z" level=info msg="StopPodSandbox for \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" returns successfully"
May 17 00:41:16.967015 env[1437]: time="2025-05-17T00:41:16.966987696Z" level=info msg="RemovePodSandbox for \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\""
May 17 00:41:16.967145 env[1437]: time="2025-05-17T00:41:16.967021896Z" level=info msg="Forcibly stopping sandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\""
May 17 00:41:16.967201 env[1437]: time="2025-05-17T00:41:16.967145898Z" level=info msg="TearDown network for sandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" successfully"
May 17 00:41:16.976284 env[1437]: time="2025-05-17T00:41:16.976167411Z" level=info msg="RemovePodSandbox \"82d0529647c059406ec6eac129d09559664f2e03488a5a602627ae59ffb850ad\" returns successfully"