May 17 00:33:35.024810 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:33:35.024834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:33:35.024850 kernel: BIOS-provided physical RAM map:
May 17 00:33:35.024861 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:33:35.024868 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 17 00:33:35.024873 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 17 00:33:35.024889 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
May 17 00:33:35.024900 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 17 00:33:35.024909 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 17 00:33:35.024918 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 17 00:33:35.024927 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 17 00:33:35.024936 kernel: printk: bootconsole [earlyser0] enabled
May 17 00:33:35.029258 kernel: NX (Execute Disable) protection: active
May 17 00:33:35.029268 kernel: efi: EFI v2.70 by Microsoft
May 17 00:33:35.029284 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
May 17 00:33:35.029291 kernel: random: crng init done
May 17 00:33:35.029298 kernel: SMBIOS 3.1.0 present.
May 17 00:33:35.029307 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 17 00:33:35.029314 kernel: Hypervisor detected: Microsoft Hyper-V
May 17 00:33:35.029322 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 17 00:33:35.029329 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
May 17 00:33:35.029339 kernel: Hyper-V: Nested features: 0x1e0101
May 17 00:33:35.029349 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 17 00:33:35.029358 kernel: Hyper-V: Using hypercall for remote TLB flush
May 17 00:33:35.029365 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 17 00:33:35.029373 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 17 00:33:35.029383 kernel: tsc: Detected 2593.906 MHz processor
May 17 00:33:35.029390 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:33:35.029400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:33:35.029407 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 17 00:33:35.029415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:33:35.029425 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 17 00:33:35.029437 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 17 00:33:35.029448 kernel: Using GB pages for direct mapping
May 17 00:33:35.029458 kernel: Secure boot disabled
May 17 00:33:35.029469 kernel: ACPI: Early table checksum verification disabled
May 17 00:33:35.029479 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 17 00:33:35.029489 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029499 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029507 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 17 00:33:35.029523 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 17 00:33:35.029534 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029542 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029550 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029557 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029567 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029577 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029586 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:33:35.029593 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 17 00:33:35.029602 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 17 00:33:35.029609 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 17 00:33:35.029619 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 17 00:33:35.029625 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 17 00:33:35.029633 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 17 00:33:35.029643 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 17 00:33:35.029654 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 17 00:33:35.029661 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 17 00:33:35.029668 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 17 00:33:35.029677 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:33:35.029685 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:33:35.029694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 17 00:33:35.029700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 17 00:33:35.029710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 17 00:33:35.029721 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 17 00:33:35.029729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 17 00:33:35.029736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 17 00:33:35.029745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 17 00:33:35.029752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 17 00:33:35.029759 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 17 00:33:35.029766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 17 00:33:35.029775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 17 00:33:35.029782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 17 00:33:35.029792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 17 00:33:35.029801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 17 00:33:35.029810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 17 00:33:35.029817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 17 00:33:35.029824 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 17 00:33:35.029833 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 17 00:33:35.029840 kernel: Zone ranges:
May 17 00:33:35.029850 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:33:35.029857 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:33:35.029867 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:33:35.029875 kernel: Movable zone start for each node
May 17 00:33:35.029885 kernel: Early memory node ranges
May 17 00:33:35.029892 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:33:35.029899 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 17 00:33:35.029908 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 17 00:33:35.029917 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:33:35.029925 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 17 00:33:35.029932 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:33:35.029943 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:33:35.029951 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 17 00:33:35.029960 kernel: ACPI: PM-Timer IO Port: 0x408
May 17 00:33:35.029966 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 17 00:33:35.029976 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 17 00:33:35.029983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:33:35.029993 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:33:35.030000 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 17 00:33:35.030008 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:33:35.030018 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 17 00:33:35.030028 kernel: Booting paravirtualized kernel on Hyper-V
May 17 00:33:35.030035 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:33:35.030043 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:33:35.030052 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:33:35.030061 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:33:35.030068 kernel: pcpu-alloc: [0] 0 1
May 17 00:33:35.030074 kernel: Hyper-V: PV spinlocks enabled
May 17 00:33:35.030084 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:33:35.030095 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 17 00:33:35.030102 kernel: Policy zone: Normal
May 17 00:33:35.030110 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:33:35.030120 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:33:35.030128 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 17 00:33:35.030137 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:33:35.030143 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:33:35.030153 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 308056K reserved, 0K cma-reserved)
May 17 00:33:35.030163 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:33:35.030172 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:33:35.030188 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:33:35.030199 kernel: rcu: Hierarchical RCU implementation.
May 17 00:33:35.030208 kernel: rcu: RCU event tracing is enabled.
May 17 00:33:35.030216 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:33:35.030234 kernel: Rude variant of Tasks RCU enabled.
May 17 00:33:35.030244 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:33:35.030251 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:33:35.030261 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:33:35.030268 kernel: Using NULL legacy PIC
May 17 00:33:35.030281 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 17 00:33:35.030288 kernel: Console: colour dummy device 80x25
May 17 00:33:35.030297 kernel: printk: console [tty1] enabled
May 17 00:33:35.030305 kernel: printk: console [ttyS0] enabled
May 17 00:33:35.030315 kernel: printk: bootconsole [earlyser0] disabled
May 17 00:33:35.030325 kernel: ACPI: Core revision 20210730
May 17 00:33:35.030334 kernel: Failed to register legacy timer interrupt
May 17 00:33:35.030342 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:33:35.030351 kernel: Hyper-V: Using IPI hypercalls
May 17 00:33:35.030359 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
May 17 00:33:35.030367 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:33:35.030377 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:33:35.030386 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:33:35.030394 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:33:35.030401 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:33:35.030413 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:33:35.030422 kernel: RETBleed: Vulnerable
May 17 00:33:35.030430 kernel: Speculative Store Bypass: Vulnerable
May 17 00:33:35.030437 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:33:35.030447 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:33:35.030456 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:33:35.030465 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:33:35.030471 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:33:35.030481 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:33:35.030489 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:33:35.030501 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:33:35.030508 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:33:35.030518 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 17 00:33:35.030526 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 17 00:33:35.030535 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 17 00:33:35.030542 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 17 00:33:35.030552 kernel: Freeing SMP alternatives memory: 32K
May 17 00:33:35.030560 kernel: pid_max: default: 32768 minimum: 301
May 17 00:33:35.030569 kernel: LSM: Security Framework initializing
May 17 00:33:35.030577 kernel: SELinux: Initializing.
May 17 00:33:35.030586 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:33:35.030594 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:33:35.030606 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:33:35.030613 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:33:35.030623 kernel: signal: max sigframe size: 3632
May 17 00:33:35.030631 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:33:35.030640 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:33:35.030648 kernel: smp: Bringing up secondary CPUs ...
May 17 00:33:35.030657 kernel: x86: Booting SMP configuration:
May 17 00:33:35.030665 kernel: .... node #0, CPUs: #1
May 17 00:33:35.030676 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 17 00:33:35.030685 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:33:35.030695 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:33:35.030703 kernel: smpboot: Max logical packages: 1
May 17 00:33:35.030713 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
May 17 00:33:35.030720 kernel: devtmpfs: initialized
May 17 00:33:35.030730 kernel: x86/mm: Memory block size: 128MB
May 17 00:33:35.030738 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 17 00:33:35.030748 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:33:35.030755 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:33:35.030767 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:33:35.030776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:33:35.030785 kernel: audit: initializing netlink subsys (disabled)
May 17 00:33:35.030792 kernel: audit: type=2000 audit(1747442014.023:1): state=initialized audit_enabled=0 res=1
May 17 00:33:35.030802 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:33:35.030811 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:33:35.030819 kernel: cpuidle: using governor menu
May 17 00:33:35.030827 kernel: ACPI: bus type PCI registered
May 17 00:33:35.030837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:33:35.030847 kernel: dca service started, version 1.12.1
May 17 00:33:35.030856 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:33:35.030863 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:33:35.030873 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:33:35.030882 kernel: ACPI: Added _OSI(Module Device)
May 17 00:33:35.030891 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:33:35.030898 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:33:35.030908 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:33:35.030916 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:33:35.030927 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:33:35.030934 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:33:35.030944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:33:35.030953 kernel: ACPI: Interpreter enabled
May 17 00:33:35.030962 kernel: ACPI: PM: (supports S0 S5)
May 17 00:33:35.030969 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:33:35.030979 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:33:35.030988 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 17 00:33:35.030997 kernel: iommu: Default domain type: Translated
May 17 00:33:35.031007 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:33:35.031017 kernel: vgaarb: loaded
May 17 00:33:35.031026 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:33:35.031034 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:33:35.031041 kernel: PTP clock support registered
May 17 00:33:35.031051 kernel: Registered efivars operations
May 17 00:33:35.031060 kernel: PCI: Using ACPI for IRQ routing
May 17 00:33:35.031068 kernel: PCI: System does not support PCI
May 17 00:33:35.031075 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 17 00:33:35.031087 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:33:35.031096 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:33:35.031105 kernel: pnp: PnP ACPI init
May 17 00:33:35.031112 kernel: pnp: PnP ACPI: found 3 devices
May 17 00:33:35.031122 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:33:35.031130 kernel: NET: Registered PF_INET protocol family
May 17 00:33:35.031139 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:33:35.031147 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 17 00:33:35.031157 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:33:35.031168 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:33:35.031176 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 00:33:35.031184 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 17 00:33:35.031194 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:33:35.031203 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:33:35.031211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:33:35.031218 kernel: NET: Registered PF_XDP protocol family
May 17 00:33:35.031233 kernel: PCI: CLS 0 bytes, default 64
May 17 00:33:35.031243 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:33:35.031252 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
May 17 00:33:35.031263 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:33:35.031270 kernel: Initialise system trusted keyrings
May 17 00:33:35.031280 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 17 00:33:35.031287 kernel: Key type asymmetric registered
May 17 00:33:35.031296 kernel: Asymmetric key parser 'x509' registered
May 17 00:33:35.031304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:33:35.031314 kernel: io scheduler mq-deadline registered
May 17 00:33:35.031321 kernel: io scheduler kyber registered
May 17 00:33:35.031331 kernel: io scheduler bfq registered
May 17 00:33:35.031341 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:33:35.031350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:33:35.031358 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:33:35.031365 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:33:35.031375 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:33:35.031504 kernel: rtc_cmos 00:02: registered as rtc0
May 17 00:33:35.031590 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:33:34 UTC (1747442014)
May 17 00:33:35.031676 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 17 00:33:35.031686 kernel: intel_pstate: CPU model not supported
May 17 00:33:35.031695 kernel: efifb: probing for efifb
May 17 00:33:35.031703 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:33:35.031714 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:33:35.031721 kernel: efifb: scrolling: redraw
May 17 00:33:35.031730 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:33:35.031738 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:33:35.031750 kernel: fb0: EFI VGA frame buffer device
May 17 00:33:35.031757 kernel: pstore: Registered efi as persistent store backend
May 17 00:33:35.031767 kernel: NET: Registered PF_INET6 protocol family
May 17 00:33:35.031775 kernel: Segment Routing with IPv6
May 17 00:33:35.031785 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:33:35.031793 kernel: NET: Registered PF_PACKET protocol family
May 17 00:33:35.031803 kernel: Key type dns_resolver registered
May 17 00:33:35.031810 kernel: IPI shorthand broadcast: enabled
May 17 00:33:35.031820 kernel: sched_clock: Marking stable (734179500, 20590700)->(931380600, -176610400)
May 17 00:33:35.031828 kernel: registered taskstats version 1
May 17 00:33:35.031841 kernel: Loading compiled-in X.509 certificates
May 17 00:33:35.031849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:33:35.031858 kernel: Key type .fscrypt registered
May 17 00:33:35.031865 kernel: Key type fscrypt-provisioning registered
May 17 00:33:35.031875 kernel: pstore: Using crash dump compression: deflate
May 17 00:33:35.031884 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:33:35.031893 kernel: ima: Allocated hash algorithm: sha1
May 17 00:33:35.031900 kernel: ima: No architecture policies found
May 17 00:33:35.031911 kernel: clk: Disabling unused clocks
May 17 00:33:35.031920 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:33:35.031929 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:33:35.031936 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:33:35.031946 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:33:35.031955 kernel: Run /init as init process
May 17 00:33:35.031964 kernel: with arguments:
May 17 00:33:35.031971 kernel: /init
May 17 00:33:35.031981 kernel: with environment:
May 17 00:33:35.031992 kernel: HOME=/
May 17 00:33:35.032000 kernel: TERM=linux
May 17 00:33:35.032007 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:33:35.032019 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:33:35.032031 systemd[1]: Detected virtualization microsoft.
May 17 00:33:35.032039 systemd[1]: Detected architecture x86-64.
May 17 00:33:35.032048 systemd[1]: Running in initrd.
May 17 00:33:35.032057 systemd[1]: No hostname configured, using default hostname.
May 17 00:33:35.032069 systemd[1]: Hostname set to <localhost>.
May 17 00:33:35.032077 systemd[1]: Initializing machine ID from random generator.
May 17 00:33:35.032087 systemd[1]: Queued start job for default target initrd.target.
May 17 00:33:35.032096 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:33:35.032106 systemd[1]: Reached target cryptsetup.target.
May 17 00:33:35.032113 systemd[1]: Reached target paths.target.
May 17 00:33:35.032124 systemd[1]: Reached target slices.target.
May 17 00:33:35.032132 systemd[1]: Reached target swap.target.
May 17 00:33:35.032144 systemd[1]: Reached target timers.target.
May 17 00:33:35.032154 systemd[1]: Listening on iscsid.socket.
May 17 00:33:35.032163 systemd[1]: Listening on iscsiuio.socket.
May 17 00:33:35.032173 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:33:35.032181 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:33:35.032192 systemd[1]: Listening on systemd-journald.socket.
May 17 00:33:35.032200 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:33:35.032210 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:33:35.032220 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:33:35.039841 systemd[1]: Reached target sockets.target.
May 17 00:33:35.039855 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:33:35.039867 systemd[1]: Finished network-cleanup.service.
May 17 00:33:35.039878 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:33:35.039889 systemd[1]: Starting systemd-journald.service...
May 17 00:33:35.039901 systemd[1]: Starting systemd-modules-load.service...
May 17 00:33:35.039915 systemd[1]: Starting systemd-resolved.service...
May 17 00:33:35.039928 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:33:35.039947 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:33:35.039961 kernel: audit: type=1130 audit(1747442015.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.039977 systemd-journald[183]: Journal started
May 17 00:33:35.040038 systemd-journald[183]: Runtime Journal (/run/log/journal/45804de6892343f988936a78316e8951) is 8.0M, max 159.0M, 151.0M free.
May 17 00:33:35.040082 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:33:35.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.025297 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:33:35.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.066250 kernel: audit: type=1130 audit(1747442015.055:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.066272 systemd[1]: Started systemd-journald.service.
May 17 00:33:35.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.086816 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:33:35.128600 kernel: audit: type=1130 audit(1747442015.086:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.128636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:33:35.128653 kernel: audit: type=1130 audit(1747442015.099:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.089200 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:33:35.148969 kernel: Bridge firewalling registered
May 17 00:33:35.148997 kernel: audit: type=1130 audit(1747442015.107:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.089208 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:33:35.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.089254 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:33:35.188103 kernel: audit: type=1130 audit(1747442015.155:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.091927 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:33:35.099433 systemd[1]: Started systemd-resolved.service.
May 17 00:33:35.107520 systemd[1]: Reached target nss-lookup.target.
May 17 00:33:35.124292 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:33:35.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.207875 dracut-cmdline[200]: dracut-dracut-053
May 17 00:33:35.207875 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:33:35.223892 kernel: audit: type=1130 audit(1747442015.184:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.143710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:33:35.152684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:33:35.152830 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:33:35.183071 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:33:35.186198 systemd[1]: Starting dracut-cmdline.service...
May 17 00:33:35.238545 kernel: SCSI subsystem initialized
May 17 00:33:35.262845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:33:35.262901 kernel: device-mapper: uevent: version 1.0.3
May 17 00:33:35.263862 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:33:35.272309 systemd-modules-load[184]: Inserted module 'dm_multipath'
May 17 00:33:35.275203 systemd[1]: Finished systemd-modules-load.service.
May 17 00:33:35.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.280799 systemd[1]: Starting systemd-sysctl.service...
May 17 00:33:35.297469 kernel: audit: type=1130 audit(1747442015.279:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.301767 systemd[1]: Finished systemd-sysctl.service.
May 17 00:33:35.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.318247 kernel: audit: type=1130 audit(1747442015.305:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.322243 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:33:35.342251 kernel: iscsi: registered transport (tcp)
May 17 00:33:35.367604 kernel: iscsi: registered transport (qla4xxx)
May 17 00:33:35.367661 kernel: QLogic iSCSI HBA Driver
May 17 00:33:35.397320 systemd[1]: Finished dracut-cmdline.service.
May 17 00:33:35.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.400404 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:33:35.451251 kernel: raid6: avx512x4 gen() 18594 MB/s
May 17 00:33:35.471245 kernel: raid6: avx512x4 xor() 8573 MB/s
May 17 00:33:35.491240 kernel: raid6: avx512x2 gen() 18559 MB/s
May 17 00:33:35.511245 kernel: raid6: avx512x2 xor() 29832 MB/s
May 17 00:33:35.531238 kernel: raid6: avx512x1 gen() 18517 MB/s
May 17 00:33:35.550243 kernel: raid6: avx512x1 xor() 26973 MB/s
May 17 00:33:35.570239 kernel: raid6: avx2x4 gen() 18486 MB/s
May 17 00:33:35.590237 kernel: raid6: avx2x4 xor() 7840 MB/s
May 17 00:33:35.610236 kernel: raid6: avx2x2 gen() 18457 MB/s
May 17 00:33:35.630239 kernel: raid6: avx2x2 xor() 22336 MB/s
May 17 00:33:35.650236 kernel: raid6: avx2x1 gen() 14179 MB/s
May 17 00:33:35.670236 kernel: raid6: avx2x1 xor() 19453 MB/s
May 17 00:33:35.691238 kernel: raid6: sse2x4 gen() 11739 MB/s
May 17 00:33:35.711236 kernel: raid6: sse2x4 xor() 7313 MB/s
May 17 00:33:35.730236 kernel: raid6: sse2x2 gen() 12982 MB/s
May 17 00:33:35.750237 kernel: raid6: sse2x2 xor() 7732 MB/s
May 17 00:33:35.770236 kernel: raid6: sse2x1 gen() 11671 MB/s
May 17 00:33:35.792529 kernel: raid6: sse2x1 xor() 5933 MB/s
May 17 00:33:35.792547 kernel: raid6: using algorithm avx512x4 gen() 18594 MB/s
May 17 00:33:35.792561 kernel: raid6: .... xor() 8573 MB/s, rmw enabled
May 17 00:33:35.796500 kernel: raid6: using avx512x2 recovery algorithm
May 17 00:33:35.815249 kernel: xor: automatically using best checksumming function avx
May 17 00:33:35.910258 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:33:35.918498 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:33:35.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.922000 audit: BPF prog-id=7 op=LOAD
May 17 00:33:35.922000 audit: BPF prog-id=8 op=LOAD
May 17 00:33:35.922970 systemd[1]: Starting systemd-udevd.service...
May 17 00:33:35.937858 systemd-udevd[382]: Using default interface naming scheme 'v252'.
May 17 00:33:35.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.942553 systemd[1]: Started systemd-udevd.service.
May 17 00:33:35.945602 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:33:35.965673 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation
May 17 00:33:35.994921 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:33:35.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:35.999960 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:33:36.032959 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:33:36.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:36.088250 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:33:36.115829 kernel: hv_vmbus: Vmbus version:5.2
May 17 00:33:36.115891 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:33:36.120248 kernel: AES CTR mode by8 optimization enabled
May 17 00:33:36.138298 kernel: hv_vmbus: registering driver hyperv_keyboard
May 17 00:33:36.145247 kernel: hv_vmbus: registering driver hv_netvsc
May 17 00:33:36.163249 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 17 00:33:36.167244 kernel: hv_vmbus: registering driver hv_storvsc
May 17 00:33:36.175260 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:33:36.175302 kernel: scsi host1: storvsc_host_t
May 17 00:33:36.175468 kernel: scsi host0: storvsc_host_t
May 17 00:33:36.181766 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 17 00:33:36.188254 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 17 00:33:36.218266 kernel: hv_vmbus: registering driver hid_hyperv
May 17 00:33:36.226671 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 17 00:33:36.255408 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 17 00:33:36.255598 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 17 00:33:36.255616 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:33:36.255793 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 17 00:33:36.255956 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 17 00:33:36.256115 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 17 00:33:36.256295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:33:36.256312 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:33:36.265344 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 17 00:33:36.266476 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:33:36.266499 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 17 00:33:36.367941 kernel: hv_netvsc 7c1e522c-fec4-7c1e-522c-fec47c1e522c eth0: VF slot 1 added
May 17 00:33:36.387882 kernel: hv_vmbus: registering driver hv_pci
May 17 00:33:36.387939 kernel: hv_pci f658ad2e-d598-43d7-be87-151bbed7861b: PCI VMBus probing: Using version 0x10004
May 17 00:33:36.460812 kernel: hv_pci f658ad2e-d598-43d7-be87-151bbed7861b: PCI host bridge to bus d598:00
May 17 00:33:36.460943 kernel: pci_bus d598:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
May 17 00:33:36.461055 kernel: pci_bus d598:00: No busn resource found for root bus, will use [bus 00-ff]
May 17 00:33:36.461147 kernel: pci d598:00:02.0: [15b3:1016] type 00 class 0x020000
May 17 00:33:36.461292 kernel: pci d598:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 17 00:33:36.461448 kernel: pci d598:00:02.0: enabling Extended Tags
May 17 00:33:36.461602 kernel: pci d598:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d598:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:33:36.461761 kernel: pci_bus d598:00: busn_res: [bus 00-ff] end is updated to 00
May 17 00:33:36.461864 kernel: pci d598:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 17 00:33:36.552254 kernel: mlx5_core d598:00:02.0: firmware version: 14.30.5000
May 17 00:33:36.832497 kernel: mlx5_core d598:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
May 17 00:33:36.832682 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (446)
May 17 00:33:36.832702 kernel: mlx5_core d598:00:02.0: Supported tc offload range - chains: 1, prios: 1
May 17 00:33:36.832840 kernel: mlx5_core d598:00:02.0: mlx5e_tc_post_act_init:40:(pid 474): firmware level support is missing
May 17 00:33:36.832978 kernel: hv_netvsc 7c1e522c-fec4-7c1e-522c-fec47c1e522c eth0: VF registering: eth1
May 17 00:33:36.833075 kernel: mlx5_core d598:00:02.0 eth1: joined to eth0
May 17 00:33:36.752778 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:33:36.760851 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:33:36.845244 kernel: mlx5_core d598:00:02.0 enP54680s1: renamed from eth1
May 17 00:33:36.932905 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:33:36.946609 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:33:36.952327 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:33:36.963073 systemd[1]: Starting disk-uuid.service...
May 17 00:33:36.979248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:33:36.988243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:33:37.995844 disk-uuid[564]: The operation has completed successfully.
May 17 00:33:37.998568 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:33:38.060019 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:33:38.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.060121 systemd[1]: Finished disk-uuid.service.
May 17 00:33:38.075989 systemd[1]: Starting verity-setup.service...
May 17 00:33:38.112250 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:33:38.431444 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:33:38.437338 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:33:38.441284 systemd[1]: Finished verity-setup.service.
May 17 00:33:38.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.518772 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:33:38.517861 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:33:38.519714 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:33:38.520493 systemd[1]: Starting ignition-setup.service...
May 17 00:33:38.531438 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:33:38.552490 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:38.552539 kernel: BTRFS info (device sda6): using free space tree
May 17 00:33:38.552557 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:33:38.599052 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:33:38.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.603000 audit: BPF prog-id=9 op=LOAD
May 17 00:33:38.604743 systemd[1]: Starting systemd-networkd.service...
May 17 00:33:38.626666 systemd-networkd[802]: lo: Link UP
May 17 00:33:38.626675 systemd-networkd[802]: lo: Gained carrier
May 17 00:33:38.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.627608 systemd-networkd[802]: Enumeration completed
May 17 00:33:38.627689 systemd[1]: Started systemd-networkd.service.
May 17 00:33:38.630674 systemd[1]: Reached target network.target.
May 17 00:33:38.634060 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:33:38.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.635297 systemd[1]: Starting iscsiuio.service...
May 17 00:33:38.643643 systemd[1]: Started iscsiuio.service.
May 17 00:33:38.648917 systemd[1]: Starting iscsid.service...
May 17 00:33:38.653470 iscsid[809]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:33:38.653470 iscsid[809]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 17 00:33:38.653470 iscsid[809]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 17 00:33:38.653470 iscsid[809]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:33:38.653470 iscsid[809]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:33:38.653470 iscsid[809]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:33:38.653470 iscsid[809]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:33:38.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.658399 systemd[1]: Started iscsid.service.
May 17 00:33:38.687959 systemd[1]: Starting dracut-initqueue.service...
May 17 00:33:38.704485 systemd[1]: Finished dracut-initqueue.service.
May 17 00:33:38.709201 kernel: mlx5_core d598:00:02.0 enP54680s1: Link up
May 17 00:33:38.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.709267 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:33:38.713109 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:33:38.717906 systemd[1]: Reached target remote-fs.target.
May 17 00:33:38.722587 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:33:38.727044 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:33:38.731073 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:33:38.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.740247 kernel: hv_netvsc 7c1e522c-fec4-7c1e-522c-fec47c1e522c eth0: Data path switched to VF: enP54680s1
May 17 00:33:38.744502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:33:38.744701 systemd-networkd[802]: enP54680s1: Link UP
May 17 00:33:38.744986 systemd-networkd[802]: eth0: Link UP
May 17 00:33:38.745475 systemd-networkd[802]: eth0: Gained carrier
May 17 00:33:38.750708 systemd-networkd[802]: enP54680s1: Gained carrier
May 17 00:33:38.784293 systemd-networkd[802]: eth0: DHCPv4 address 10.200.4.42/24, gateway 10.200.4.1 acquired from 168.63.129.16
May 17 00:33:38.818173 systemd[1]: Finished ignition-setup.service.
May 17 00:33:38.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:38.820163 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:33:40.765493 systemd-networkd[802]: eth0: Gained IPv6LL
May 17 00:33:42.518470 ignition[829]: Ignition 2.14.0
May 17 00:33:42.518487 ignition[829]: Stage: fetch-offline
May 17 00:33:42.518577 ignition[829]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:42.518630 ignition[829]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:42.626075 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:42.629234 ignition[829]: parsed url from cmdline: ""
May 17 00:33:42.629248 ignition[829]: no config URL provided
May 17 00:33:42.629256 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:33:42.629275 ignition[829]: no config at "/usr/lib/ignition/user.ign"
May 17 00:33:42.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.637320 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:33:42.660650 kernel: kauditd_printk_skb: 18 callbacks suppressed
May 17 00:33:42.660684 kernel: audit: type=1130 audit(1747442022.641:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.629281 ignition[829]: failed to fetch config: resource requires networking
May 17 00:33:42.643005 systemd[1]: Starting ignition-fetch.service...
May 17 00:33:42.630595 ignition[829]: Ignition finished successfully
May 17 00:33:42.651762 ignition[835]: Ignition 2.14.0
May 17 00:33:42.651769 ignition[835]: Stage: fetch
May 17 00:33:42.651871 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:42.651897 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:42.655196 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:42.657464 ignition[835]: parsed url from cmdline: ""
May 17 00:33:42.657542 ignition[835]: no config URL provided
May 17 00:33:42.657559 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:33:42.658263 ignition[835]: no config at "/usr/lib/ignition/user.ign"
May 17 00:33:42.658316 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 17 00:33:42.743895 ignition[835]: GET result: OK
May 17 00:33:42.744052 ignition[835]: config has been read from IMDS userdata
May 17 00:33:42.745396 ignition[835]: parsing config with SHA512: 90f4c3efd99b9460ae841856862bfe96a7e9fc7abc7d6d315f91be1602f32680a2c462f3cc302d7708cd0b9573a452d7dba0aadfef638627a8acebcae454133e
May 17 00:33:42.751633 unknown[835]: fetched base config from "system"
May 17 00:33:42.753996 unknown[835]: fetched base config from "system"
May 17 00:33:42.754011 unknown[835]: fetched user config from "azure"
May 17 00:33:42.758248 ignition[835]: fetch: fetch complete
May 17 00:33:42.758257 ignition[835]: fetch: fetch passed
May 17 00:33:42.758317 ignition[835]: Ignition finished successfully
May 17 00:33:42.764306 systemd[1]: Finished ignition-fetch.service.
May 17 00:33:42.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.767161 systemd[1]: Starting ignition-kargs.service...
May 17 00:33:42.783370 kernel: audit: type=1130 audit(1747442022.766:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.791938 ignition[841]: Ignition 2.14.0
May 17 00:33:42.791949 ignition[841]: Stage: kargs
May 17 00:33:42.792086 ignition[841]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:42.792119 ignition[841]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:42.796857 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:42.797996 ignition[841]: kargs: kargs passed
May 17 00:33:42.798049 ignition[841]: Ignition finished successfully
May 17 00:33:42.807123 systemd[1]: Finished ignition-kargs.service.
May 17 00:33:42.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.810037 systemd[1]: Starting ignition-disks.service...
May 17 00:33:42.827725 kernel: audit: type=1130 audit(1747442022.808:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.835746 ignition[847]: Ignition 2.14.0
May 17 00:33:42.835756 ignition[847]: Stage: disks
May 17 00:33:42.835890 ignition[847]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:42.835922 ignition[847]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:42.844475 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:42.850119 ignition[847]: disks: disks passed
May 17 00:33:42.850179 ignition[847]: Ignition finished successfully
May 17 00:33:42.853943 systemd[1]: Finished ignition-disks.service.
May 17 00:33:42.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.856161 systemd[1]: Reached target initrd-root-device.target.
May 17 00:33:42.875438 kernel: audit: type=1130 audit(1747442022.855:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.871051 systemd[1]: Reached target local-fs-pre.target.
May 17 00:33:42.875427 systemd[1]: Reached target local-fs.target.
May 17 00:33:42.877422 systemd[1]: Reached target sysinit.target.
May 17 00:33:42.878359 systemd[1]: Reached target basic.target.
May 17 00:33:42.879641 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:33:42.940310 systemd-fsck[855]: ROOT: clean, 619/7326000 files, 481079/7359488 blocks
May 17 00:33:42.945580 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:33:42.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.950519 systemd[1]: Mounting sysroot.mount...
May 17 00:33:42.966909 kernel: audit: type=1130 audit(1747442022.949:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:42.978323 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:33:42.978462 systemd[1]: Mounted sysroot.mount.
May 17 00:33:42.980286 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:33:43.016551 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:33:43.021748 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 00:33:43.026580 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:33:43.026620 systemd[1]: Reached target ignition-diskful.target.
May 17 00:33:43.034114 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:33:43.090683 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:33:43.096730 systemd[1]: Starting initrd-setup-root.service...
May 17 00:33:43.114321 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (866)
May 17 00:33:43.114360 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:43.122384 kernel: BTRFS info (device sda6): using free space tree
May 17 00:33:43.122415 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:33:43.126737 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:33:43.133458 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:33:43.150782 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory
May 17 00:33:43.171422 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:33:43.176269 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:33:43.731395 systemd[1]: Finished initrd-setup-root.service.
May 17 00:33:43.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:43.746191 systemd[1]: Starting ignition-mount.service...
May 17 00:33:43.756056 kernel: audit: type=1130 audit(1747442023.733:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:43.749628 systemd[1]: Starting sysroot-boot.service...
May 17 00:33:43.759694 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:33:43.759825 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:33:43.779146 systemd[1]: Finished sysroot-boot.service.
May 17 00:33:43.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:43.792886 ignition[935]: INFO : Ignition 2.14.0
May 17 00:33:43.797060 kernel: audit: type=1130 audit(1747442023.782:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:43.797091 ignition[935]: INFO : Stage: mount
May 17 00:33:43.797091 ignition[935]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:43.797091 ignition[935]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:43.807492 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:43.807492 ignition[935]: INFO : mount: mount passed
May 17 00:33:43.807492 ignition[935]: INFO : Ignition finished successfully
May 17 00:33:43.814460 systemd[1]: Finished ignition-mount.service.
May 17 00:33:43.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:43.830246 kernel: audit: type=1130 audit(1747442023.818:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:44.618129 coreos-metadata[865]: May 17 00:33:44.618 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 17 00:33:44.637419 coreos-metadata[865]: May 17 00:33:44.637 INFO Fetch successful
May 17 00:33:44.673140 coreos-metadata[865]: May 17 00:33:44.673 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 17 00:33:44.688405 coreos-metadata[865]: May 17 00:33:44.688 INFO Fetch successful
May 17 00:33:44.705783 coreos-metadata[865]: May 17 00:33:44.705 INFO wrote hostname ci-3510.3.7-n-b02eecf252 to /sysroot/etc/hostname
May 17 00:33:44.711555 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 00:33:44.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:44.716956 systemd[1]: Starting ignition-files.service...
May 17 00:33:44.728949 kernel: audit: type=1130 audit(1747442024.715:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:44.734473 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:33:44.747253 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (945)
May 17 00:33:44.755420 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:44.755455 kernel: BTRFS info (device sda6): using free space tree
May 17 00:33:44.755468 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:33:44.765969 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:33:44.779653 ignition[964]: INFO : Ignition 2.14.0
May 17 00:33:44.779653 ignition[964]: INFO : Stage: files
May 17 00:33:44.783479 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:44.783479 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:44.796421 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:44.812316 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:33:44.815397 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:33:44.815397 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:33:44.854551 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:33:44.858026 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:33:44.942509 unknown[964]: wrote ssh authorized keys file for user: core
May 17 00:33:44.945434 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:33:44.945434 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:33:44.945434 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:33:44.989447 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:33:45.051493 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:33:45.056388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:33:45.056388 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:33:45.538205 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:33:45.580984 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
May 17 00:33:45.586714 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948254936"
May 17 00:33:45.651949 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948254936": device or resource busy
May 17 00:33:45.651949 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3948254936", trying btrfs: device or resource busy
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948254936"
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948254936"
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3948254936"
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3948254936"
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4240836609"
May 17 00:33:45.651949 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4240836609": device or resource busy
May 17 00:33:45.651949 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4240836609", trying btrfs: device or resource busy
May 17 00:33:45.651949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4240836609"
May 17 00:33:45.596462 systemd[1]: mnt-oem3948254936.mount: Deactivated successfully.
May 17 00:33:45.722643 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4240836609"
May 17 00:33:45.722643 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem4240836609"
May 17 00:33:45.722643 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem4240836609"
May 17 00:33:45.722643 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:33:45.722643 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:45.722643 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:33:45.615222 systemd[1]: mnt-oem4240836609.mount: Deactivated successfully.
May 17 00:33:46.395312 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
May 17 00:33:46.583882 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:46.583882 ignition[964]: INFO : files: op(14): [started] processing unit "waagent.service"
May 17 00:33:46.583882 ignition[964]: INFO : files: op(14): [finished] processing unit "waagent.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(15): [started] processing unit "nvidia.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(15): [finished] processing unit "nvidia.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:33:46.596023 ignition[964]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:33:46.596023 ignition[964]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:33:46.596023 ignition[964]: INFO : files: files passed
May 17 00:33:46.596023 ignition[964]: INFO : Ignition finished successfully
May 17 00:33:46.664934 kernel: audit: type=1130 audit(1747442026.600:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.596476 systemd[1]: Finished ignition-files.service.
May 17 00:33:46.614591 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:33:46.624357 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:33:46.666762 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:33:46.626792 systemd[1]: Starting ignition-quench.service...
May 17 00:33:46.629587 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:33:46.629685 systemd[1]: Finished ignition-quench.service.
May 17 00:33:46.644929 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:33:46.662161 systemd[1]: Reached target ignition-complete.target.
May 17 00:33:46.701024 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:33:46.718239 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:33:46.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.718357 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:33:46.722860 systemd[1]: Reached target initrd-fs.target.
May 17 00:33:46.726704 systemd[1]: Reached target initrd.target.
May 17 00:33:46.728673 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:33:46.729489 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:33:46.745625 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:33:46.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.750442 systemd[1]: Starting initrd-cleanup.service...
May 17 00:33:46.760199 systemd[1]: Stopped target nss-lookup.target.
May 17 00:33:46.764318 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:33:46.768882 systemd[1]: Stopped target timers.target.
May 17 00:33:46.770907 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:33:46.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.771041 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:33:46.774704 systemd[1]: Stopped target initrd.target.
May 17 00:33:46.778721 systemd[1]: Stopped target basic.target.
May 17 00:33:46.783545 systemd[1]: Stopped target ignition-complete.target.
May 17 00:33:46.790273 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:33:46.794717 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:33:46.799289 systemd[1]: Stopped target remote-fs.target.
May 17 00:33:46.803076 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:33:46.807412 systemd[1]: Stopped target sysinit.target.
May 17 00:33:46.811301 systemd[1]: Stopped target local-fs.target.
May 17 00:33:46.811788 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:33:46.820627 systemd[1]: Stopped target swap.target.
May 17 00:33:46.824317 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:33:46.826565 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:33:46.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.827616 systemd[1]: Stopped target cryptsetup.target.
May 17 00:33:46.835306 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:33:46.837647 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:33:46.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.841710 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:33:46.844474 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:33:46.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.849239 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:33:46.851426 systemd[1]: Stopped ignition-files.service.
May 17 00:33:46.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.855355 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:33:46.857951 systemd[1]: Stopped flatcar-metadata-hostname.service.
May 17 00:33:46.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.863260 systemd[1]: Stopping ignition-mount.service...
May 17 00:33:46.865165 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:33:46.865325 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:33:46.868897 systemd[1]: Stopping sysroot-boot.service...
May 17 00:33:46.887179 ignition[1002]: INFO : Ignition 2.14.0
May 17 00:33:46.887179 ignition[1002]: INFO : Stage: umount
May 17 00:33:46.887179 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:33:46.887179 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:33:46.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.870843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:33:46.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.910935 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:33:46.910935 ignition[1002]: INFO : umount: umount passed
May 17 00:33:46.910935 ignition[1002]: INFO : Ignition finished successfully
May 17 00:33:46.871017 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:33:46.873460 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:33:46.873610 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:33:46.890463 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:33:46.890550 systemd[1]: Finished initrd-cleanup.service.
May 17 00:33:46.898899 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:33:46.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.898979 systemd[1]: Stopped ignition-mount.service.
May 17 00:33:46.904501 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:33:46.904554 systemd[1]: Stopped ignition-disks.service.
May 17 00:33:46.906600 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:33:46.906639 systemd[1]: Stopped ignition-kargs.service.
May 17 00:33:46.910913 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:33:46.910959 systemd[1]: Stopped ignition-fetch.service.
May 17 00:33:46.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.944981 systemd[1]: Stopped target network.target.
May 17 00:33:46.948527 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:33:46.951192 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:33:46.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.955359 systemd[1]: Stopped target paths.target.
May 17 00:33:46.957756 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:33:46.963285 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:33:46.966326 systemd[1]: Stopped target slices.target.
May 17 00:33:46.970238 systemd[1]: Stopped target sockets.target.
May 17 00:33:46.972123 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:33:46.972157 systemd[1]: Closed iscsid.socket.
May 17 00:33:46.980408 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:33:46.980445 systemd[1]: Closed iscsiuio.socket.
May 17 00:33:46.985434 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:33:46.985498 systemd[1]: Stopped ignition-setup.service.
May 17 00:33:46.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:46.991604 systemd[1]: Stopping systemd-networkd.service...
May 17 00:33:46.995348 systemd[1]: Stopping systemd-resolved.service...
May 17 00:33:46.999277 systemd-networkd[802]: eth0: DHCPv6 lease lost
May 17 00:33:47.000509 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:33:47.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.002435 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:33:47.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.002525 systemd[1]: Stopped systemd-networkd.service.
May 17 00:33:47.005333 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:33:47.013000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:33:47.013000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:33:47.005412 systemd[1]: Stopped systemd-resolved.service.
May 17 00:33:47.015402 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:33:47.015440 systemd[1]: Closed systemd-networkd.socket.
May 17 00:33:47.018827 systemd[1]: Stopping network-cleanup.service...
May 17 00:33:47.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.025028 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:33:47.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.025091 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:33:47.031306 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:33:47.031360 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:33:47.035638 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:33:47.035680 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:33:47.037871 systemd[1]: Stopping systemd-udevd.service...
May 17 00:33:47.052482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:33:47.055733 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:33:47.057957 systemd[1]: Stopped systemd-udevd.service.
May 17 00:33:47.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.062888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:33:47.062960 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:33:47.067543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:33:47.067599 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:33:47.075427 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:33:47.075488 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:33:47.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.081466 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:33:47.081516 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:33:47.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.087458 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:33:47.087506 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:33:47.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.094387 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:33:47.096575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:33:47.111782 kernel: hv_netvsc 7c1e522c-fec4-7c1e-522c-fec47c1e522c eth0: Data path switched from VF: enP54680s1
May 17 00:33:47.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.096638 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:33:47.100933 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:33:47.101018 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:33:47.128821 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:33:47.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.128907 systemd[1]: Stopped network-cleanup.service.
May 17 00:33:47.664395 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:33:47.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.664539 systemd[1]: Stopped sysroot-boot.service.
May 17 00:33:47.690849 kernel: kauditd_printk_skb: 38 callbacks suppressed
May 17 00:33:47.690881 kernel: audit: type=1131 audit(1747442027.668:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.669177 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:33:47.687020 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:33:47.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.687090 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:33:47.715124 kernel: audit: type=1131 audit(1747442027.690:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:47.691675 systemd[1]: Starting initrd-switch-root.service...
May 17 00:33:47.713238 systemd[1]: Switching root.
May 17 00:33:47.748333 iscsid[809]: iscsid shutting down.
May 17 00:33:47.750048 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
May 17 00:33:47.750109 systemd-journald[183]: Journal stopped
May 17 00:34:05.846548 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:34:05.846576 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:34:05.846588 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:34:05.846599 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:34:05.846607 kernel: SELinux: policy capability open_perms=1
May 17 00:34:05.846618 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:34:05.846627 kernel: SELinux: policy capability always_check_network=0
May 17 00:34:05.846641 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:34:05.846649 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:34:05.846659 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:34:05.846667 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:34:05.846679 kernel: audit: type=1403 audit(1747442030.585:79): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:34:05.846690 systemd[1]: Successfully loaded SELinux policy in 319.114ms.
May 17 00:34:05.846702 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.613ms.
May 17 00:34:05.846718 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:34:05.846728 systemd[1]: Detected virtualization microsoft.
May 17 00:34:05.846739 systemd[1]: Detected architecture x86-64.
May 17 00:34:05.846748 systemd[1]: Detected first boot.
May 17 00:34:05.846762 systemd[1]: Hostname set to <ci-3510.3.7-n-b02eecf252>.
May 17 00:34:05.846774 systemd[1]: Initializing machine ID from random generator.
May 17 00:34:05.846787 kernel: audit: type=1400 audit(1747442031.515:80): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:34:05.846804 kernel: audit: type=1400 audit(1747442031.535:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:34:05.846825 kernel: audit: type=1400 audit(1747442031.535:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:34:05.846842 kernel: audit: type=1334 audit(1747442031.548:83): prog-id=10 op=LOAD
May 17 00:34:05.846861 kernel: audit: type=1334 audit(1747442031.548:84): prog-id=10 op=UNLOAD
May 17 00:34:05.846884 kernel: audit: type=1334 audit(1747442031.560:85): prog-id=11 op=LOAD
May 17 00:34:05.846899 kernel: audit: type=1334 audit(1747442031.560:86): prog-id=11 op=UNLOAD
May 17 00:34:05.846916 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:34:05.846934 kernel: audit: type=1400 audit(1747442033.395:87): avc: denied { associate } for pid=1036 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:34:05.846955 kernel: audit: type=1300 audit(1747442033.395:87): arch=c000003e syscall=188 success=yes exit=0 a0=c00018a792 a1=c00018ea20 a2=c00019cc40 a3=32 items=0 ppid=1019 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:05.846975 kernel: audit: type=1327 audit(1747442033.395:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:34:05.846991 kernel: audit: type=1400 audit(1747442033.402:88): avc: denied { associate } for pid=1036 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:34:05.847015 kernel: audit: type=1300 audit(1747442033.402:88): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018a869 a2=1ed a3=0 items=2 ppid=1019 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:05.847032 kernel: audit: type=1307 audit(1747442033.402:88): cwd="/"
May 17 00:34:05.847054 kernel: audit: type=1302 audit(1747442033.402:88): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:34:05.847071 kernel: audit: type=1302 audit(1747442033.402:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:34:05.847090 kernel: audit: type=1327 audit(1747442033.402:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:34:05.847108 systemd[1]: Populated /etc with preset unit settings.
May 17 00:34:05.847130 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:34:05.847148 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:34:05.847174 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:34:05.847191 kernel: audit: type=1334 audit(1747442045.287:89): prog-id=12 op=LOAD
May 17 00:34:05.847209 kernel: audit: type=1334 audit(1747442045.287:90): prog-id=3 op=UNLOAD
May 17 00:34:05.847236 kernel: audit: type=1334 audit(1747442045.292:91): prog-id=13 op=LOAD
May 17 00:34:05.847253 kernel: audit: type=1334 audit(1747442045.297:92): prog-id=14 op=LOAD
May 17 00:34:05.847271 kernel: audit: type=1334 audit(1747442045.297:93): prog-id=4 op=UNLOAD
May 17 00:34:05.847291 kernel: audit: type=1334 audit(1747442045.297:94): prog-id=5 op=UNLOAD
May 17 00:34:05.847307 kernel: audit: type=1334 audit(1747442045.302:95): prog-id=15 op=LOAD
May 17 00:34:05.847325 kernel: audit: type=1334 audit(1747442045.302:96): prog-id=12 op=UNLOAD
May 17 00:34:05.847341 kernel: audit: type=1334 audit(1747442045.321:97): prog-id=16 op=LOAD
May 17 00:34:05.847358 kernel: audit: type=1334 audit(1747442045.326:98): prog-id=17 op=LOAD
May 17 00:34:05.847374 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:34:05.847393 systemd[1]: Stopped iscsiuio.service.
May 17 00:34:05.847412 systemd[1]: iscsid.service: Deactivated successfully.
May 17 00:34:05.847434 systemd[1]: Stopped iscsid.service.
May 17 00:34:05.847453 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:34:05.847470 systemd[1]: Stopped initrd-switch-root.service.
May 17 00:34:05.847490 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:34:05.847509 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:34:05.847531 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:34:05.847551 systemd[1]: Created slice system-getty.slice.
May 17 00:34:05.847569 systemd[1]: Created slice system-modprobe.slice.
May 17 00:34:05.847592 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:34:05.847611 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:34:05.847630 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:34:05.847647 systemd[1]: Created slice user.slice.
May 17 00:34:05.847665 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:34:05.847683 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:34:05.847698 systemd[1]: Set up automount boot.automount.
May 17 00:34:05.847729 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:34:05.847746 systemd[1]: Stopped target initrd-switch-root.target.
May 17 00:34:05.847760 systemd[1]: Stopped target initrd-fs.target.
May 17 00:34:05.847773 systemd[1]: Stopped target initrd-root-fs.target.
May 17 00:34:05.847785 systemd[1]: Reached target integritysetup.target.
May 17 00:34:05.847795 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:34:05.847807 systemd[1]: Reached target remote-fs.target.
May 17 00:34:05.847816 systemd[1]: Reached target slices.target.
May 17 00:34:05.847828 systemd[1]: Reached target swap.target.
May 17 00:34:05.847838 systemd[1]: Reached target torcx.target.
May 17 00:34:05.847853 systemd[1]: Reached target veritysetup.target.
May 17 00:34:05.847863 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:34:05.847876 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:34:05.847885 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:34:05.847900 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:34:05.847910 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:34:05.847922 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:34:05.847931 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:34:05.847943 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:34:05.847953 systemd[1]: Mounting media.mount...
May 17 00:34:05.847966 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:34:05.847976 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:34:05.847988 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:34:05.848000 systemd[1]: Mounting tmp.mount...
May 17 00:34:05.848012 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:34:05.848022 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:34:05.848034 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:34:05.848047 systemd[1]: Starting modprobe@configfs.service...
May 17 00:34:05.848059 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:34:05.848069 systemd[1]: Starting modprobe@drm.service...
May 17 00:34:05.848079 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:34:05.848088 systemd[1]: Starting modprobe@fuse.service...
May 17 00:34:05.848100 systemd[1]: Starting modprobe@loop.service...
May 17 00:34:05.848112 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:34:05.848123 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:34:05.848132 systemd[1]: Stopped systemd-fsck-root.service.
May 17 00:34:05.848142 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:34:05.848151 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:34:05.848161 systemd[1]: Stopped systemd-journald.service.
May 17 00:34:05.848170 systemd[1]: Starting systemd-journald.service...
May 17 00:34:05.848180 systemd[1]: Starting systemd-modules-load.service...
May 17 00:34:05.848191 systemd[1]: Starting systemd-network-generator.service...
May 17 00:34:05.848201 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:34:05.848210 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:34:05.848220 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:34:05.848242 systemd[1]: Stopped verity-setup.service.
May 17 00:34:05.848256 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:34:05.848266 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:34:05.848279 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:34:05.848295 kernel: loop: module loaded
May 17 00:34:05.848313 systemd[1]: Mounted media.mount.
May 17 00:34:05.848325 kernel: fuse: init (API version 7.34)
May 17 00:34:05.848336 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:34:05.848348 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:34:05.848360 systemd[1]: Mounted tmp.mount.
May 17 00:34:05.848372 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:34:05.848384 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:34:05.848394 systemd[1]: Finished modprobe@configfs.service.
May 17 00:34:05.848408 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:34:05.848418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:34:05.848431 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:34:05.848445 systemd-journald[1118]: Journal started
May 17 00:34:05.848493 systemd-journald[1118]: Runtime Journal (/run/log/journal/172dfb5356604edbb76cdfa55de588b3) is 8.0M, max 159.0M, 151.0M free.
May 17 00:33:50.585000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:33:51.515000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:33:51.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:33:51.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:33:51.548000 audit: BPF prog-id=10 op=LOAD
May 17 00:33:51.548000 audit: BPF prog-id=10 op=UNLOAD
May 17 00:33:51.560000 audit: BPF prog-id=11 op=LOAD
May 17 00:33:51.560000 audit: BPF prog-id=11 op=UNLOAD
May 17 00:33:53.395000 audit[1036]: AVC avc: denied { associate } for pid=1036 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:33:53.395000 audit[1036]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018a792 a1=c00018ea20 a2=c00019cc40 a3=32 items=0 ppid=1019 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:33:53.395000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:33:53.402000 audit[1036]: AVC avc: denied { associate } for pid=1036 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:33:53.402000 audit[1036]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018a869 a2=1ed a3=0 items=2 ppid=1019 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:33:53.402000 audit: CWD cwd="/"
May 17 00:33:53.402000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:33:53.402000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:33:53.402000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:34:05.287000 audit: BPF prog-id=12 op=LOAD
May 17 00:34:05.287000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:34:05.292000 audit: BPF prog-id=13 op=LOAD
May 17 00:34:05.297000 audit: BPF prog-id=14 op=LOAD
May 17 00:34:05.297000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:34:05.297000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:34:05.302000 audit: BPF prog-id=15 op=LOAD
May 17 00:34:05.302000 audit: BPF prog-id=12 op=UNLOAD
May 17 00:34:05.321000 audit: BPF prog-id=16 op=LOAD
May 17 00:34:05.326000 audit: BPF prog-id=17 op=LOAD
May 17 00:34:05.326000 audit: BPF prog-id=13 op=UNLOAD
May 17 00:34:05.326000 audit: BPF prog-id=14 op=UNLOAD
May 17 00:34:05.330000 audit: BPF prog-id=18 op=LOAD
May 17 00:34:05.330000 audit: BPF prog-id=15 op=UNLOAD
May 17 00:34:05.335000 audit: BPF prog-id=19 op=LOAD
May 17 00:34:05.336000 audit: BPF prog-id=20 op=LOAD
May 17 00:34:05.336000 audit: BPF prog-id=16 op=UNLOAD
May 17 00:34:05.336000 audit: BPF prog-id=17 op=UNLOAD
May 17 00:34:05.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.345000 audit: BPF prog-id=18 op=UNLOAD
May 17 00:34:05.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.674000 audit: BPF prog-id=21 op=LOAD
May 17 00:34:05.674000 audit: BPF prog-id=22 op=LOAD
May 17 00:34:05.674000 audit: BPF prog-id=23 op=LOAD
May 17 00:34:05.674000 audit: BPF prog-id=19 op=UNLOAD
May 17 00:34:05.674000 audit: BPF prog-id=20 op=UNLOAD
May 17 00:34:05.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.843000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:34:05.843000 audit[1118]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff19e44eb0 a2=4000 a3=7fff19e44f4c items=0 ppid=1 pid=1118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:05.843000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:33:53.319603 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:34:05.286175 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:34:05.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:53.343197 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:34:05.286188 systemd[1]: Unnecessary job was removed for dev-sda6.device.
May 17 00:33:53.343252 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:34:05.336684 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:33:53.343301 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 17 00:33:53.343320 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 17 00:33:53.343370 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 17 00:33:53.343388 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 17 00:33:53.343638 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 17 00:33:53.343685 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:33:53.343700 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:33:53.377601 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 17 00:33:53.377665 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 17 00:33:53.377702 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 17 00:33:53.377734 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 17 00:33:53.377754 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 17 00:33:53.377767 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:33:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 17 00:34:03.983937 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:34:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:34:03.984183 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:34:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:34:03.984325 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:34:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:34:03.984498 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:34:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:34:03.984544 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:34:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 17 00:34:03.984626 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2025-05-17T00:34:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 17 00:34:05.855338 systemd[1]: Started systemd-journald.service.
May 17 00:34:05.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.856345 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:34:05.856488 systemd[1]: Finished modprobe@drm.service.
May 17 00:34:05.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:05.859063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:34:05.859204 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:34:05.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:34:05.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.861610 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:34:05.861759 systemd[1]: Finished modprobe@fuse.service. May 17 00:34:05.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.863952 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:05.864086 systemd[1]: Finished modprobe@loop.service. May 17 00:34:05.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.866316 systemd[1]: Finished systemd-modules-load.service. May 17 00:34:05.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.868780 systemd[1]: Finished systemd-network-generator.service. May 17 00:34:05.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.871576 systemd[1]: Finished systemd-remount-fs.service. May 17 00:34:05.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.874858 systemd[1]: Reached target network-pre.target. May 17 00:34:05.878655 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:34:05.882479 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:34:05.888005 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:34:05.937839 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:34:05.941125 systemd[1]: Starting systemd-journal-flush.service... May 17 00:34:05.943258 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:05.944418 systemd[1]: Starting systemd-random-seed.service... May 17 00:34:05.946385 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:05.947525 systemd[1]: Starting systemd-sysctl.service... May 17 00:34:05.951083 systemd[1]: Starting systemd-sysusers.service... May 17 00:34:05.955277 systemd[1]: Finished systemd-udev-trigger.service. 
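The audit records interleaved above are flat key=value lines with two quirks: msg='...' wraps several sub-fields in single quotes, and PROCTITLE encodes the full command line as hex with NUL-separated arguments (the kernel truncates long ones, which is why the torcx-generator PROCTITLE record earlier ends mid-path). A minimal Python sketch for pulling both apart; the function names are illustrative, not part of any audit tooling:

```python
import re

# Audit records are "key=value" pairs; msg='...' keeps its spaces.
FIELD_RE = re.compile(r"(\w+)=('[^']*'|\S+)")

def parse_audit_fields(record: str) -> dict[str, str]:
    """Split one audit record into a {key: value} dict."""
    return {k: v.strip("'") for k, v in FIELD_RE.findall(record)}

def decode_proctitle(hex_payload: str) -> list[str]:
    """Decode a PROCTITLE hex payload into its NUL-separated argv.
    A truncated record just yields a truncated final argument."""
    raw = bytes.fromhex(hex_payload)
    return [a.decode("utf-8", errors="replace") for a in raw.split(b"\x00")]

# decode_proctitle("2F7573722F6C6962...") on the record above begins
# with "/usr/lib/systemd/system-generators/torcx-generator".
```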
May 17 00:34:05.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.957840 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:34:05.960470 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:34:05.964140 systemd[1]: Starting systemd-udev-settle.service... May 17 00:34:05.976275 systemd[1]: Finished systemd-random-seed.service. May 17 00:34:05.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.978687 systemd[1]: Reached target first-boot-complete.target. May 17 00:34:05.982056 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:34:05.984540 systemd-journald[1118]: Time spent on flushing to /var/log/journal/172dfb5356604edbb76cdfa55de588b3 is 26.550ms for 1175 entries. May 17 00:34:05.984540 systemd-journald[1118]: System Journal (/var/log/journal/172dfb5356604edbb76cdfa55de588b3) is 8.0M, max 2.6G, 2.6G free. May 17 00:34:06.074889 systemd-journald[1118]: Received client request to flush runtime journal. May 17 00:34:06.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.001573 systemd[1]: Finished systemd-sysctl.service. May 17 00:34:06.076048 systemd[1]: Finished systemd-journal-flush.service. May 17 00:34:06.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.750118 systemd[1]: Finished systemd-sysusers.service. May 17 00:34:07.332323 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:34:07.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:07.335000 audit: BPF prog-id=24 op=LOAD May 17 00:34:07.335000 audit: BPF prog-id=25 op=LOAD May 17 00:34:07.335000 audit: BPF prog-id=7 op=UNLOAD May 17 00:34:07.335000 audit: BPF prog-id=8 op=UNLOAD May 17 00:34:07.336152 systemd[1]: Starting systemd-udevd.service... May 17 00:34:07.353494 systemd-udevd[1163]: Using default interface naming scheme 'v252'. May 17 00:34:07.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:07.997000 audit: BPF prog-id=26 op=LOAD May 17 00:34:07.993526 systemd[1]: Started systemd-udevd.service. May 17 00:34:08.000499 systemd[1]: Starting systemd-networkd.service... 
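The journal flush above moves 1175 entries in 26.550 ms, i.e. roughly 23 µs per entry; a one-line check:

```python
# Per-entry flush cost from the systemd-journald line above.
flush_ms, entries = 26.550, 1175
print(f"{flush_ms / entries * 1000:.1f} us/entry")  # -> 22.6 us/entry
```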
May 17 00:34:08.035724 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:34:08.098291 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:34:08.110120 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:34:08.110201 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:34:08.114794 kernel: Console: switching to colour dummy device 80x25 May 17 00:34:08.128971 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:34:08.129069 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:34:08.133872 kernel: hv_utils: Registering HyperV Utility Driver May 17 00:34:08.133934 kernel: hv_vmbus: registering driver hv_utils May 17 00:34:08.133954 kernel: hv_utils: Shutdown IC version 3.2 May 17 00:34:08.138660 kernel: hv_utils: Heartbeat IC version 3.0 May 17 00:34:08.142415 kernel: hv_utils: TimeSync IC version 4.0 May 17 00:34:09.070000 audit: BPF prog-id=27 op=LOAD May 17 00:34:08.134000 audit[1164]: AVC avc: denied { confidentiality } for pid=1164 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:34:09.074000 audit: BPF prog-id=28 op=LOAD May 17 00:34:09.074000 audit: BPF prog-id=29 op=LOAD May 17 00:34:09.076531 systemd[1]: Starting systemd-userdbd.service... May 17 00:34:09.093011 kernel: hv_vmbus: registering driver hv_balloon May 17 00:34:09.103026 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:34:08.134000 audit[1164]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f234cc28f0 a1=f884 a2=7f0d48515bc5 a3=5 items=12 ppid=1163 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:08.134000 audit: CWD cwd="/" May 17 00:34:08.134000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=1 name=(null) inode=14876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=2 name=(null) inode=14876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=3 name=(null) inode=14877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=4 name=(null) inode=14876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=5 name=(null) inode=14878 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=6 name=(null) inode=14876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=7 name=(null) inode=14879 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=8 name=(null) inode=14876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=9 name=(null) inode=14880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=10 name=(null) inode=14876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PATH item=11 name=(null) inode=14881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:08.134000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:34:09.136951 systemd[1]: Started systemd-userdbd.service. May 17 00:34:09.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:09.352302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:34:09.368013 kernel: KVM: vmx: using Hyper-V Enlightened VMCS May 17 00:34:09.456917 systemd-networkd[1174]: lo: Link UP May 17 00:34:09.456929 systemd-networkd[1174]: lo: Gained carrier May 17 00:34:09.457526 systemd-networkd[1174]: Enumeration completed May 17 00:34:09.457645 systemd[1]: Started systemd-networkd.service. May 17 00:34:09.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:09.461037 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:34:09.488048 systemd-networkd[1174]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:34:09.541031 kernel: mlx5_core d598:00:02.0 enP54680s1: Link up May 17 00:34:09.563381 kernel: hv_netvsc 7c1e522c-fec4-7c1e-522c-fec47c1e522c eth0: Data path switched to VF: enP54680s1 May 17 00:34:09.564617 systemd-networkd[1174]: enP54680s1: Link UP May 17 00:34:09.564959 systemd-networkd[1174]: eth0: Link UP May 17 00:34:09.565075 systemd-networkd[1174]: eth0: Gained carrier May 17 00:34:09.571687 systemd-networkd[1174]: enP54680s1: Gained carrier May 17 00:34:09.599113 systemd-networkd[1174]: eth0: DHCPv4 address 10.200.4.42/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:34:09.718416 systemd[1]: Finished systemd-udev-settle.service. May 17 00:34:09.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:09.722427 systemd[1]: Starting lvm2-activation-early.service... May 17 00:34:10.203336 lvm[1241]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:34:10.230167 systemd[1]: Finished lvm2-activation-early.service. 
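The DHCPv4 lease above comes from 168.63.129.16, Azure's wireserver address, and places the gateway inside the interface's /24. The standard library makes that easy to sanity-check; the addresses are copied from the log:

```python
import ipaddress

# Lease values from the systemd-networkd line above.
iface = ipaddress.ip_interface("10.200.4.42/24")
gateway = ipaddress.ip_address("10.200.4.1")

assert gateway in iface.network       # gateway is on-link
print(iface.network)                  # -> 10.200.4.0/24
```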
May 17 00:34:10.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.232801 systemd[1]: Reached target cryptsetup.target. May 17 00:34:10.236177 systemd[1]: Starting lvm2-activation.service... May 17 00:34:10.240770 lvm[1242]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:34:10.268124 systemd[1]: Finished lvm2-activation.service. May 17 00:34:10.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.270653 systemd[1]: Reached target local-fs-pre.target. May 17 00:34:10.272974 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:34:10.273019 systemd[1]: Reached target local-fs.target. May 17 00:34:10.275419 systemd[1]: Reached target machines.target. May 17 00:34:10.279065 systemd[1]: Starting ldconfig.service... May 17 00:34:10.297358 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:10.297462 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:10.298867 systemd[1]: Starting systemd-boot-update.service... May 17 00:34:10.301985 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:34:10.305620 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:34:10.309237 systemd[1]: Starting systemd-sysext.service... May 17 00:34:10.495210 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1244 (bootctl) May 17 00:34:10.497386 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:34:10.506665 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:34:10.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.564368 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:34:10.596982 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:34:10.597737 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:34:10.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.603188 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:34:10.603385 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:34:10.644017 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:34:10.693027 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:34:10.709019 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:34:10.739392 (sd-sysext)[1256]: Using extensions 'kubernetes'. May 17 00:34:10.740284 (sd-sysext)[1256]: Merged extensions into '/usr'. 
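Unit names such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service embed the device path /dev/disk/by-label/OEM using systemd's escaping rules: strip the leading '/', turn the remaining '/' separators into '-', and hex-escape bytes outside [A-Za-z0-9:_.]. A simplified reimplementation as a sketch (the real tool is systemd-escape; corner cases such as leading dots are ignored here):

```python
# Simplified version of `systemd-escape --path`:
# "/dev/disk/by-label/OEM" -> "dev-disk-by\x2dlabel-OEM"
def systemd_escape_path(path: str) -> str:
    escaped_parts = []
    for part in path.strip("/").split("/"):
        chars = [c if c.isalnum() or c in ":_." else f"\\x{ord(c):02x}"
                 for c in part]
        escaped_parts.append("".join(chars))
    return "-".join(escaped_parts)

print(systemd_escape_path("/dev/disk/by-label/OEM"))
# -> dev-disk-by\x2dlabel-OEM
```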
May 17 00:34:10.755977 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:10.757574 systemd[1]: Mounting usr-share-oem.mount... May 17 00:34:10.759877 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:10.763100 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:10.765111 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:10.767179 systemd[1]: Starting modprobe@loop.service... May 17 00:34:10.768176 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:10.768316 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:10.768448 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:10.771831 systemd[1]: Mounted usr-share-oem.mount. May 17 00:34:10.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.773108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:10.773244 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:10.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.776647 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:10.776813 systemd[1]: Finished modprobe@loop.service. May 17 00:34:10.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.778379 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:10.779493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:10.779646 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:34:10.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.781368 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:10.782777 systemd[1]: Finished systemd-sysext.service. May 17 00:34:10.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:10.784861 systemd[1]: Starting ensure-sysext.service... May 17 00:34:10.788473 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:34:10.795199 systemd[1]: Reloading. May 17 00:34:10.847576 /usr/lib/systemd/system-generators/torcx-generator[1283]: time="2025-05-17T00:34:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:34:10.848049 /usr/lib/systemd/system-generators/torcx-generator[1283]: time="2025-05-17T00:34:10Z" level=info msg="torcx already run" May 17 00:34:10.966768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:34:10.966790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:34:10.982836 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:34:11.002101 systemd-networkd[1174]: eth0: Gained IPv6LL May 17 00:34:11.038156 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:34:11.049000 audit: BPF prog-id=30 op=LOAD May 17 00:34:11.049000 audit: BPF prog-id=31 op=LOAD May 17 00:34:11.049000 audit: BPF prog-id=24 op=UNLOAD May 17 00:34:11.049000 audit: BPF prog-id=25 op=UNLOAD May 17 00:34:11.050000 audit: BPF prog-id=32 op=LOAD May 17 00:34:11.050000 audit: BPF prog-id=26 op=UNLOAD May 17 00:34:11.051000 audit: BPF prog-id=33 op=LOAD May 17 00:34:11.051000 audit: BPF prog-id=21 op=UNLOAD May 17 00:34:11.051000 audit: BPF prog-id=34 op=LOAD May 17 00:34:11.051000 audit: BPF prog-id=35 op=LOAD May 17 00:34:11.051000 audit: BPF prog-id=22 op=UNLOAD May 17 00:34:11.051000 audit: BPF prog-id=23 op=UNLOAD May 17 00:34:11.053000 audit: BPF prog-id=36 op=LOAD May 17 00:34:11.053000 audit: BPF prog-id=27 op=UNLOAD May 17 00:34:11.053000 audit: BPF prog-id=37 op=LOAD May 17 00:34:11.053000 audit: BPF prog-id=38 op=LOAD May 17 00:34:11.053000 audit: BPF prog-id=28 op=UNLOAD May 17 00:34:11.053000 audit: BPF prog-id=29 op=UNLOAD May 17 00:34:11.058709 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:34:11.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.071364 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:11.071670 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:11.073198 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:11.075805 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:11.078142 systemd[1]: Starting modprobe@loop.service... May 17 00:34:11.079157 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
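The torcx-generator run during this reload prints its store search order again; archives are looked up store by store under the name '<image>:<reference>.torcx.tgz', and stores that do not exist are skipped (the "store skipped" messages earlier). A sketch of that lookup, with the paths and the docker archive name taken from the log:

```python
from pathlib import Path

# Store search order from the torcx-generator line above; missing
# stores are skipped, mirroring the "store skipped" log messages.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.7",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.7",
    "/var/lib/torcx/store",
]

def find_archive(name: str, reference: str) -> Path | None:
    for store in STORE_PATHS:
        candidate = Path(store) / f"{name}:{reference}.torcx.tgz"
        if candidate.exists():
            return candidate
    return None

# On this host, find_archive("docker", "com.coreos.cl") resolves to
# /usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz.
```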
May 17 00:34:11.079328 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:11.079491 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:11.080694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:11.080884 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:11.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.088321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:11.088472 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:34:11.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.089905 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:11.090031 systemd[1]: Finished modprobe@loop.service. May 17 00:34:11.091716 systemd[1]: Finished ensure-sysext.service. May 17 00:34:11.093113 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:11.093347 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:11.094458 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:11.096615 systemd[1]: Starting modprobe@drm.service... May 17 00:34:11.097806 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:11.097894 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:11.098015 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
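The locksmithd.service warnings during the reload (CPUShares= to CPUWeight=, MemoryLimit= to MemoryMax=) come from the legacy-to-unified cgroup translation noted later in this log. MemoryLimit= maps directly onto MemoryMax=; for CPU, the sketch below assumes a linear scaling chosen so the defaults coincide (1024 shares corresponds to weight 100), clamped to CPUWeight='s valid range:

```python
# Assumed CPUShares= (cgroup v1, 2..262144, default 1024) to
# CPUWeight= (cgroup v2, 1..10000, default 100) conversion:
# linear scaling around the defaults, clamped to the valid range.
def cpu_shares_to_weight(shares: int) -> int:
    return max(1, min(10000, round(shares * 100 / 1024)))

assert cpu_shares_to_weight(1024) == 100   # defaults line up
assert cpu_shares_to_weight(2048) == 200
```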
May 17 00:34:11.098107 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:11.100508 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:34:11.100662 systemd[1]: Finished modprobe@drm.service. May 17 00:34:11.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.102927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:11.103088 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:11.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.105398 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:11.209049 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:34:11.391430 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:34:11.609445 systemd-fsck[1251]: fsck.fat 4.2 (2021-01-31) May 17 00:34:11.609445 systemd-fsck[1251]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:34:11.612770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:34:11.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.619886 kernel: kauditd_printk_skb: 120 callbacks suppressed May 17 00:34:11.619939 kernel: audit: type=1130 audit(1747442051.614:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.631690 systemd[1]: Mounting boot.mount... May 17 00:34:11.645057 systemd[1]: Mounted boot.mount. May 17 00:34:11.660245 systemd[1]: Finished systemd-boot-update.service. May 17 00:34:11.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:11.674015 kernel: audit: type=1130 audit(1747442051.662:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.062400 systemd[1]: Finished systemd-tmpfiles-setup.service. 
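The fsck.fat summary above puts the EFI system partition at 120726 of 258078 clusters in use, i.e. about 47% full:

```python
# Cluster usage from the fsck.fat summary above.
used, total = 120726, 258078
print(f"{used / total:.1%}")  # -> 46.8%
```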
May 17 00:34:12.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.066873 systemd[1]: Starting audit-rules.service... May 17 00:34:12.078698 kernel: audit: type=1130 audit(1747442052.064:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.079889 systemd[1]: Starting clean-ca-certificates.service... May 17 00:34:12.083547 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:34:12.086000 audit: BPF prog-id=39 op=LOAD May 17 00:34:12.096050 kernel: audit: type=1334 audit(1747442052.086:205): prog-id=39 op=LOAD May 17 00:34:12.089908 systemd[1]: Starting systemd-resolved.service... May 17 00:34:12.102664 kernel: audit: type=1334 audit(1747442052.095:206): prog-id=40 op=LOAD May 17 00:34:12.095000 audit: BPF prog-id=40 op=LOAD May 17 00:34:12.101213 systemd[1]: Starting systemd-timesyncd.service... May 17 00:34:12.105311 systemd[1]: Starting systemd-update-utmp.service... May 17 00:34:12.117000 audit[1362]: SYSTEM_BOOT pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:34:12.120034 systemd[1]: Finished systemd-update-utmp.service. May 17 00:34:12.134221 kernel: audit: type=1127 audit(1747442052.117:207): pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:34:12.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.147014 kernel: audit: type=1130 audit(1747442052.134:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.178443 systemd[1]: Finished clean-ca-certificates.service. May 17 00:34:12.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.181051 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:34:12.195157 kernel: audit: type=1130 audit(1747442052.179:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.295417 systemd-resolved[1360]: Positive Trust Anchors: May 17 00:34:12.295434 systemd-resolved[1360]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:34:12.295472 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:34:12.298348 systemd[1]: Started systemd-timesyncd.service. May 17 00:34:12.300736 systemd[1]: Reached target time-set.target. May 17 00:34:12.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.315325 kernel: audit: type=1130 audit(1747442052.300:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.393663 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:34:12.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.410677 kernel: audit: type=1130 audit(1747442052.395:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.502281 systemd-resolved[1360]: Using system hostname 'ci-3510.3.7-n-b02eecf252'. May 17 00:34:12.504249 systemd[1]: Started systemd-resolved.service. May 17 00:34:12.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:12.506790 systemd[1]: Reached target network.target. May 17 00:34:12.508721 systemd[1]: Reached target network-online.target. May 17 00:34:12.511139 systemd[1]: Reached target nss-lookup.target. May 17 00:34:12.525000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:34:12.525000 audit[1377]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcda31b5f0 a2=420 a3=0 items=0 ppid=1356 pid=1377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.525000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:34:12.526574 augenrules[1377]: No rules May 17 00:34:12.527129 systemd[1]: Finished audit-rules.service. May 17 00:34:12.621534 systemd-timesyncd[1361]: Contacted time server 77.74.199.184:123 (0.flatcar.pool.ntp.org). May 17 00:34:12.621638 systemd-timesyncd[1361]: Initial clock synchronization to Sat 2025-05-17 00:34:12.623655 UTC. 
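The negative trust anchors listed by systemd-resolved above switch DNSSEC validation off for private-use and reverse-mapping zones; a name is covered when it equals an anchor or sits beneath one. A small suffix-match sketch (anchor set abbreviated from the log):

```python
# Negative trust anchor check: a name matches if it equals an anchor
# or is a subdomain of one. Set abbreviated from the resolved log.
ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "corp", "home", "internal", "intranet",
    "lan", "local", "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    labels = name.rstrip(".").lower().split(".")
    return any(".".join(labels[i:]) in ANCHORS
               for i in range(len(labels)))

assert under_negative_anchor("printer.lan")
assert under_negative_anchor("42.4.200.10.in-addr.arpa")
assert not under_negative_anchor("example.com")
```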
May 17 00:34:18.917074 ldconfig[1243]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:34:18.933724 systemd[1]: Finished ldconfig.service. May 17 00:34:18.937380 systemd[1]: Starting systemd-update-done.service... May 17 00:34:18.945627 systemd[1]: Finished systemd-update-done.service. May 17 00:34:18.948173 systemd[1]: Reached target sysinit.target. May 17 00:34:18.950533 systemd[1]: Started motdgen.path. May 17 00:34:18.952380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:34:18.955387 systemd[1]: Started logrotate.timer. May 17 00:34:18.957353 systemd[1]: Started mdadm.timer. May 17 00:34:18.959176 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:34:18.961447 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:34:18.961493 systemd[1]: Reached target paths.target. May 17 00:34:18.963589 systemd[1]: Reached target timers.target. May 17 00:34:18.966316 systemd[1]: Listening on dbus.socket. May 17 00:34:18.969304 systemd[1]: Starting docker.socket... May 17 00:34:19.000203 systemd[1]: Listening on sshd.socket. May 17 00:34:19.002304 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:19.002878 systemd[1]: Listening on docker.socket. May 17 00:34:19.004844 systemd[1]: Reached target sockets.target. May 17 00:34:19.006797 systemd[1]: Reached target basic.target. May 17 00:34:19.008736 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:34:19.008767 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:34:19.009824 systemd[1]: Starting containerd.service... May 17 00:34:19.012974 systemd[1]: Starting dbus.service... May 17 00:34:19.015732 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:34:19.019291 systemd[1]: Starting extend-filesystems.service... May 17 00:34:19.021325 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:34:19.022948 systemd[1]: Starting kubelet.service... May 17 00:34:19.028123 systemd[1]: Starting motdgen.service... May 17 00:34:19.031456 systemd[1]: Started nvidia.service. May 17 00:34:19.035590 systemd[1]: Starting prepare-helm.service... May 17 00:34:19.040205 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:34:19.043504 systemd[1]: Starting sshd-keygen.service... May 17 00:34:19.049352 systemd[1]: Starting systemd-logind.service... May 17 00:34:19.051453 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:19.051575 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:34:19.052159 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:34:19.054469 systemd[1]: Starting update-engine.service... May 17 00:34:19.058684 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
May 17 00:34:19.066681 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:34:19.067889 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:34:19.098409 jq[1404]: true May 17 00:34:19.099875 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:34:19.100962 jq[1387]: false May 17 00:34:19.100215 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:34:19.101381 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:34:19.101577 systemd[1]: Finished motdgen.service. May 17 00:34:19.127477 jq[1414]: true May 17 00:34:19.140640 extend-filesystems[1388]: Found loop1 May 17 00:34:19.146859 extend-filesystems[1388]: Found sda May 17 00:34:19.150510 extend-filesystems[1388]: Found sda1 May 17 00:34:19.152598 extend-filesystems[1388]: Found sda2 May 17 00:34:19.154653 extend-filesystems[1388]: Found sda3 May 17 00:34:19.157143 extend-filesystems[1388]: Found usr May 17 00:34:19.157143 extend-filesystems[1388]: Found sda4 May 17 00:34:19.157143 extend-filesystems[1388]: Found sda6 May 17 00:34:19.157143 extend-filesystems[1388]: Found sda7 May 17 00:34:19.157143 extend-filesystems[1388]: Found sda9 May 17 00:34:19.157143 extend-filesystems[1388]: Checking size of /dev/sda9 May 17 00:34:19.170451 env[1413]: time="2025-05-17T00:34:19.170342701Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:34:19.188618 tar[1408]: linux-amd64/helm May 17 00:34:19.190822 env[1413]: time="2025-05-17T00:34:19.190784847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:34:19.191093 env[1413]: time="2025-05-17T00:34:19.191068974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.194960545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.195002549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.195278675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.195301177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.195317279Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.195330280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:34:19.195686 env[1413]: time="2025-05-17T00:34:19.195417188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:34:19.197661 env[1413]: time="2025-05-17T00:34:19.196805820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:19.197661 env[1413]: time="2025-05-17T00:34:19.197031442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:19.197661 env[1413]: time="2025-05-17T00:34:19.197056144Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:34:19.198082 env[1413]: time="2025-05-17T00:34:19.198054339Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:34:19.198158 env[1413]: time="2025-05-17T00:34:19.198083842Z" level=info msg="metadata content store policy set" policy=shared May 17 00:34:19.225883 systemd-logind[1400]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:34:19.231212 systemd-logind[1400]: New seat seat0. May 17 00:34:19.242814 env[1413]: time="2025-05-17T00:34:19.242767196Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:34:19.242930 env[1413]: time="2025-05-17T00:34:19.242823802Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:34:19.242930 env[1413]: time="2025-05-17T00:34:19.242841603Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:34:19.242930 env[1413]: time="2025-05-17T00:34:19.242882007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:34:19.242930 env[1413]: time="2025-05-17T00:34:19.242908210Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:34:19.242930 env[1413]: time="2025-05-17T00:34:19.242925511Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:34:19.243124 env[1413]: time="2025-05-17T00:34:19.242942813Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:34:19.243124 env[1413]: time="2025-05-17T00:34:19.242962015Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:34:19.243124 env[1413]: time="2025-05-17T00:34:19.242980217Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:34:19.243124 env[1413]: time="2025-05-17T00:34:19.243015820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:34:19.243124 env[1413]: time="2025-05-17T00:34:19.243034622Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:34:19.243124 env[1413]: time="2025-05-17T00:34:19.243055824Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:34:19.243329 env[1413]: time="2025-05-17T00:34:19.243187036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 17 00:34:19.243329 env[1413]: time="2025-05-17T00:34:19.243280845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:34:19.244199 env[1413]: time="2025-05-17T00:34:19.244168330Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:34:19.244288 env[1413]: time="2025-05-17T00:34:19.244219535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:34:19.244504 env[1413]: time="2025-05-17T00:34:19.244242637Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:34:19.244504 env[1413]: time="2025-05-17T00:34:19.244435255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:34:19.244504 env[1413]: time="2025-05-17T00:34:19.244454657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:34:19.244652 env[1413]: time="2025-05-17T00:34:19.244555367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:34:19.244652 env[1413]: time="2025-05-17T00:34:19.244575769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:34:19.244652 env[1413]: time="2025-05-17T00:34:19.244592770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:34:19.244652 env[1413]: time="2025-05-17T00:34:19.244613772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.244631274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245466853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245492456Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245711077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245744480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245766382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245782183Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:34:19.245807 env[1413]: time="2025-05-17T00:34:19.245804786Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:34:19.246149 env[1413]: time="2025-05-17T00:34:19.245820987Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 17 00:34:19.246149 env[1413]: time="2025-05-17T00:34:19.245842389Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:34:19.246149 env[1413]: time="2025-05-17T00:34:19.245883893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:34:19.246338 env[1413]: time="2025-05-17T00:34:19.246162220Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:34:19.246338 env[1413]: time="2025-05-17T00:34:19.246237527Z" level=info msg="Connect containerd service" May 17 00:34:19.246338 env[1413]: time="2025-05-17T00:34:19.246283931Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.247005300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.251498228Z" level=info msg="Start subscribing containerd event" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.251555433Z" level=info msg="Start recovering state" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.251630340Z" level=info msg="Start event monitor" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.253178988Z" level=info msg="Start snapshots syncer" May 
17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.253196689Z" level=info msg="Start cni network conf syncer for default" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.253206890Z" level=info msg="Start streaming server" May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.253638131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.253699137Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:34:19.295371 env[1413]: time="2025-05-17T00:34:19.286397150Z" level=info msg="containerd successfully booted in 0.117141s" May 17 00:34:19.295712 extend-filesystems[1388]: Old size kept for /dev/sda9 May 17 00:34:19.295712 extend-filesystems[1388]: Found sr0 May 17 00:34:19.253896 systemd[1]: Started containerd.service. May 17 00:34:19.292328 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:34:19.292531 systemd[1]: Finished extend-filesystems.service. May 17 00:34:19.353466 bash[1439]: Updated "/home/core/.ssh/authorized_keys" May 17 00:34:19.350373 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:34:19.411087 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:34:19.435336 dbus-daemon[1386]: [system] SELinux support is enabled May 17 00:34:19.436140 systemd[1]: Started dbus.service. May 17 00:34:19.440911 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:34:19.440943 systemd[1]: Reached target system-config.target. May 17 00:34:19.443017 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:34:19.443039 systemd[1]: Reached target user-config.target. May 17 00:34:19.446422 systemd[1]: Started systemd-logind.service. May 17 00:34:19.449023 dbus-daemon[1386]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:34:19.903560 tar[1408]: linux-amd64/LICENSE May 17 00:34:19.903809 tar[1408]: linux-amd64/README.md May 17 00:34:19.910621 systemd[1]: Finished prepare-helm.service. May 17 00:34:20.088091 update_engine[1402]: I0517 00:34:20.086617 1402 main.cc:92] Flatcar Update Engine starting May 17 00:34:20.140275 systemd[1]: Started update-engine.service. May 17 00:34:20.142834 update_engine[1402]: I0517 00:34:20.141501 1402 update_check_scheduler.cc:74] Next update check in 5m19s May 17 00:34:20.145182 systemd[1]: Started locksmithd.service. May 17 00:34:20.732080 systemd[1]: Started kubelet.service. May 17 00:34:20.851329 sshd_keygen[1405]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:34:20.878693 systemd[1]: Finished sshd-keygen.service. May 17 00:34:20.883389 systemd[1]: Starting issuegen.service... May 17 00:34:20.887260 systemd[1]: Started waagent.service. May 17 00:34:20.892682 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:34:20.892860 systemd[1]: Finished issuegen.service. May 17 00:34:20.896732 systemd[1]: Starting systemd-user-sessions.service... May 17 00:34:20.926450 systemd[1]: Finished systemd-user-sessions.service. May 17 00:34:20.930790 systemd[1]: Started getty@tty1.service. May 17 00:34:20.934260 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:34:20.936627 systemd[1]: Reached target getty.target. May 17 00:34:20.938516 systemd[1]: Reached target multi-user.target. 
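The "failed to load cni during init" error in the containerd startup above is expected at this stage: the CRI plugin watches /etc/cni/net.d (per the config dump) and no network config exists there yet; a CNI plugin normally installs one after the node joins a cluster. As a minimal sketch of what eventually lands in that directory, here is a Python snippet that writes a basic bridge conflist; the network name, bridge name, and pod subnet are placeholders, and it assumes the reference CNI plugins are present under /opt/cni/bin (the NetworkPluginBinDir from the config dump).

    #!/usr/bin/env python3
    # Sketch: drop a minimal CNI network config into the directory the CRI
    # plugin watches (/etc/cni/net.d per the config dump above). The network
    # name and subnet are placeholders, not values from this log.
    import json, pathlib

    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-net",           # hypothetical network name
        "plugins": [
            {
                "type": "bridge",        # reference plugin from /opt/cni/bin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.244.0.0/24",  # placeholder pod subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            }
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))
    print(f"wrote {path}")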
May 17 00:34:20.942286 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:34:20.951043 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:34:20.951176 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:34:20.954287 systemd[1]: Startup finished in 1.039s (firmware) + 32.677s (loader) + 902ms (kernel) + 15.310s (initrd) + 29.999s (userspace) = 1min 19.928s. May 17 00:34:21.457937 kubelet[1492]: E0517 00:34:21.457887 1492 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:21.459615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:21.459784 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:21.460085 systemd[1]: kubelet.service: Consumed 1.092s CPU time. May 17 00:34:21.544110 login[1513]: pam_lastlog(login:session): file /var/log/lastlog is locked/write May 17 00:34:21.561160 login[1512]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:34:21.628016 systemd[1]: Created slice user-500.slice. May 17 00:34:21.629619 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:34:21.634051 systemd-logind[1400]: New session 1 of user core. May 17 00:34:21.640355 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:34:21.642189 systemd[1]: Starting user@500.service... May 17 00:34:21.645875 (systemd)[1516]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:21.864888 systemd[1516]: Queued start job for default target default.target. May 17 00:34:21.865506 systemd[1516]: Reached target paths.target. May 17 00:34:21.865535 systemd[1516]: Reached target sockets.target. May 17 00:34:21.865552 systemd[1516]: Reached target timers.target. May 17 00:34:21.865568 systemd[1516]: Reached target basic.target. May 17 00:34:21.865693 systemd[1]: Started user@500.service. May 17 00:34:21.866909 systemd[1]: Started session-1.scope. May 17 00:34:21.867461 systemd[1516]: Reached target default.target. May 17 00:34:21.867642 systemd[1516]: Startup finished in 215ms. May 17 00:34:21.875460 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:34:22.546239 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:34:22.551560 systemd[1]: Started session-2.scope. May 17 00:34:22.552057 systemd-logind[1400]: New session 2 of user core. 
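The kubelet failure above (and the identical failures that recur below) is the standard pre-bootstrap crash loop: /var/lib/kubelet/config.yaml is generated by kubeadm init or kubeadm join, so until the node is initialized the kubelet exits immediately. A small sketch reproducing the same check; the kubeadm explanation is the usual cause on a node like this, not something the log itself states.

    #!/usr/bin/env python3
    # Sketch: reproduce the kubelet's failing config lookup. The path comes
    # from the log; the kubeadm remedy is the conventional cause, inferred
    # rather than logged.
    import pathlib, sys

    cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        sys.exit(f"{cfg} missing: run 'kubeadm init' or 'kubeadm join' "
                 "to generate it before the kubelet can start")
    print(cfg.read_text())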
May 17 00:34:28.223672 waagent[1507]: 2025-05-17T00:34:28.223561Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:34:28.227799 waagent[1507]: 2025-05-17T00:34:28.227720Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:34:28.230285 waagent[1507]: 2025-05-17T00:34:28.230222Z INFO Daemon Daemon Python: 3.9.16 May 17 00:34:28.232981 waagent[1507]: 2025-05-17T00:34:28.232903Z INFO Daemon Daemon Run daemon May 17 00:34:28.235542 waagent[1507]: 2025-05-17T00:34:28.235480Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:34:28.248458 waagent[1507]: 2025-05-17T00:34:28.248334Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:34:28.256349 waagent[1507]: 2025-05-17T00:34:28.256243Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:34:28.261424 waagent[1507]: 2025-05-17T00:34:28.261362Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:34:28.264237 waagent[1507]: 2025-05-17T00:34:28.264176Z INFO Daemon Daemon Using waagent for provisioning May 17 00:34:28.267509 waagent[1507]: 2025-05-17T00:34:28.267449Z INFO Daemon Daemon Activate resource disk May 17 00:34:28.269966 waagent[1507]: 2025-05-17T00:34:28.269909Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:34:28.280172 waagent[1507]: 2025-05-17T00:34:28.280111Z INFO Daemon Daemon Found device: None May 17 00:34:28.282582 waagent[1507]: 2025-05-17T00:34:28.282522Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:34:28.286877 waagent[1507]: 2025-05-17T00:34:28.286822Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 17 00:34:28.293263 waagent[1507]: 2025-05-17T00:34:28.293203Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:34:28.296439 waagent[1507]: 2025-05-17T00:34:28.296381Z INFO Daemon Daemon Running default provisioning handler May 17 00:34:28.306868 waagent[1507]: 2025-05-17T00:34:28.306723Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:34:28.314529 waagent[1507]: 2025-05-17T00:34:28.314426Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:34:28.319170 waagent[1507]: 2025-05-17T00:34:28.319109Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:34:28.321614 waagent[1507]: 2025-05-17T00:34:28.321555Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:34:28.422329 waagent[1507]: 2025-05-17T00:34:28.418022Z INFO Daemon Daemon Successfully mounted dvd May 17 00:34:28.504339 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:34:28.525589 waagent[1507]: 2025-05-17T00:34:28.525464Z INFO Daemon Daemon Detect protocol endpoint May 17 00:34:28.529008 waagent[1507]: 2025-05-17T00:34:28.528920Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:34:28.532122 waagent[1507]: 2025-05-17T00:34:28.532063Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 17 00:34:28.535265 waagent[1507]: 2025-05-17T00:34:28.535205Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:34:28.538099 waagent[1507]: 2025-05-17T00:34:28.538042Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:34:28.540740 waagent[1507]: 2025-05-17T00:34:28.540682Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:34:28.646943 waagent[1507]: 2025-05-17T00:34:28.646854Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:34:28.654258 waagent[1507]: 2025-05-17T00:34:28.647757Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:34:28.654258 waagent[1507]: 2025-05-17T00:34:28.648508Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:34:29.195822 waagent[1507]: 2025-05-17T00:34:29.195671Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:34:29.207037 waagent[1507]: 2025-05-17T00:34:29.206945Z INFO Daemon Daemon Forcing an update of the goal state.. May 17 00:34:29.212007 waagent[1507]: 2025-05-17T00:34:29.207370Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:34:29.276859 waagent[1507]: 2025-05-17T00:34:29.276727Z INFO Daemon Daemon Found private key matching thumbprint 0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E May 17 00:34:29.281424 waagent[1507]: 2025-05-17T00:34:29.281346Z INFO Daemon Daemon Fetch goal state completed May 17 00:34:29.303582 waagent[1507]: 2025-05-17T00:34:29.303514Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 0bf7e12b-9509-4a52-9d35-b0976e649f3f New eTag: 14019277043328831308] May 17 00:34:29.309195 waagent[1507]: 2025-05-17T00:34:29.309128Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:34:29.320658 waagent[1507]: 2025-05-17T00:34:29.320596Z INFO Daemon Daemon Starting provisioning May 17 00:34:29.323319 waagent[1507]: 2025-05-17T00:34:29.323258Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:34:29.325811 waagent[1507]: 2025-05-17T00:34:29.325753Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-b02eecf252] May 17 00:34:29.366657 waagent[1507]: 2025-05-17T00:34:29.366499Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-b02eecf252] May 17 00:34:29.370415 waagent[1507]: 2025-05-17T00:34:29.370328Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:34:29.373788 waagent[1507]: 2025-05-17T00:34:29.373717Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:34:29.388018 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:34:29.388286 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:34:29.388359 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:34:29.388766 systemd[1]: Stopping systemd-networkd.service... May 17 00:34:29.394052 systemd-networkd[1174]: eth0: DHCPv6 lease lost May 17 00:34:29.395495 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:34:29.395647 systemd[1]: Stopped systemd-networkd.service. May 17 00:34:29.398326 systemd[1]: Starting systemd-networkd.service...
May 17 00:34:29.430102 systemd-networkd[1559]: enP54680s1: Link UP May 17 00:34:29.430113 systemd-networkd[1559]: enP54680s1: Gained carrier May 17 00:34:29.431550 systemd-networkd[1559]: eth0: Link UP May 17 00:34:29.431560 systemd-networkd[1559]: eth0: Gained carrier May 17 00:34:29.432007 systemd-networkd[1559]: lo: Link UP May 17 00:34:29.432016 systemd-networkd[1559]: lo: Gained carrier May 17 00:34:29.432376 systemd-networkd[1559]: eth0: Gained IPv6LL May 17 00:34:29.432651 systemd-networkd[1559]: Enumeration completed May 17 00:34:29.438440 waagent[1507]: 2025-05-17T00:34:29.434066Z INFO Daemon Daemon Create user account if not exists May 17 00:34:29.432765 systemd[1]: Started systemd-networkd.service. May 17 00:34:29.435734 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:34:29.439928 waagent[1507]: 2025-05-17T00:34:29.439838Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:34:29.443667 systemd-networkd[1559]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:34:29.443803 waagent[1507]: 2025-05-17T00:34:29.443620Z INFO Daemon Daemon Configure sudoer May 17 00:34:29.446972 waagent[1507]: 2025-05-17T00:34:29.446894Z INFO Daemon Daemon Configure sshd May 17 00:34:29.449470 waagent[1507]: 2025-05-17T00:34:29.449398Z INFO Daemon Daemon Deploy ssh public key. May 17 00:34:29.464062 systemd-networkd[1559]: eth0: DHCPv4 address 10.200.4.42/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:34:29.467385 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:34:30.574720 waagent[1507]: 2025-05-17T00:34:30.574624Z INFO Daemon Daemon Provisioning complete May 17 00:34:30.589364 waagent[1507]: 2025-05-17T00:34:30.589292Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:34:30.592634 waagent[1507]: 2025-05-17T00:34:30.592566Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 17 00:34:30.598235 waagent[1507]: 2025-05-17T00:34:30.598174Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:34:30.867707 waagent[1565]: 2025-05-17T00:34:30.867539Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:34:30.868441 waagent[1565]: 2025-05-17T00:34:30.868374Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:30.868589 waagent[1565]: 2025-05-17T00:34:30.868531Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:30.879436 waagent[1565]: 2025-05-17T00:34:30.879362Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
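The provisioning sequence above is driven by the Azure WireServer at 168.63.129.16: the daemon probes the endpoint, fetches the goal state [incarnation 1], and applies hostname and account settings from ovf-env.xml. A rough sketch of the goal-state request; the URL path and x-ms-version header follow WALinuxAgent's documented wire protocol (the log confirms version 2012-11-30), so treat them as assumptions rather than values recovered from this host.

    #!/usr/bin/env python3
    # Sketch: fetch the WireServer goal state the way waagent's wire
    # protocol does. The endpoint IP and protocol version appear in the
    # log; the URL path and header are assumptions based on the agent's
    # documented protocol.
    import urllib.request

    WIRESERVER = "168.63.129.16"
    req = urllib.request.Request(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},  # version seen in the log
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.read().decode())  # XML: incarnation, certificates, etc.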
May 17 00:34:30.879603 waagent[1565]: 2025-05-17T00:34:30.879547Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:34:30.931419 waagent[1565]: 2025-05-17T00:34:30.931296Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E May 17 00:34:30.931716 waagent[1565]: 2025-05-17T00:34:30.931655Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:34:30.944581 waagent[1565]: 2025-05-17T00:34:30.944514Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 067acec6-4d0e-407b-82d3-870238a96750 New eTag: 14019277043328831308] May 17 00:34:30.945121 waagent[1565]: 2025-05-17T00:34:30.945060Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:34:31.049045 waagent[1565]: 2025-05-17T00:34:31.048879Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:34:31.092857 waagent[1565]: 2025-05-17T00:34:31.092741Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1565 May 17 00:34:31.096736 waagent[1565]: 2025-05-17T00:34:31.096659Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:34:31.097905 waagent[1565]: 2025-05-17T00:34:31.097842Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:34:31.189900 waagent[1565]: 2025-05-17T00:34:31.189833Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:34:31.190385 waagent[1565]: 2025-05-17T00:34:31.190319Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:34:31.198557 waagent[1565]: 2025-05-17T00:34:31.198498Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:34:31.199064 waagent[1565]: 2025-05-17T00:34:31.198986Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:34:31.200219 waagent[1565]: 2025-05-17T00:34:31.200152Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 17 00:34:31.201503 waagent[1565]: 2025-05-17T00:34:31.201443Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:34:31.201913 waagent[1565]: 2025-05-17T00:34:31.201856Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:31.202100 waagent[1565]: 2025-05-17T00:34:31.202048Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:31.202617 waagent[1565]: 2025-05-17T00:34:31.202560Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 17 00:34:31.202900 waagent[1565]: 2025-05-17T00:34:31.202842Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:34:31.202900 waagent[1565]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:34:31.202900 waagent[1565]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:34:31.202900 waagent[1565]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:34:31.202900 waagent[1565]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:31.202900 waagent[1565]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:31.202900 waagent[1565]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:31.206091 waagent[1565]: 2025-05-17T00:34:31.205866Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:34:31.207261 waagent[1565]: 2025-05-17T00:34:31.207200Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:34:31.207484 waagent[1565]: 2025-05-17T00:34:31.207422Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:31.207605 waagent[1565]: 2025-05-17T00:34:31.207550Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:34:31.208357 waagent[1565]: 2025-05-17T00:34:31.208303Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:31.208703 waagent[1565]: 2025-05-17T00:34:31.208638Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:34:31.208822 waagent[1565]: 2025-05-17T00:34:31.208763Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:34:31.209420 waagent[1565]: 2025-05-17T00:34:31.209366Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:34:31.211488 waagent[1565]: 2025-05-17T00:34:31.211423Z INFO EnvHandler ExtHandler Configure routes May 17 00:34:31.212279 waagent[1565]: 2025-05-17T00:34:31.212235Z INFO EnvHandler ExtHandler Gateway:None May 17 00:34:31.212717 waagent[1565]: 2025-05-17T00:34:31.212672Z INFO EnvHandler ExtHandler Routes:None May 17 00:34:31.228171 waagent[1565]: 2025-05-17T00:34:31.228112Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 17 00:34:31.228862 waagent[1565]: 2025-05-17T00:34:31.228823Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:34:31.229765 waagent[1565]: 2025-05-17T00:34:31.229719Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' May 17 00:34:31.251817 waagent[1565]: 2025-05-17T00:34:31.251739Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1559' May 17 00:34:31.253321 waagent[1565]: 2025-05-17T00:34:31.253234Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
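The routing table above is raw /proc/net/route output, where addresses are little-endian hex: 10813FA8 decodes to the WireServer address 168.63.129.16 and 0104C80A to the 10.200.4.1 gateway acquired via DHCP earlier. A short decoding sketch:

    #!/usr/bin/env python3
    # Sketch: decode the little-endian hex addresses from /proc/net/route,
    # as dumped in the log above (e.g. 10813FA8 -> 168.63.129.16).
    import socket, struct

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores addresses as host-order (little-endian) hex
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    for dest, gw in [("00000000", "0104C80A"),   # default route via gateway
                     ("10813FA8", "0104C80A")]:  # WireServer host route
        print(f"dst {hex_to_ip(dest):>15}  via {hex_to_ip(gw)}")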
May 17 00:34:31.343379 waagent[1565]: 2025-05-17T00:34:31.343255Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:34:31.343379 waagent[1565]: Executing ['ip', '-a', '-o', 'link']: May 17 00:34:31.343379 waagent[1565]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:34:31.343379 waagent[1565]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2c:fe:c4 brd ff:ff:ff:ff:ff:ff May 17 00:34:31.343379 waagent[1565]: 3: enP54680s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2c:fe:c4 brd ff:ff:ff:ff:ff:ff\ altname enP54680p0s2 May 17 00:34:31.343379 waagent[1565]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:34:31.343379 waagent[1565]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:34:31.343379 waagent[1565]: 2: eth0 inet 10.200.4.42/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:34:31.343379 waagent[1565]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:34:31.343379 waagent[1565]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:34:31.343379 waagent[1565]: 2: eth0 inet6 fe80::7e1e:52ff:fe2c:fec4/64 scope link \ valid_lft forever preferred_lft forever May 17 00:34:31.450334 waagent[1565]: 2025-05-17T00:34:31.447440Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 17 00:34:31.518026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:34:31.518325 systemd[1]: Stopped kubelet.service. May 17 00:34:31.518386 systemd[1]: kubelet.service: Consumed 1.092s CPU time. May 17 00:34:31.520355 systemd[1]: Starting kubelet.service... May 17 00:34:31.602290 waagent[1507]: 2025-05-17T00:34:31.601233Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 17 00:34:31.607369 waagent[1507]: 2025-05-17T00:34:31.607250Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 17 00:34:32.159371 systemd[1]: Started kubelet.service. May 17 00:34:32.243657 kubelet[1598]: E0517 00:34:32.243595 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:32.247305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:32.247490 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:34:33.293721 waagent[1593]: 2025-05-17T00:34:33.293615Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 17 00:34:33.294976 waagent[1593]: 2025-05-17T00:34:33.294902Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 17 00:34:33.295163 waagent[1593]: 2025-05-17T00:34:33.295100Z INFO ExtHandler ExtHandler Python: 3.9.16 May 17 00:34:33.295322 waagent[1593]: 2025-05-17T00:34:33.295275Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 17 00:34:33.310581 waagent[1593]: 2025-05-17T00:34:33.310480Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:34:33.311001 waagent[1593]: 2025-05-17T00:34:33.310932Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:33.311184 waagent[1593]: 2025-05-17T00:34:33.311134Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:33.311415 waagent[1593]: 2025-05-17T00:34:33.311364Z INFO ExtHandler ExtHandler Initializing the goal state... May 17 00:34:33.323421 waagent[1593]: 2025-05-17T00:34:33.323348Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:34:33.331627 waagent[1593]: 2025-05-17T00:34:33.331562Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166 May 17 00:34:33.332529 waagent[1593]: 2025-05-17T00:34:33.332469Z INFO ExtHandler May 17 00:34:33.332675 waagent[1593]: 2025-05-17T00:34:33.332625Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9798b779-8ba9-4304-9d4a-27b7b71a9c59 eTag: 14019277043328831308 source: Fabric] May 17 00:34:33.333376 waagent[1593]: 2025-05-17T00:34:33.333318Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:34:33.334462 waagent[1593]: 2025-05-17T00:34:33.334402Z INFO ExtHandler May 17 00:34:33.334596 waagent[1593]: 2025-05-17T00:34:33.334545Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:34:33.340479 waagent[1593]: 2025-05-17T00:34:33.340428Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:34:33.340894 waagent[1593]: 2025-05-17T00:34:33.340847Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:34:33.360365 waagent[1593]: 2025-05-17T00:34:33.360303Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:34:33.413987 waagent[1593]: 2025-05-17T00:34:33.413862Z INFO ExtHandler Downloaded certificate {'thumbprint': '0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E', 'hasPrivateKey': True} May 17 00:34:33.415277 waagent[1593]: 2025-05-17T00:34:33.415209Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:34:33.416090 waagent[1593]: 2025-05-17T00:34:33.416031Z INFO ExtHandler ExtHandler Goal state initialization completed. 
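The goal state above references a certificate by thumbprint 0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E. Azure thumbprints are conventionally the uppercase SHA-1 digest of the DER-encoded certificate; here is a sketch for checking a downloaded certificate against the logged value, where the PEM path is a placeholder and the SHA-1-over-DER convention is an assumption, not something this log states.

    #!/usr/bin/env python3
    # Sketch: compute an Azure-style certificate thumbprint (SHA-1 over
    # the DER form). The expected value comes from the log; the file path
    # is a placeholder, and SHA-1-of-DER is the usual convention.
    import hashlib, ssl, sys

    pem = open(sys.argv[1]).read()      # e.g. a cert under /var/lib/waagent
    der = ssl.PEM_cert_to_DER_cert(pem)
    thumbprint = hashlib.sha1(der).hexdigest().upper()
    assert thumbprint == "0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E", thumbprint
    print("thumbprint matches goal state")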
May 17 00:34:33.433499 waagent[1593]: 2025-05-17T00:34:33.433405Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:34:33.441290 waagent[1593]: 2025-05-17T00:34:33.441200Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:34:33.444715 waagent[1593]: 2025-05-17T00:34:33.444620Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:34:33.444925 waagent[1593]: 2025-05-17T00:34:33.444873Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:34:33.572292 waagent[1593]: 2025-05-17T00:34:33.572118Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: May 17 00:34:33.572292 waagent[1593]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:34:33.572292 waagent[1593]: pkts bytes target prot opt in out source destination May 17 00:34:33.572292 waagent[1593]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:34:33.572292 waagent[1593]: pkts bytes target prot opt in out source destination May 17 00:34:33.572292 waagent[1593]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:34:33.572292 waagent[1593]: pkts bytes target prot opt in out source destination May 17 00:34:33.572292 waagent[1593]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:34:33.572292 waagent[1593]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:34:33.572292 waagent[1593]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:34:33.573356 waagent[1593]: 2025-05-17T00:34:33.573287Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:34:33.575874 waagent[1593]: 2025-05-17T00:34:33.575774Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:34:33.576141 waagent[1593]: 2025-05-17T00:34:33.576088Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:34:33.576505 waagent[1593]: 2025-05-17T00:34:33.576447Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:34:33.584541 waagent[1593]: 2025-05-17T00:34:33.584486Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:34:33.585006 waagent[1593]: 2025-05-17T00:34:33.584936Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:34:33.592163 waagent[1593]: 2025-05-17T00:34:33.592100Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1593 May 17 00:34:33.595023 waagent[1593]: 2025-05-17T00:34:33.594953Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:34:33.595736 waagent[1593]: 2025-05-17T00:34:33.595678Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:34:33.596546 waagent[1593]: 2025-05-17T00:34:33.596481Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:34:33.598964 waagent[1593]: 2025-05-17T00:34:33.598903Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 17 00:34:33.600237 waagent[1593]: 2025-05-17T00:34:33.600178Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:34:33.600563 waagent[1593]: 2025-05-17T00:34:33.600508Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:33.600926 waagent[1593]: 2025-05-17T00:34:33.600872Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:33.601428 waagent[1593]: 2025-05-17T00:34:33.601370Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:34:33.601703 waagent[1593]: 2025-05-17T00:34:33.601647Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:34:33.601703 waagent[1593]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:34:33.601703 waagent[1593]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:34:33.601703 waagent[1593]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:34:33.601703 waagent[1593]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:33.601703 waagent[1593]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:33.601703 waagent[1593]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:33.603912 waagent[1593]: 2025-05-17T00:34:33.603825Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:34:33.604829 waagent[1593]: 2025-05-17T00:34:33.604767Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:33.605150 waagent[1593]: 2025-05-17T00:34:33.605082Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:34:33.605427 waagent[1593]: 2025-05-17T00:34:33.605375Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:34:33.608396 waagent[1593]: 2025-05-17T00:34:33.608313Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:33.609016 waagent[1593]: 2025-05-17T00:34:33.608944Z INFO EnvHandler ExtHandler Configure routes May 17 00:34:33.609181 waagent[1593]: 2025-05-17T00:34:33.609131Z INFO EnvHandler ExtHandler Gateway:None May 17 00:34:33.609435 waagent[1593]: 2025-05-17T00:34:33.609378Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:34:33.609763 waagent[1593]: 2025-05-17T00:34:33.609712Z INFO EnvHandler ExtHandler Routes:None May 17 00:34:33.609908 waagent[1593]: 2025-05-17T00:34:33.609856Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
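The three OUTPUT rules the agent reports above implement its usual WireServer protection: permit DNS over TCP to 168.63.129.16, permit traffic from UID 0, and drop other new connections to that address. A sketch that recreates them in the same iptables style the agent logs (the -w lock wait and the security table); run as root, and treat the exact matches as a paraphrase of the table above rather than canonical agent behavior.

    #!/usr/bin/env python3
    # Sketch: recreate the Azure Fabric firewall rules shown in the log,
    # using iptables's security table as the agent does ('-w' waits for
    # the xtables lock). Rule order and matches mirror the log output.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w", "-t", "security", *rule], check=True)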
May 17 00:34:33.618501 waagent[1593]: 2025-05-17T00:34:33.618414Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:34:33.618501 waagent[1593]: Executing ['ip', '-a', '-o', 'link']: May 17 00:34:33.618501 waagent[1593]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:34:33.618501 waagent[1593]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2c:fe:c4 brd ff:ff:ff:ff:ff:ff May 17 00:34:33.618501 waagent[1593]: 3: enP54680s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2c:fe:c4 brd ff:ff:ff:ff:ff:ff\ altname enP54680p0s2 May 17 00:34:33.618501 waagent[1593]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:34:33.618501 waagent[1593]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:34:33.618501 waagent[1593]: 2: eth0 inet 10.200.4.42/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:34:33.618501 waagent[1593]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:34:33.618501 waagent[1593]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:34:33.618501 waagent[1593]: 2: eth0 inet6 fe80::7e1e:52ff:fe2c:fec4/64 scope link \ valid_lft forever preferred_lft forever May 17 00:34:33.618973 waagent[1593]: 2025-05-17T00:34:33.618606Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:34:33.635016 waagent[1593]: 2025-05-17T00:34:33.634928Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:34:33.650450 waagent[1593]: 2025-05-17T00:34:33.650387Z INFO ExtHandler ExtHandler May 17 00:34:33.651426 waagent[1593]: 2025-05-17T00:34:33.651366Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f97ccdc0-0c33-4909-b62a-e53b1a0ac8ad correlation 8aab08ba-6d34-4e48-a227-d3f9f23c58b5 created: 2025-05-17T00:32:44.585701Z] May 17 00:34:33.654474 waagent[1593]: 2025-05-17T00:34:33.654418Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:34:33.657792 waagent[1593]: 2025-05-17T00:34:33.657737Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 7 ms] May 17 00:34:33.669721 waagent[1593]: 2025-05-17T00:34:33.669649Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:34:33.685133 waagent[1593]: 2025-05-17T00:34:33.685071Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:34:33.687623 waagent[1593]: 2025-05-17T00:34:33.687560Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:34:33.691838 waagent[1593]: 2025-05-17T00:34:33.691779Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3D9E9CBC-8320-40CD-BF15-79244CAAD848;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:34:42.396296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:34:42.396613 systemd[1]: Stopped kubelet.service. May 17 00:34:42.398587 systemd[1]: Starting kubelet.service... May 17 00:34:42.608878 systemd[1]: Started kubelet.service. 
May 17 00:34:43.181705 kubelet[1648]: E0517 00:34:43.181650 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:43.183343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:43.183505 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:53.396357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:34:53.396682 systemd[1]: Stopped kubelet.service. May 17 00:34:53.398723 systemd[1]: Starting kubelet.service... May 17 00:34:53.739145 systemd[1]: Started kubelet.service. May 17 00:34:54.170131 kubelet[1657]: E0517 00:34:54.170081 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:54.171657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:54.171819 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:57.247786 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 17 00:35:01.850415 systemd[1]: Created slice system-sshd.slice. May 17 00:35:01.852287 systemd[1]: Started sshd@0-10.200.4.42:22-10.200.16.10:47310.service. May 17 00:35:02.892544 sshd[1663]: Accepted publickey for core from 10.200.16.10 port 47310 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:35:02.894117 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:02.897627 systemd-logind[1400]: New session 3 of user core. May 17 00:35:02.898831 systemd[1]: Started session-3.scope. May 17 00:35:03.407400 systemd[1]: Started sshd@1-10.200.4.42:22-10.200.16.10:47314.service. May 17 00:35:03.997300 sshd[1668]: Accepted publickey for core from 10.200.16.10 port 47314 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:35:03.998889 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:04.003496 systemd[1]: Started session-4.scope. May 17 00:35:04.003939 systemd-logind[1400]: New session 4 of user core. May 17 00:35:04.327154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:35:04.327398 systemd[1]: Stopped kubelet.service. May 17 00:35:04.329179 systemd[1]: Starting kubelet.service... May 17 00:35:04.430070 sshd[1668]: pam_unix(sshd:session): session closed for user core May 17 00:35:04.433587 systemd[1]: sshd@1-10.200.4.42:22-10.200.16.10:47314.service: Deactivated successfully. May 17 00:35:04.434450 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:35:04.435090 systemd-logind[1400]: Session 4 logged out. Waiting for processes to exit. May 17 00:35:04.435816 systemd-logind[1400]: Removed session 4. May 17 00:35:04.524425 systemd[1]: Started kubelet.service. May 17 00:35:04.530334 systemd[1]: Started sshd@2-10.200.4.42:22-10.200.16.10:47318.service. 
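The kubelet restart counter in these lines ticks at a steady cadence (00:34:31.518, 00:34:42.396, 00:34:53.396, and so on), consistent with a unit restarting on failure after roughly ten seconds plus startup time; the actual Restart=/RestartSec= settings are not in this log, so that reading is an inference. A sketch that measures the interval from the logged timestamps:

    #!/usr/bin/env python3
    # Sketch: diff the 'Scheduled restart job' timestamps from the log to
    # show the ~10-11 s restart cadence. Timestamps are copied verbatim;
    # the RestartSec explanation is an inference, not logged configuration.
    from datetime import datetime

    stamps = ["00:34:31.518026", "00:34:42.396296", "00:34:53.396357",
              "00:35:04.327154", "00:35:14.646284", "00:35:25.646309"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(times, times[1:]):
        print(f"{(b - a).total_seconds():.3f} s between restarts")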
May 17 00:35:04.568644 kubelet[1677]: E0517 00:35:04.568595 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:35:04.570559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:35:04.570672 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:35:05.124922 sshd[1684]: Accepted publickey for core from 10.200.16.10 port 47318 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:35:05.126657 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:05.132017 systemd[1]: Started session-5.scope. May 17 00:35:05.132699 systemd-logind[1400]: New session 5 of user core. May 17 00:35:05.543854 sshd[1684]: pam_unix(sshd:session): session closed for user core May 17 00:35:05.547161 systemd[1]: sshd@2-10.200.4.42:22-10.200.16.10:47318.service: Deactivated successfully. May 17 00:35:05.548099 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:35:05.548828 systemd-logind[1400]: Session 5 logged out. Waiting for processes to exit. May 17 00:35:05.549658 systemd-logind[1400]: Removed session 5. May 17 00:35:05.643155 systemd[1]: Started sshd@3-10.200.4.42:22-10.200.16.10:47324.service. May 17 00:35:05.743017 update_engine[1402]: I0517 00:35:05.742926 1402 update_attempter.cc:509] Updating boot flags... May 17 00:35:06.234902 sshd[1690]: Accepted publickey for core from 10.200.16.10 port 47324 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:35:06.236678 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:06.242351 systemd[1]: Started session-6.scope. May 17 00:35:06.242779 systemd-logind[1400]: New session 6 of user core. May 17 00:35:06.671382 sshd[1690]: pam_unix(sshd:session): session closed for user core May 17 00:35:06.674344 systemd[1]: sshd@3-10.200.4.42:22-10.200.16.10:47324.service: Deactivated successfully. May 17 00:35:06.675143 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:35:06.675738 systemd-logind[1400]: Session 6 logged out. Waiting for processes to exit. May 17 00:35:06.676485 systemd-logind[1400]: Removed session 6. May 17 00:35:06.771212 systemd[1]: Started sshd@4-10.200.4.42:22-10.200.16.10:47336.service. May 17 00:35:07.363795 sshd[1762]: Accepted publickey for core from 10.200.16.10 port 47336 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:35:07.365487 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:07.370511 systemd[1]: Started session-7.scope. May 17 00:35:07.371229 systemd-logind[1400]: New session 7 of user core. May 17 00:35:08.161275 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:35:08.161647 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:35:08.202151 systemd[1]: Starting docker.service... 
May 17 00:35:08.238723 env[1775]: time="2025-05-17T00:35:08.238669574Z" level=info msg="Starting up" May 17 00:35:08.239870 env[1775]: time="2025-05-17T00:35:08.239842279Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:35:08.239981 env[1775]: time="2025-05-17T00:35:08.239969979Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:35:08.240452 env[1775]: time="2025-05-17T00:35:08.240236680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:35:08.240452 env[1775]: time="2025-05-17T00:35:08.240260081Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:35:08.242015 env[1775]: time="2025-05-17T00:35:08.241972387Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:35:08.242015 env[1775]: time="2025-05-17T00:35:08.241989788Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:35:08.242160 env[1775]: time="2025-05-17T00:35:08.242025288Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:35:08.242160 env[1775]: time="2025-05-17T00:35:08.242038788Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:35:08.248891 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2341396518-merged.mount: Deactivated successfully. May 17 00:35:08.338144 env[1775]: time="2025-05-17T00:35:08.338099275Z" level=info msg="Loading containers: start." May 17 00:35:08.517016 kernel: Initializing XFRM netlink socket May 17 00:35:08.542748 env[1775]: time="2025-05-17T00:35:08.542707199Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:35:08.683720 systemd-networkd[1559]: docker0: Link UP May 17 00:35:08.710118 env[1775]: time="2025-05-17T00:35:08.710077674Z" level=info msg="Loading containers: done." May 17 00:35:08.722980 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1212118958-merged.mount: Deactivated successfully. May 17 00:35:08.736301 env[1775]: time="2025-05-17T00:35:08.736264380Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:35:08.736501 env[1775]: time="2025-05-17T00:35:08.736477280Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:35:08.736611 env[1775]: time="2025-05-17T00:35:08.736589281Z" level=info msg="Daemon has completed initialization" May 17 00:35:08.775830 systemd[1]: Started docker.service. May 17 00:35:08.785784 env[1775]: time="2025-05-17T00:35:08.785726879Z" level=info msg="API listen on /run/docker.sock" May 17 00:35:10.506567 env[1413]: time="2025-05-17T00:35:10.506508823Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:35:11.351612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753549495.mount: Deactivated successfully. 
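Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket with plain HTTP. A stdlib-only sketch querying the /version endpoint; the HTTP/1.0 request keeps the response unchunked, and no Docker SDK is assumed.

    #!/usr/bin/env python3
    # Sketch: talk to the Docker Engine API over the Unix socket from the
    # log ("API listen on /run/docker.sock") using only the standard
    # library. HTTP/1.0 so the daemon closes the connection when done.
    import json, socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/docker.sock")
    sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk
    headers, _, body = raw.partition(b"\r\n\r\n")
    print(json.loads(body)["Version"])  # e.g. 20.10.23 per the daemon log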
May 17 00:35:13.134124 env[1413]: time="2025-05-17T00:35:13.134053198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:13.140325 env[1413]: time="2025-05-17T00:35:13.140287516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:13.143579 env[1413]: time="2025-05-17T00:35:13.143544025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:13.148333 env[1413]: time="2025-05-17T00:35:13.148295639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:13.149174 env[1413]: time="2025-05-17T00:35:13.149143042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:35:13.149872 env[1413]: time="2025-05-17T00:35:13.149846144Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:35:14.646284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:35:14.646530 systemd[1]: Stopped kubelet.service. May 17 00:35:14.648392 systemd[1]: Starting kubelet.service... May 17 00:35:14.783801 systemd[1]: Started kubelet.service. May 17 00:35:15.411116 kubelet[1892]: E0517 00:35:15.411061 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:35:15.412768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:35:15.412931 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:35:15.438872 env[1413]: time="2025-05-17T00:35:15.438822888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:15.448492 env[1413]: time="2025-05-17T00:35:15.448425113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:15.456123 env[1413]: time="2025-05-17T00:35:15.456067832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:15.466134 env[1413]: time="2025-05-17T00:35:15.466095858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:15.466813 env[1413]: time="2025-05-17T00:35:15.466778660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:35:15.467610 env[1413]: time="2025-05-17T00:35:15.467583662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:35:16.863613 env[1413]: time="2025-05-17T00:35:16.863548545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:16.870273 env[1413]: time="2025-05-17T00:35:16.870229476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:16.876248 env[1413]: time="2025-05-17T00:35:16.876214503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:16.880915 env[1413]: time="2025-05-17T00:35:16.880882924Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:16.881576 env[1413]: time="2025-05-17T00:35:16.881544327Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:35:16.882718 env[1413]: time="2025-05-17T00:35:16.882691532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:35:18.300694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598937650.mount: Deactivated successfully. 
May 17 00:35:18.957696 env[1413]: time="2025-05-17T00:35:18.957631081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.966321 env[1413]: time="2025-05-17T00:35:18.966276741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.970172 env[1413]: time="2025-05-17T00:35:18.970092711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.973892 env[1413]: time="2025-05-17T00:35:18.973858081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.974318 env[1413]: time="2025-05-17T00:35:18.974288888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:35:18.975003 env[1413]: time="2025-05-17T00:35:18.974967501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:35:19.647279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419452182.mount: Deactivated successfully. May 17 00:35:21.092764 env[1413]: time="2025-05-17T00:35:21.092705303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.101083 env[1413]: time="2025-05-17T00:35:21.101036244Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.104733 env[1413]: time="2025-05-17T00:35:21.104679606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.109083 env[1413]: time="2025-05-17T00:35:21.109046080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.109945 env[1413]: time="2025-05-17T00:35:21.109912395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:35:21.110710 env[1413]: time="2025-05-17T00:35:21.110671108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:35:21.704316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464452037.mount: Deactivated successfully. 
May 17 00:35:21.737823 env[1413]: time="2025-05-17T00:35:21.737766441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.747935 env[1413]: time="2025-05-17T00:35:21.747895413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.753835 env[1413]: time="2025-05-17T00:35:21.753804013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.759731 env[1413]: time="2025-05-17T00:35:21.759697213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:21.760150 env[1413]: time="2025-05-17T00:35:21.760119020Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:35:21.760793 env[1413]: time="2025-05-17T00:35:21.760766431Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:35:22.431925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435696043.mount: Deactivated successfully. May 17 00:35:25.105210 env[1413]: time="2025-05-17T00:35:25.105142535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:25.115746 env[1413]: time="2025-05-17T00:35:25.115696596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:25.119149 env[1413]: time="2025-05-17T00:35:25.119116248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:25.124881 env[1413]: time="2025-05-17T00:35:25.124846435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:25.125613 env[1413]: time="2025-05-17T00:35:25.125578846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:35:25.646309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:35:25.646608 systemd[1]: Stopped kubelet.service. May 17 00:35:25.648680 systemd[1]: Starting kubelet.service... May 17 00:35:25.742789 systemd[1]: Started kubelet.service. 
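The kubelet.service lines at the end of this chunk ("Scheduled restart job, restart counter is at 6", Stopped, Starting, Started) are systemd's Restart= machinery cycling a failing unit. A toy supervision loop under those assumptions; real systemd adds RestartSec=, StartLimitBurst= and friends, and the path and delay below are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// A toy version of Restart=always: run the unit's command, and on
// failure schedule a restart job after a delay, bumping the per-unit
// counter each time (cf. "restart counter is at 6" in the log).
func main() {
	const restartSec = 10 * time.Second // assumed; unit-specific in practice
	for counter := 1; ; counter++ {
		err := exec.Command("/usr/bin/kubelet").Run() // the unit's ExecStart
		if err == nil {
			return
		}
		fmt.Printf("kubelet.service: Main process exited: %v\n", err)
		fmt.Printf("kubelet.service: Scheduled restart job, restart counter is at %d.\n", counter)
		time.Sleep(restartSec)
	}
}
```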
May 17 00:35:25.797205 kubelet[1906]: E0517 00:35:25.797168 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:35:25.798899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:35:25.799077 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:35:28.658859 systemd[1]: Stopped kubelet.service. May 17 00:35:28.661734 systemd[1]: Starting kubelet.service... May 17 00:35:28.690586 systemd[1]: Reloading. May 17 00:35:28.815577 /usr/lib/systemd/system-generators/torcx-generator[1956]: time="2025-05-17T00:35:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:35:28.816043 /usr/lib/systemd/system-generators/torcx-generator[1956]: time="2025-05-17T00:35:28Z" level=info msg="torcx already run" May 17 00:35:28.884379 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:35:28.884397 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:35:28.906313 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:35:29.013650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:35:29.013741 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:35:29.014029 systemd[1]: Stopped kubelet.service. May 17 00:35:29.016104 systemd[1]: Starting kubelet.service... May 17 00:35:29.239709 systemd[1]: Started kubelet.service. May 17 00:35:29.281429 kubelet[2017]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:35:29.281429 kubelet[2017]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:35:29.281429 kubelet[2017]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
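The failure itself is unambiguous: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and every restart fails the same way until bootstrap writes that file. A preflight check that reproduces the same underlying error:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	f, err := os.Open(path)
	if err != nil {
		// Prints the same underlying error as the kubelet entry above:
		//   open /var/lib/kubelet/config.yaml: no such file or directory
		fmt.Printf("failed to load Kubelet config file %s: %v\n", path, err)
		os.Exit(1)
	}
	f.Close()
	fmt.Println("kubelet config present")
}
```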
May 17 00:35:29.281429 kubelet[2017]: I0517 00:35:29.281005 2017 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:35:29.478063 kubelet[2017]: I0517 00:35:29.478022 2017 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:35:29.478063 kubelet[2017]: I0517 00:35:29.478054 2017 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:35:29.478372 kubelet[2017]: I0517 00:35:29.478351 2017 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:35:30.023979 kubelet[2017]: E0517 00:35:30.023105 2017 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:30.023979 kubelet[2017]: I0517 00:35:30.023641 2017 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:35:30.063327 kubelet[2017]: E0517 00:35:30.063289 2017 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:35:30.063542 kubelet[2017]: I0517 00:35:30.063528 2017 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:35:30.070528 kubelet[2017]: I0517 00:35:30.070503 2017 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:35:30.070787 kubelet[2017]: I0517 00:35:30.070769 2017 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:35:30.070984 kubelet[2017]: I0517 00:35:30.070956 2017 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:35:30.071209 kubelet[2017]: I0517 00:35:30.070981 2017 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-b02eecf252","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:35:30.071366 kubelet[2017]: I0517 00:35:30.071223 2017 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:35:30.071366 kubelet[2017]: I0517 00:35:30.071236 2017 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:35:30.071366 kubelet[2017]: I0517 00:35:30.071363 2017 state_mem.go:36] "Initialized new in-memory state store" May 17 00:35:30.077772 kubelet[2017]: I0517 00:35:30.077730 2017 kubelet.go:408] "Attempting to sync node with API server" May 17 00:35:30.077881 kubelet[2017]: I0517 00:35:30.077785 2017 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:35:30.077881 kubelet[2017]: I0517 00:35:30.077848 2017 kubelet.go:314] "Adding apiserver pod source" May 17 00:35:30.077881 kubelet[2017]: I0517 00:35:30.077874 2017 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:35:30.090451 kubelet[2017]: W0517 00:35:30.090391 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b02eecf252&limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:30.090636 kubelet[2017]: E0517 00:35:30.090614 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b02eecf252&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:30.091215 kubelet[2017]: I0517 00:35:30.091185 2017 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:35:30.091819 kubelet[2017]: I0517 00:35:30.091722 2017 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:35:30.091819 kubelet[2017]: W0517 00:35:30.091790 2017 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:35:30.101471 kubelet[2017]: W0517 00:35:30.100757 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:30.101471 kubelet[2017]: E0517 00:35:30.100819 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:30.101671 kubelet[2017]: I0517 00:35:30.101656 2017 server.go:1274] "Started kubelet" May 17 00:35:30.107398 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 17 00:35:30.107562 kubelet[2017]: I0517 00:35:30.107541 2017 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:35:30.113664 kubelet[2017]: I0517 00:35:30.113625 2017 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:35:30.114774 kubelet[2017]: I0517 00:35:30.114750 2017 server.go:449] "Adding debug handlers to kubelet server" May 17 00:35:30.117416 kubelet[2017]: I0517 00:35:30.117380 2017 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:35:30.117614 kubelet[2017]: I0517 00:35:30.117595 2017 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:35:30.117804 kubelet[2017]: I0517 00:35:30.117784 2017 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:35:30.119361 kubelet[2017]: I0517 00:35:30.119340 2017 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:35:30.119548 kubelet[2017]: E0517 00:35:30.119527 2017 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b02eecf252\" not found" May 17 00:35:30.120875 kubelet[2017]: E0517 00:35:30.114974 2017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-b02eecf252.1840295be6424eea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-b02eecf252,UID:ci-3510.3.7-n-b02eecf252,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-b02eecf252,},FirstTimestamp:2025-05-17 00:35:30.101620458 +0000 UTC m=+0.857690052,LastTimestamp:2025-05-17 00:35:30.101620458 +0000 UTC m=+0.857690052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-b02eecf252,}" May 17 00:35:30.121035 kubelet[2017]: E0517 00:35:30.121011 2017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b02eecf252?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="200ms" May 17 00:35:30.121099 kubelet[2017]: I0517 00:35:30.121065 2017 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:35:30.122175 kubelet[2017]: W0517 00:35:30.121619 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:30.122175 kubelet[2017]: E0517 00:35:30.121658 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:30.122175 kubelet[2017]: I0517 00:35:30.121810 2017 reconciler.go:26] "Reconciler: start to sync state" May 17 00:35:30.123168 kubelet[2017]: I0517 00:35:30.123148 2017 factory.go:221] Registration of the systemd container factory successfully May 17 00:35:30.123351 kubelet[2017]: I0517 00:35:30.123330 2017 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:35:30.125485 kubelet[2017]: I0517 00:35:30.125468 2017 factory.go:221] Registration of the containerd container factory successfully May 17 00:35:30.153126 kubelet[2017]: E0517 00:35:30.153090 2017 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:35:30.172743 kubelet[2017]: I0517 00:35:30.172648 2017 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:35:30.172743 kubelet[2017]: I0517 00:35:30.172748 2017 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:35:30.172917 kubelet[2017]: I0517 00:35:30.172767 2017 state_mem.go:36] "Initialized new in-memory state store" May 17 00:35:30.179667 kubelet[2017]: I0517 00:35:30.179381 2017 policy_none.go:49] "None policy: Start" May 17 00:35:30.180104 kubelet[2017]: I0517 00:35:30.180084 2017 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:35:30.180205 kubelet[2017]: I0517 00:35:30.180110 2017 state_mem.go:35] "Initializing new in-memory state store" May 17 00:35:30.181466 kubelet[2017]: I0517 00:35:30.181437 2017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:35:30.182617 kubelet[2017]: I0517 00:35:30.182595 2017 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:35:30.182617 kubelet[2017]: I0517 00:35:30.182618 2017 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:35:30.184141 kubelet[2017]: I0517 00:35:30.182640 2017 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:35:30.184141 kubelet[2017]: E0517 00:35:30.182684 2017 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:35:30.184450 kubelet[2017]: W0517 00:35:30.184314 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:30.184450 kubelet[2017]: E0517 00:35:30.184380 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:30.190915 systemd[1]: Created slice kubepods.slice. May 17 00:35:30.195606 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:35:30.198583 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:35:30.205620 kubelet[2017]: I0517 00:35:30.205603 2017 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:35:30.206157 kubelet[2017]: I0517 00:35:30.206141 2017 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:35:30.206300 kubelet[2017]: I0517 00:35:30.206259 2017 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:35:30.206778 kubelet[2017]: I0517 00:35:30.206761 2017 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:35:30.208410 kubelet[2017]: E0517 00:35:30.208393 2017 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-b02eecf252\" not found" May 17 00:35:30.292706 systemd[1]: Created slice kubepods-burstable-pod117681e3e062a563df96a432edf5456d.slice. May 17 00:35:30.301837 systemd[1]: Created slice kubepods-burstable-pod6a7d4dc60bbf704a59ff1a97b8805a0a.slice. May 17 00:35:30.305485 systemd[1]: Created slice kubepods-burstable-pod4f24a88da43d9e543c73c59b93cda31a.slice. 
May 17 00:35:30.308333 kubelet[2017]: I0517 00:35:30.308310 2017 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:30.308843 kubelet[2017]: E0517 00:35:30.308814 2017 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:30.322274 kubelet[2017]: E0517 00:35:30.322244 2017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b02eecf252?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="400ms" May 17 00:35:30.323479 kubelet[2017]: I0517 00:35:30.323446 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/117681e3e062a563df96a432edf5456d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" (UID: \"117681e3e062a563df96a432edf5456d\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323572 kubelet[2017]: I0517 00:35:30.323481 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323572 kubelet[2017]: I0517 00:35:30.323505 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323572 kubelet[2017]: I0517 00:35:30.323524 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/117681e3e062a563df96a432edf5456d-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" (UID: \"117681e3e062a563df96a432edf5456d\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323572 kubelet[2017]: I0517 00:35:30.323547 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/117681e3e062a563df96a432edf5456d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" (UID: \"117681e3e062a563df96a432edf5456d\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323572 kubelet[2017]: I0517 00:35:30.323568 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323767 kubelet[2017]: I0517 00:35:30.323590 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323767 kubelet[2017]: I0517 00:35:30.323614 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.323767 kubelet[2017]: I0517 00:35:30.323665 2017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a7d4dc60bbf704a59ff1a97b8805a0a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-b02eecf252\" (UID: \"6a7d4dc60bbf704a59ff1a97b8805a0a\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-b02eecf252" May 17 00:35:30.511000 kubelet[2017]: I0517 00:35:30.510957 2017 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:30.511579 kubelet[2017]: E0517 00:35:30.511540 2017 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:30.601457 env[1413]: time="2025-05-17T00:35:30.601062684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-b02eecf252,Uid:117681e3e062a563df96a432edf5456d,Namespace:kube-system,Attempt:0,}" May 17 00:35:30.605103 env[1413]: time="2025-05-17T00:35:30.604930935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-b02eecf252,Uid:6a7d4dc60bbf704a59ff1a97b8805a0a,Namespace:kube-system,Attempt:0,}" May 17 00:35:30.609142 env[1413]: time="2025-05-17T00:35:30.608861787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-b02eecf252,Uid:4f24a88da43d9e543c73c59b93cda31a,Namespace:kube-system,Attempt:0,}" May 17 00:35:30.723165 kubelet[2017]: E0517 00:35:30.723109 2017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b02eecf252?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="800ms" May 17 00:35:30.913772 kubelet[2017]: I0517 00:35:30.913741 2017 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:30.914129 kubelet[2017]: E0517 00:35:30.914096 2017 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:30.992451 kubelet[2017]: W0517 00:35:30.992389 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:30.992614 kubelet[2017]: E0517 00:35:30.992459 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:31.251911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251944203.mount: Deactivated successfully. May 17 00:35:31.260877 kubelet[2017]: W0517 00:35:31.260817 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b02eecf252&limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:31.261038 kubelet[2017]: E0517 00:35:31.260889 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b02eecf252&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:31.264385 kubelet[2017]: W0517 00:35:31.264349 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:31.264476 kubelet[2017]: E0517 00:35:31.264403 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:31.282457 env[1413]: time="2025-05-17T00:35:31.282410625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.285409 env[1413]: time="2025-05-17T00:35:31.285372063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.292773 env[1413]: time="2025-05-17T00:35:31.292725958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.295333 env[1413]: time="2025-05-17T00:35:31.295243291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.299407 env[1413]: time="2025-05-17T00:35:31.299373544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.305493 env[1413]: time="2025-05-17T00:35:31.305456423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.309043 env[1413]: time="2025-05-17T00:35:31.309011769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 
00:35:31.314256 env[1413]: time="2025-05-17T00:35:31.314222136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.317250 env[1413]: time="2025-05-17T00:35:31.317219275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.323060 env[1413]: time="2025-05-17T00:35:31.323028350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.332689 env[1413]: time="2025-05-17T00:35:31.332647574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.344233 env[1413]: time="2025-05-17T00:35:31.344192123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:31.383652 env[1413]: time="2025-05-17T00:35:31.383577232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:31.383652 env[1413]: time="2025-05-17T00:35:31.383614132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:31.383904 env[1413]: time="2025-05-17T00:35:31.383645433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:31.383904 env[1413]: time="2025-05-17T00:35:31.383798735Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74409b2fcea190795cecfd37aeb975cff2775b7c25edc2ef242302b312c9e082 pid=2057 runtime=io.containerd.runc.v2 May 17 00:35:31.410710 systemd[1]: Started cri-containerd-74409b2fcea190795cecfd37aeb975cff2775b7c25edc2ef242302b312c9e082.scope. May 17 00:35:31.419259 env[1413]: time="2025-05-17T00:35:31.418280880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:31.419259 env[1413]: time="2025-05-17T00:35:31.418384681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:31.419259 env[1413]: time="2025-05-17T00:35:31.418420582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:31.419259 env[1413]: time="2025-05-17T00:35:31.418623885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b3a0e44572377dc2a4202ca5461a6799b39df766ffbc13765959492b8771f74 pid=2085 runtime=io.containerd.runc.v2 May 17 00:35:31.446873 kubelet[2017]: E0517 00:35:31.441437 2017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-b02eecf252.1840295be6424eea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-b02eecf252,UID:ci-3510.3.7-n-b02eecf252,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-b02eecf252,},FirstTimestamp:2025-05-17 00:35:30.101620458 +0000 UTC m=+0.857690052,LastTimestamp:2025-05-17 00:35:30.101620458 +0000 UTC m=+0.857690052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-b02eecf252,}" May 17 00:35:31.447780 systemd[1]: Started cri-containerd-4b3a0e44572377dc2a4202ca5461a6799b39df766ffbc13765959492b8771f74.scope. May 17 00:35:31.452645 env[1413]: time="2025-05-17T00:35:31.452567723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:31.452645 env[1413]: time="2025-05-17T00:35:31.452622824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:31.452870 env[1413]: time="2025-05-17T00:35:31.452824626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:31.453137 env[1413]: time="2025-05-17T00:35:31.453091630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5396c736cc51c306fedbccec7d3e638aeb72f1a3b92ff3a1859d248dc7b2259c pid=2115 runtime=io.containerd.runc.v2 May 17 00:35:31.469581 systemd[1]: Started cri-containerd-5396c736cc51c306fedbccec7d3e638aeb72f1a3b92ff3a1859d248dc7b2259c.scope. 
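For each sandbox, containerd launches a runc v2 shim ("starting signal loop"), and both identifiers printed above are pure functions of the 64-hex task ID: the shim's task directory and the systemd scope unit. A sketch of both derivations, using the scheduler sandbox ID from the log:

```go
package main

import (
	"fmt"
	"path"
)

// Both strings in the log derive directly from the sandbox/task ID:
//   path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>
//   unit=cri-containerd-<id>.scope
func shimTaskPath(namespace, id string) string {
	return path.Join("/run/containerd/io.containerd.runtime.v2.task", namespace, id)
}

func shimScopeUnit(id string) string {
	return "cri-containerd-" + id + ".scope"
}

func main() {
	id := "5396c736cc51c306fedbccec7d3e638aeb72f1a3b92ff3a1859d248dc7b2259c"
	fmt.Println(shimTaskPath("k8s.io", id))
	fmt.Println(shimScopeUnit(id))
}
```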
May 17 00:35:31.506300 env[1413]: time="2025-05-17T00:35:31.504963700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-b02eecf252,Uid:117681e3e062a563df96a432edf5456d,Namespace:kube-system,Attempt:0,} returns sandbox id \"74409b2fcea190795cecfd37aeb975cff2775b7c25edc2ef242302b312c9e082\"" May 17 00:35:31.513306 env[1413]: time="2025-05-17T00:35:31.513264207Z" level=info msg="CreateContainer within sandbox \"74409b2fcea190795cecfd37aeb975cff2775b7c25edc2ef242302b312c9e082\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:35:31.524748 kubelet[2017]: E0517 00:35:31.524658 2017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b02eecf252?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="1.6s" May 17 00:35:31.545544 env[1413]: time="2025-05-17T00:35:31.545496023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-b02eecf252,Uid:4f24a88da43d9e543c73c59b93cda31a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b3a0e44572377dc2a4202ca5461a6799b39df766ffbc13765959492b8771f74\"" May 17 00:35:31.548197 env[1413]: time="2025-05-17T00:35:31.548161258Z" level=info msg="CreateContainer within sandbox \"4b3a0e44572377dc2a4202ca5461a6799b39df766ffbc13765959492b8771f74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:35:31.556863 env[1413]: time="2025-05-17T00:35:31.555989559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-b02eecf252,Uid:6a7d4dc60bbf704a59ff1a97b8805a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5396c736cc51c306fedbccec7d3e638aeb72f1a3b92ff3a1859d248dc7b2259c\"" May 17 00:35:31.560157 env[1413]: time="2025-05-17T00:35:31.560118812Z" level=info msg="CreateContainer within sandbox \"5396c736cc51c306fedbccec7d3e638aeb72f1a3b92ff3a1859d248dc7b2259c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:35:31.593524 env[1413]: time="2025-05-17T00:35:31.593483443Z" level=info msg="CreateContainer within sandbox \"74409b2fcea190795cecfd37aeb975cff2775b7c25edc2ef242302b312c9e082\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1ff4f65f9083da94519071d2ed70e31af6285732674cc6455e467ad9d389aa0\"" May 17 00:35:31.594287 env[1413]: time="2025-05-17T00:35:31.594253753Z" level=info msg="StartContainer for \"d1ff4f65f9083da94519071d2ed70e31af6285732674cc6455e467ad9d389aa0\"" May 17 00:35:31.617462 systemd[1]: Started cri-containerd-d1ff4f65f9083da94519071d2ed70e31af6285732674cc6455e467ad9d389aa0.scope. 
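Note the lease controller's "will retry" interval across this log: 200ms, then 400ms, 800ms, and now 1.6s, a doubling backoff while 10.200.4.42:6443 still refuses connections. A minimal sketch of that doubling; the cap is an assumption, since the log only shows intervals up to 1.6s:

```go
package main

import (
	"fmt"
	"time"
)

// Doubling backoff matching the intervals observed in the log:
// 200ms, 400ms, 800ms, 1.6s, ... capped at maxInterval.
func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap, not shown in the log
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed, will retry, interval=%v\n", attempt, interval)
		time.Sleep(interval) // in the kubelet this wraps the lease GET
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```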
May 17 00:35:31.621249 env[1413]: time="2025-05-17T00:35:31.619036673Z" level=info msg="CreateContainer within sandbox \"4b3a0e44572377dc2a4202ca5461a6799b39df766ffbc13765959492b8771f74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea04ad4fca54e964ba982b5bf8fb880b1debef0372a8d9eccce2f592dd1c21bf\"" May 17 00:35:31.621249 env[1413]: time="2025-05-17T00:35:31.619628981Z" level=info msg="StartContainer for \"ea04ad4fca54e964ba982b5bf8fb880b1debef0372a8d9eccce2f592dd1c21bf\"" May 17 00:35:31.632493 env[1413]: time="2025-05-17T00:35:31.632440447Z" level=info msg="CreateContainer within sandbox \"5396c736cc51c306fedbccec7d3e638aeb72f1a3b92ff3a1859d248dc7b2259c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc31d55b89a70392f5c5fa85d1399575aafb4956b9448f61f9bf7e88df794c7b\"" May 17 00:35:31.633309 env[1413]: time="2025-05-17T00:35:31.633274957Z" level=info msg="StartContainer for \"cc31d55b89a70392f5c5fa85d1399575aafb4956b9448f61f9bf7e88df794c7b\"" May 17 00:35:31.645341 kubelet[2017]: W0517 00:35:31.643722 2017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.42:6443: connect: connection refused May 17 00:35:31.645341 kubelet[2017]: E0517 00:35:31.643806 2017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:31.654639 systemd[1]: Started cri-containerd-ea04ad4fca54e964ba982b5bf8fb880b1debef0372a8d9eccce2f592dd1c21bf.scope. May 17 00:35:31.678585 systemd[1]: Started cri-containerd-cc31d55b89a70392f5c5fa85d1399575aafb4956b9448f61f9bf7e88df794c7b.scope. 
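The RunPodSandbox, "CreateContainer within sandbox", and StartContainer entries around here are the standard CRI sequence the kubelet drives for each static pod: create the sandbox, create the container inside it, then start it. A bare-client sketch of that call order, assuming the k8s.io/cri-api v1 bindings and containerd's default socket; this illustrates the sequence only, not the kubelet's actual code path, which layers config resolution, pull policy, and retries on top:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Names and UID taken from the kube-scheduler entries in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-ci-3510.3.7-n-b02eecf252",
			Uid:       "6a7d4dc60bbf704a59ff1a97b8805a0a",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.9"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```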
May 17 00:35:31.716403 kubelet[2017]: I0517 00:35:31.716371 2017 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:31.716789 kubelet[2017]: E0517 00:35:31.716748 2017 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:31.717044 env[1413]: time="2025-05-17T00:35:31.717006839Z" level=info msg="StartContainer for \"d1ff4f65f9083da94519071d2ed70e31af6285732674cc6455e467ad9d389aa0\" returns successfully" May 17 00:35:31.747927 env[1413]: time="2025-05-17T00:35:31.747652135Z" level=info msg="StartContainer for \"ea04ad4fca54e964ba982b5bf8fb880b1debef0372a8d9eccce2f592dd1c21bf\" returns successfully" May 17 00:35:31.882043 env[1413]: time="2025-05-17T00:35:31.881918269Z" level=info msg="StartContainer for \"cc31d55b89a70392f5c5fa85d1399575aafb4956b9448f61f9bf7e88df794c7b\" returns successfully" May 17 00:35:33.318932 kubelet[2017]: I0517 00:35:33.318899 2017 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:35.157935 kubelet[2017]: I0517 00:35:35.157897 2017 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:36.153567 kubelet[2017]: I0517 00:35:36.153532 2017 apiserver.go:52] "Watching apiserver" May 17 00:35:36.221472 kubelet[2017]: I0517 00:35:36.221426 2017 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:35:36.499264 systemd[1]: Reloading. May 17 00:35:36.602463 /usr/lib/systemd/system-generators/torcx-generator[2305]: time="2025-05-17T00:35:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:35:36.612089 /usr/lib/systemd/system-generators/torcx-generator[2305]: time="2025-05-17T00:35:36Z" level=info msg="torcx already run" May 17 00:35:36.679438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:35:36.679458 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:35:36.695932 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:35:36.807263 systemd[1]: Stopping kubelet.service... May 17 00:35:36.826510 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:35:36.826718 systemd[1]: Stopped kubelet.service. May 17 00:35:36.828768 systemd[1]: Starting kubelet.service... May 17 00:35:37.117741 systemd[1]: Started kubelet.service. May 17 00:35:37.621935 kubelet[2369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:35:37.621935 kubelet[2369]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
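Node registration succeeds only once kube-apiserver answers: every earlier "Unable to register node" error is a plain TCP connection refusal to 10.200.4.42:6443, and it clears shortly after the apiserver container starts (attempt at 00:35:33, confirmed registered by 00:35:35). A probe sketch for exactly that readiness condition, using the endpoint from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Waits for the condition the kubelet was retrying on: a TCP connect
// to the apiserver endpoint succeeding instead of "connection refused".
func main() {
	const apiserver = "10.200.4.42:6443" // address taken from the log
	for {
		conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable:", apiserver)
			return
		}
		fmt.Println("still waiting:", err)
		time.Sleep(time.Second)
	}
}
```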
May 17 00:35:37.621935 kubelet[2369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:35:37.622443 kubelet[2369]: I0517 00:35:37.622040 2369 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:35:37.629431 kubelet[2369]: I0517 00:35:37.629394 2369 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:35:37.629431 kubelet[2369]: I0517 00:35:37.629418 2369 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:35:37.629722 kubelet[2369]: I0517 00:35:37.629697 2369 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:35:37.631158 kubelet[2369]: I0517 00:35:37.631133 2369 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:35:37.633591 kubelet[2369]: I0517 00:35:37.633558 2369 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:35:37.639706 kubelet[2369]: E0517 00:35:37.639652 2369 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:35:37.639832 kubelet[2369]: I0517 00:35:37.639820 2369 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:35:37.643120 kubelet[2369]: I0517 00:35:37.643093 2369 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:35:37.643238 kubelet[2369]: I0517 00:35:37.643222 2369 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:35:37.643378 kubelet[2369]: I0517 00:35:37.643352 2369 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:35:37.643566 kubelet[2369]: I0517 00:35:37.643374 2369 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-b02eecf252","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:35:37.643700 kubelet[2369]: I0517 00:35:37.643575 2369 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:35:37.643700 kubelet[2369]: I0517 00:35:37.643589 2369 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:35:37.643700 kubelet[2369]: I0517 00:35:37.643621 2369 state_mem.go:36] "Initialized new in-memory state store" May 17 00:35:37.643831 kubelet[2369]: I0517 00:35:37.643737 2369 kubelet.go:408] "Attempting to sync node with API server" May 17 00:35:37.643831 kubelet[2369]: I0517 00:35:37.643753 2369 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:35:37.643831 kubelet[2369]: I0517 00:35:37.643789 2369 kubelet.go:314] "Adding apiserver pod source" May 17 00:35:37.643831 kubelet[2369]: I0517 00:35:37.643803 2369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:35:37.645173 kubelet[2369]: I0517 00:35:37.645153 2369 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:35:37.645664 kubelet[2369]: I0517 00:35:37.645644 2369 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:35:37.646198 kubelet[2369]: I0517 00:35:37.646179 2369 server.go:1274] "Started kubelet" May 17 00:35:37.651634 kubelet[2369]: I0517 00:35:37.651608 2369 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:35:37.652782 
kubelet[2369]: I0517 00:35:37.652765 2369 server.go:449] "Adding debug handlers to kubelet server" May 17 00:35:37.655231 kubelet[2369]: I0517 00:35:37.655215 2369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:35:37.659004 kubelet[2369]: I0517 00:35:37.658946 2369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:35:37.659223 kubelet[2369]: I0517 00:35:37.659204 2369 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:35:37.661752 kubelet[2369]: I0517 00:35:37.661734 2369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:35:37.664240 kubelet[2369]: I0517 00:35:37.664221 2369 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:35:37.664558 kubelet[2369]: E0517 00:35:37.664535 2369 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b02eecf252\" not found" May 17 00:35:37.666451 kubelet[2369]: I0517 00:35:37.666431 2369 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:35:37.666675 kubelet[2369]: I0517 00:35:37.666661 2369 reconciler.go:26] "Reconciler: start to sync state" May 17 00:35:37.668652 kubelet[2369]: I0517 00:35:37.668622 2369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:35:37.670114 kubelet[2369]: I0517 00:35:37.670096 2369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:35:37.670243 kubelet[2369]: I0517 00:35:37.670234 2369 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:35:37.670323 kubelet[2369]: I0517 00:35:37.670315 2369 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:35:37.670427 kubelet[2369]: E0517 00:35:37.670413 2369 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:35:37.688109 sudo[2398]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:35:37.688409 sudo[2398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:35:37.691088 kubelet[2369]: E0517 00:35:37.691065 2369 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:35:37.697362 kubelet[2369]: I0517 00:35:37.697342 2369 factory.go:221] Registration of the containerd container factory successfully May 17 00:35:37.697501 kubelet[2369]: I0517 00:35:37.697492 2369 factory.go:221] Registration of the systemd container factory successfully May 17 00:35:37.697780 kubelet[2369]: I0517 00:35:37.697758 2369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:35:37.759422 kubelet[2369]: I0517 00:35:37.759379 2369 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:35:37.759422 kubelet[2369]: I0517 00:35:37.759403 2369 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:35:37.759422 kubelet[2369]: I0517 00:35:37.759425 2369 state_mem.go:36] "Initialized new in-memory state store" May 17 00:35:37.759692 kubelet[2369]: I0517 00:35:37.759593 2369 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:35:37.759692 kubelet[2369]: I0517 00:35:37.759606 2369 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:35:37.759692 kubelet[2369]: I0517 00:35:37.759631 2369 policy_none.go:49] "None policy: Start" May 17 00:35:37.760658 kubelet[2369]: I0517 00:35:37.760636 2369 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:35:37.760658 kubelet[2369]: I0517 00:35:37.760663 2369 state_mem.go:35] "Initializing new in-memory state store" May 17 00:35:37.762369 kubelet[2369]: I0517 00:35:37.760830 2369 state_mem.go:75] "Updated machine memory state" May 17 00:35:37.768405 kubelet[2369]: I0517 00:35:37.766611 2369 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:35:37.771821 kubelet[2369]: E0517 00:35:37.771098 2369 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:35:37.771821 kubelet[2369]: I0517 00:35:37.771427 2369 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:35:37.771821 kubelet[2369]: I0517 00:35:37.771440 2369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:35:37.771821 kubelet[2369]: I0517 00:35:37.771782 2369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:35:37.897713 kubelet[2369]: I0517 00:35:37.897670 2369 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:37.909951 kubelet[2369]: I0517 00:35:37.909921 2369 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:37.910115 kubelet[2369]: I0517 00:35:37.910032 2369 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-b02eecf252" May 17 00:35:37.984901 kubelet[2369]: W0517 00:35:37.984860 2369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:35:37.986500 kubelet[2369]: W0517 00:35:37.986074 2369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:35:37.986500 kubelet[2369]: W0517 00:35:37.986356 2369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:35:38.068698 kubelet[2369]: I0517 00:35:38.068607 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/117681e3e062a563df96a432edf5456d-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" (UID: \"117681e3e062a563df96a432edf5456d\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.068872 kubelet[2369]: I0517 00:35:38.068717 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.068872 kubelet[2369]: I0517 00:35:38.068743 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.068872 kubelet[2369]: I0517 00:35:38.068793 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a7d4dc60bbf704a59ff1a97b8805a0a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-b02eecf252\" (UID: \"6a7d4dc60bbf704a59ff1a97b8805a0a\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.068872 kubelet[2369]: I0517 00:35:38.068814 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/117681e3e062a563df96a432edf5456d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" (UID: \"117681e3e062a563df96a432edf5456d\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.068872 kubelet[2369]: I0517 00:35:38.068834 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/117681e3e062a563df96a432edf5456d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" (UID: \"117681e3e062a563df96a432edf5456d\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.069124 kubelet[2369]: I0517 00:35:38.068884 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.069124 kubelet[2369]: I0517 00:35:38.068906 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.069124 kubelet[2369]: 
I0517 00:35:38.068951 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f24a88da43d9e543c73c59b93cda31a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-b02eecf252\" (UID: \"4f24a88da43d9e543c73c59b93cda31a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.288509 sudo[2398]: pam_unix(sudo:session): session closed for user root May 17 00:35:38.645007 kubelet[2369]: I0517 00:35:38.644874 2369 apiserver.go:52] "Watching apiserver" May 17 00:35:38.667703 kubelet[2369]: I0517 00:35:38.667672 2369 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:35:38.749789 kubelet[2369]: W0517 00:35:38.749755 2369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:35:38.750069 kubelet[2369]: E0517 00:35:38.750049 2369 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-b02eecf252\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" May 17 00:35:38.787744 kubelet[2369]: I0517 00:35:38.787684 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b02eecf252" podStartSLOduration=1.787660746 podStartE2EDuration="1.787660746s" podCreationTimestamp="2025-05-17 00:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:38.770973666 +0000 UTC m=+1.646402267" watchObservedRunningTime="2025-05-17 00:35:38.787660746 +0000 UTC m=+1.663089347" May 17 00:35:38.809435 kubelet[2369]: I0517 00:35:38.809379 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b02eecf252" podStartSLOduration=1.809356879 podStartE2EDuration="1.809356879s" podCreationTimestamp="2025-05-17 00:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:38.788537455 +0000 UTC m=+1.663966056" watchObservedRunningTime="2025-05-17 00:35:38.809356879 +0000 UTC m=+1.684785380" May 17 00:35:38.832068 kubelet[2369]: I0517 00:35:38.832014 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b02eecf252" podStartSLOduration=1.8319809230000001 podStartE2EDuration="1.831980923s" podCreationTimestamp="2025-05-17 00:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:38.810724894 +0000 UTC m=+1.686153495" watchObservedRunningTime="2025-05-17 00:35:38.831980923 +0000 UTC m=+1.707409424" May 17 00:35:39.903411 sudo[1765]: pam_unix(sudo:session): session closed for user root May 17 00:35:39.997946 sshd[1762]: pam_unix(sshd:session): session closed for user core May 17 00:35:40.000886 systemd[1]: sshd@4-10.200.4.42:22-10.200.16.10:47336.service: Deactivated successfully. May 17 00:35:40.001806 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:35:40.002014 systemd[1]: session-7.scope: Consumed 4.294s CPU time. May 17 00:35:40.002553 systemd-logind[1400]: Session 7 logged out. Waiting for processes to exit. 
May 17 00:35:40.003394 systemd-logind[1400]: Removed session 7. May 17 00:35:41.733361 kubelet[2369]: I0517 00:35:41.733320 2369 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:35:41.734014 env[1413]: time="2025-05-17T00:35:41.733936080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:35:41.734418 kubelet[2369]: I0517 00:35:41.734353 2369 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:35:42.314467 systemd[1]: Created slice kubepods-besteffort-pod79d5130b_db64_490c_9b84_a7aa0c96ec8b.slice. May 17 00:35:42.328365 systemd[1]: Created slice kubepods-burstable-podf40a8460_0b94_48e7_a405_13f4243542dd.slice. May 17 00:35:42.396603 kubelet[2369]: I0517 00:35:42.396570 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-cgroup\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.396799 kubelet[2369]: I0517 00:35:42.396621 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-config-path\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.396799 kubelet[2369]: I0517 00:35:42.396645 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-hubble-tls\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.396799 kubelet[2369]: I0517 00:35:42.396668 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79d5130b-db64-490c-9b84-a7aa0c96ec8b-lib-modules\") pod \"kube-proxy-vsvjt\" (UID: \"79d5130b-db64-490c-9b84-a7aa0c96ec8b\") " pod="kube-system/kube-proxy-vsvjt" May 17 00:35:42.396799 kubelet[2369]: I0517 00:35:42.396701 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-bpf-maps\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.396799 kubelet[2369]: I0517 00:35:42.396720 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79d5130b-db64-490c-9b84-a7aa0c96ec8b-kube-proxy\") pod \"kube-proxy-vsvjt\" (UID: \"79d5130b-db64-490c-9b84-a7aa0c96ec8b\") " pod="kube-system/kube-proxy-vsvjt" May 17 00:35:42.396799 kubelet[2369]: I0517 00:35:42.396742 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79d5130b-db64-490c-9b84-a7aa0c96ec8b-xtables-lock\") pod \"kube-proxy-vsvjt\" (UID: \"79d5130b-db64-490c-9b84-a7aa0c96ec8b\") " pod="kube-system/kube-proxy-vsvjt" May 17 00:35:42.397154 kubelet[2369]: I0517 00:35:42.396774 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cni-path\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397154 kubelet[2369]: I0517 00:35:42.396796 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-etc-cni-netd\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397154 kubelet[2369]: I0517 00:35:42.396817 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-lib-modules\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397154 kubelet[2369]: I0517 00:35:42.396865 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-hostproc\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397154 kubelet[2369]: I0517 00:35:42.396888 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-xtables-lock\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397154 kubelet[2369]: I0517 00:35:42.396910 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-kernel\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397369 kubelet[2369]: I0517 00:35:42.396950 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlxbj\" (UniqueName: \"kubernetes.io/projected/79d5130b-db64-490c-9b84-a7aa0c96ec8b-kube-api-access-zlxbj\") pod \"kube-proxy-vsvjt\" (UID: \"79d5130b-db64-490c-9b84-a7aa0c96ec8b\") " pod="kube-system/kube-proxy-vsvjt" May 17 00:35:42.397369 kubelet[2369]: I0517 00:35:42.396974 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-net\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397369 kubelet[2369]: I0517 00:35:42.397020 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-run\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397369 kubelet[2369]: I0517 00:35:42.397042 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40a8460-0b94-48e7-a405-13f4243542dd-clustermesh-secrets\") pod \"cilium-tw4bf\" (UID: 
\"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.397369 kubelet[2369]: I0517 00:35:42.397068 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpjd\" (UniqueName: \"kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-kube-api-access-jcpjd\") pod \"cilium-tw4bf\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " pod="kube-system/cilium-tw4bf" May 17 00:35:42.497936 kubelet[2369]: I0517 00:35:42.497894 2369 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:35:42.519523 kubelet[2369]: E0517 00:35:42.519178 2369 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:35:42.519523 kubelet[2369]: E0517 00:35:42.519207 2369 projected.go:194] Error preparing data for projected volume kube-api-access-zlxbj for pod kube-system/kube-proxy-vsvjt: configmap "kube-root-ca.crt" not found May 17 00:35:42.519523 kubelet[2369]: E0517 00:35:42.519271 2369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/79d5130b-db64-490c-9b84-a7aa0c96ec8b-kube-api-access-zlxbj podName:79d5130b-db64-490c-9b84-a7aa0c96ec8b nodeName:}" failed. No retries permitted until 2025-05-17 00:35:43.01924699 +0000 UTC m=+5.894675591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zlxbj" (UniqueName: "kubernetes.io/projected/79d5130b-db64-490c-9b84-a7aa0c96ec8b-kube-api-access-zlxbj") pod "kube-proxy-vsvjt" (UID: "79d5130b-db64-490c-9b84-a7aa0c96ec8b") : configmap "kube-root-ca.crt" not found May 17 00:35:42.520291 kubelet[2369]: E0517 00:35:42.520156 2369 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:35:42.520291 kubelet[2369]: E0517 00:35:42.520193 2369 projected.go:194] Error preparing data for projected volume kube-api-access-jcpjd for pod kube-system/cilium-tw4bf: configmap "kube-root-ca.crt" not found May 17 00:35:42.520291 kubelet[2369]: E0517 00:35:42.520232 2369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-kube-api-access-jcpjd podName:f40a8460-0b94-48e7-a405-13f4243542dd nodeName:}" failed. No retries permitted until 2025-05-17 00:35:43.020216699 +0000 UTC m=+5.895645200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jcpjd" (UniqueName: "kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-kube-api-access-jcpjd") pod "cilium-tw4bf" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd") : configmap "kube-root-ca.crt" not found May 17 00:35:42.820814 systemd[1]: Created slice kubepods-besteffort-pod1bb6bba3_d025_47c4_b3c6_8ebf34436dcc.slice. 
May 17 00:35:42.899935 kubelet[2369]: I0517 00:35:42.899877 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-cilium-config-path\") pod \"cilium-operator-5d85765b45-7rpph\" (UID: \"1bb6bba3-d025-47c4-b3c6-8ebf34436dcc\") " pod="kube-system/cilium-operator-5d85765b45-7rpph" May 17 00:35:42.900398 kubelet[2369]: I0517 00:35:42.899960 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76452\" (UniqueName: \"kubernetes.io/projected/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-kube-api-access-76452\") pod \"cilium-operator-5d85765b45-7rpph\" (UID: \"1bb6bba3-d025-47c4-b3c6-8ebf34436dcc\") " pod="kube-system/cilium-operator-5d85765b45-7rpph" May 17 00:35:43.126326 env[1413]: time="2025-05-17T00:35:43.126018665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7rpph,Uid:1bb6bba3-d025-47c4-b3c6-8ebf34436dcc,Namespace:kube-system,Attempt:0,}" May 17 00:35:43.158189 env[1413]: time="2025-05-17T00:35:43.158103969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:43.158405 env[1413]: time="2025-05-17T00:35:43.158153670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:43.158405 env[1413]: time="2025-05-17T00:35:43.158389672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:43.158867 env[1413]: time="2025-05-17T00:35:43.158776376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58 pid=2460 runtime=io.containerd.runc.v2 May 17 00:35:43.172493 systemd[1]: Started cri-containerd-c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58.scope. May 17 00:35:43.212983 env[1413]: time="2025-05-17T00:35:43.212940790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7rpph,Uid:1bb6bba3-d025-47c4-b3c6-8ebf34436dcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58\"" May 17 00:35:43.216411 env[1413]: time="2025-05-17T00:35:43.215193711Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:35:43.224690 env[1413]: time="2025-05-17T00:35:43.224652601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vsvjt,Uid:79d5130b-db64-490c-9b84-a7aa0c96ec8b,Namespace:kube-system,Attempt:0,}" May 17 00:35:43.231576 env[1413]: time="2025-05-17T00:35:43.231547766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tw4bf,Uid:f40a8460-0b94-48e7-a405-13f4243542dd,Namespace:kube-system,Attempt:0,}" May 17 00:35:43.279719 env[1413]: time="2025-05-17T00:35:43.273710467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:43.279719 env[1413]: time="2025-05-17T00:35:43.273762167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:43.279719 env[1413]: time="2025-05-17T00:35:43.273779467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:43.279719 env[1413]: time="2025-05-17T00:35:43.274587875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65992eebb89e548ffb489b472fb02ce9ab386f7a847ca3d9c8029b5300367293 pid=2500 runtime=io.containerd.runc.v2 May 17 00:35:43.295534 systemd[1]: Started cri-containerd-65992eebb89e548ffb489b472fb02ce9ab386f7a847ca3d9c8029b5300367293.scope. May 17 00:35:43.305074 env[1413]: time="2025-05-17T00:35:43.305006564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:43.305268 env[1413]: time="2025-05-17T00:35:43.305062564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:43.305383 env[1413]: time="2025-05-17T00:35:43.305343867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:43.305923 env[1413]: time="2025-05-17T00:35:43.305793171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412 pid=2528 runtime=io.containerd.runc.v2 May 17 00:35:43.323656 systemd[1]: Started cri-containerd-43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412.scope. May 17 00:35:43.344489 env[1413]: time="2025-05-17T00:35:43.344437738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vsvjt,Uid:79d5130b-db64-490c-9b84-a7aa0c96ec8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"65992eebb89e548ffb489b472fb02ce9ab386f7a847ca3d9c8029b5300367293\"" May 17 00:35:43.347682 env[1413]: time="2025-05-17T00:35:43.347373866Z" level=info msg="CreateContainer within sandbox \"65992eebb89e548ffb489b472fb02ce9ab386f7a847ca3d9c8029b5300367293\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:35:43.362810 env[1413]: time="2025-05-17T00:35:43.362770812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tw4bf,Uid:f40a8460-0b94-48e7-a405-13f4243542dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\"" May 17 00:35:43.393934 env[1413]: time="2025-05-17T00:35:43.393830007Z" level=info msg="CreateContainer within sandbox \"65992eebb89e548ffb489b472fb02ce9ab386f7a847ca3d9c8029b5300367293\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc144b84f76eeefa36c371104bc256148562770555c6ca7ce1166c0c7036e2ea\"" May 17 00:35:43.394959 env[1413]: time="2025-05-17T00:35:43.394876417Z" level=info msg="StartContainer for \"dc144b84f76eeefa36c371104bc256148562770555c6ca7ce1166c0c7036e2ea\"" May 17 00:35:43.412803 systemd[1]: Started cri-containerd-dc144b84f76eeefa36c371104bc256148562770555c6ca7ce1166c0c7036e2ea.scope. May 17 00:35:43.448384 env[1413]: time="2025-05-17T00:35:43.448341525Z" level=info msg="StartContainer for \"dc144b84f76eeefa36c371104bc256148562770555c6ca7ce1166c0c7036e2ea\" returns successfully" May 17 00:35:44.754306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47519988.mount: Deactivated successfully. 
May 17 00:35:45.540473 env[1413]: time="2025-05-17T00:35:45.540417209Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:45.548348 env[1413]: time="2025-05-17T00:35:45.548306480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:45.552089 env[1413]: time="2025-05-17T00:35:45.552048914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:45.552608 env[1413]: time="2025-05-17T00:35:45.552574319Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:35:45.555569 env[1413]: time="2025-05-17T00:35:45.555540945Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:35:45.556639 env[1413]: time="2025-05-17T00:35:45.556610355Z" level=info msg="CreateContainer within sandbox \"c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:35:45.586527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382661984.mount: Deactivated successfully. May 17 00:35:45.604377 env[1413]: time="2025-05-17T00:35:45.604322586Z" level=info msg="CreateContainer within sandbox \"c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\"" May 17 00:35:45.606622 env[1413]: time="2025-05-17T00:35:45.604974392Z" level=info msg="StartContainer for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\"" May 17 00:35:45.622062 systemd[1]: Started cri-containerd-a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66.scope. 
May 17 00:35:45.660023 env[1413]: time="2025-05-17T00:35:45.659952789Z" level=info msg="StartContainer for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" returns successfully" May 17 00:35:45.804770 kubelet[2369]: I0517 00:35:45.804611 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vsvjt" podStartSLOduration=3.804587896 podStartE2EDuration="3.804587896s" podCreationTimestamp="2025-05-17 00:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:43.761347096 +0000 UTC m=+6.636775697" watchObservedRunningTime="2025-05-17 00:35:45.804587896 +0000 UTC m=+8.680016397" May 17 00:35:45.908598 kubelet[2369]: I0517 00:35:45.908521 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7rpph" podStartSLOduration=1.569337413 podStartE2EDuration="3.908495636s" podCreationTimestamp="2025-05-17 00:35:42 +0000 UTC" firstStartedPulling="2025-05-17 00:35:43.214516405 +0000 UTC m=+6.089944906" lastFinishedPulling="2025-05-17 00:35:45.553674628 +0000 UTC m=+8.429103129" observedRunningTime="2025-05-17 00:35:45.806324912 +0000 UTC m=+8.681753413" watchObservedRunningTime="2025-05-17 00:35:45.908495636 +0000 UTC m=+8.783924137" May 17 00:35:51.214535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882982758.mount: Deactivated successfully. May 17 00:35:53.931829 env[1413]: time="2025-05-17T00:35:53.931774008Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:53.937539 env[1413]: time="2025-05-17T00:35:53.937492551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:53.942304 env[1413]: time="2025-05-17T00:35:53.942271486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:53.942854 env[1413]: time="2025-05-17T00:35:53.942806490Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:35:53.946764 env[1413]: time="2025-05-17T00:35:53.946729720Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:35:53.979131 env[1413]: time="2025-05-17T00:35:53.979086362Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\"" May 17 00:35:53.981208 env[1413]: time="2025-05-17T00:35:53.979876268Z" level=info msg="StartContainer for \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\"" May 17 00:35:54.007125 systemd[1]: Started 
cri-containerd-9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130.scope. May 17 00:35:54.034595 env[1413]: time="2025-05-17T00:35:54.034552171Z" level=info msg="StartContainer for \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\" returns successfully" May 17 00:35:54.047672 systemd[1]: cri-containerd-9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130.scope: Deactivated successfully. May 17 00:35:54.969226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130-rootfs.mount: Deactivated successfully. May 17 00:35:58.204523 env[1413]: time="2025-05-17T00:35:58.204462495Z" level=info msg="shim disconnected" id=9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130 May 17 00:35:58.204523 env[1413]: time="2025-05-17T00:35:58.204513896Z" level=warning msg="cleaning up after shim disconnected" id=9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130 namespace=k8s.io May 17 00:35:58.204523 env[1413]: time="2025-05-17T00:35:58.204525796Z" level=info msg="cleaning up dead shim" May 17 00:35:58.213186 env[1413]: time="2025-05-17T00:35:58.213140154Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2831 runtime=io.containerd.runc.v2\n" May 17 00:35:58.781888 env[1413]: time="2025-05-17T00:35:58.781820161Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:35:58.813284 env[1413]: time="2025-05-17T00:35:58.813193271Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\"" May 17 00:35:58.817019 env[1413]: time="2025-05-17T00:35:58.815835089Z" level=info msg="StartContainer for \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\"" May 17 00:35:58.850709 systemd[1]: Started cri-containerd-1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64.scope. May 17 00:35:58.886264 env[1413]: time="2025-05-17T00:35:58.886162159Z" level=info msg="StartContainer for \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\" returns successfully" May 17 00:35:58.890238 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:35:58.890540 systemd[1]: Stopped systemd-sysctl.service. May 17 00:35:58.890720 systemd[1]: Stopping systemd-sysctl.service... May 17 00:35:58.892735 systemd[1]: Starting systemd-sysctl.service... May 17 00:35:58.895642 systemd[1]: cri-containerd-1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64.scope: Deactivated successfully. May 17 00:35:58.909234 systemd[1]: Finished systemd-sysctl.service. 
May 17 00:35:58.933781 env[1413]: time="2025-05-17T00:35:58.933726978Z" level=info msg="shim disconnected" id=1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64 May 17 00:35:58.933781 env[1413]: time="2025-05-17T00:35:58.933782678Z" level=warning msg="cleaning up after shim disconnected" id=1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64 namespace=k8s.io May 17 00:35:58.934110 env[1413]: time="2025-05-17T00:35:58.933794978Z" level=info msg="cleaning up dead shim" May 17 00:35:58.942240 env[1413]: time="2025-05-17T00:35:58.942200534Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2893 runtime=io.containerd.runc.v2\n" May 17 00:35:59.786802 env[1413]: time="2025-05-17T00:35:59.786739277Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:35:59.800944 systemd[1]: run-containerd-runc-k8s.io-1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64-runc.yFzwfU.mount: Deactivated successfully. May 17 00:35:59.801098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64-rootfs.mount: Deactivated successfully. May 17 00:35:59.838976 env[1413]: time="2025-05-17T00:35:59.838923718Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\"" May 17 00:35:59.840304 env[1413]: time="2025-05-17T00:35:59.840262127Z" level=info msg="StartContainer for \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\"" May 17 00:35:59.871125 systemd[1]: Started cri-containerd-502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5.scope. May 17 00:35:59.907955 systemd[1]: cri-containerd-502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5.scope: Deactivated successfully. May 17 00:35:59.909213 env[1413]: time="2025-05-17T00:35:59.909168379Z" level=info msg="StartContainer for \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\" returns successfully" May 17 00:35:59.956678 env[1413]: time="2025-05-17T00:35:59.956620390Z" level=info msg="shim disconnected" id=502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5 May 17 00:35:59.956932 env[1413]: time="2025-05-17T00:35:59.956707890Z" level=warning msg="cleaning up after shim disconnected" id=502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5 namespace=k8s.io May 17 00:35:59.956932 env[1413]: time="2025-05-17T00:35:59.956723590Z" level=info msg="cleaning up dead shim" May 17 00:35:59.964855 env[1413]: time="2025-05-17T00:35:59.964810343Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2953 runtime=io.containerd.runc.v2\n" May 17 00:36:00.791082 env[1413]: time="2025-05-17T00:36:00.791028847Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:36:00.803135 systemd[1]: run-containerd-runc-k8s.io-502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5-runc.GEx1VO.mount: Deactivated successfully. 
May 17 00:36:00.803255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5-rootfs.mount: Deactivated successfully. May 17 00:36:00.832965 env[1413]: time="2025-05-17T00:36:00.832900215Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\"" May 17 00:36:00.833764 env[1413]: time="2025-05-17T00:36:00.833727920Z" level=info msg="StartContainer for \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\"" May 17 00:36:00.880470 systemd[1]: Started cri-containerd-278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e.scope. May 17 00:36:00.926483 systemd[1]: cri-containerd-278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e.scope: Deactivated successfully. May 17 00:36:00.927634 env[1413]: time="2025-05-17T00:36:00.927561622Z" level=info msg="StartContainer for \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\" returns successfully" May 17 00:36:00.961745 env[1413]: time="2025-05-17T00:36:00.961686741Z" level=info msg="shim disconnected" id=278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e May 17 00:36:00.961745 env[1413]: time="2025-05-17T00:36:00.961743541Z" level=warning msg="cleaning up after shim disconnected" id=278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e namespace=k8s.io May 17 00:36:00.962110 env[1413]: time="2025-05-17T00:36:00.961757141Z" level=info msg="cleaning up dead shim" May 17 00:36:00.969391 env[1413]: time="2025-05-17T00:36:00.969347590Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:36:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3008 runtime=io.containerd.runc.v2\n" May 17 00:36:01.796873 env[1413]: time="2025-05-17T00:36:01.794872677Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:36:01.803338 systemd[1]: run-containerd-runc-k8s.io-278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e-runc.S2OdyX.mount: Deactivated successfully. May 17 00:36:01.803466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e-rootfs.mount: Deactivated successfully. May 17 00:36:01.837439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642744092.mount: Deactivated successfully. May 17 00:36:01.861303 env[1413]: time="2025-05-17T00:36:01.861250594Z" level=info msg="CreateContainer within sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\"" May 17 00:36:01.861869 env[1413]: time="2025-05-17T00:36:01.861805197Z" level=info msg="StartContainer for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\"" May 17 00:36:01.882884 systemd[1]: Started cri-containerd-1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc.scope. 
May 17 00:36:01.916967 env[1413]: time="2025-05-17T00:36:01.916887143Z" level=info msg="StartContainer for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" returns successfully" May 17 00:36:02.032343 kubelet[2369]: I0517 00:36:02.031414 2369 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:36:02.092454 systemd[1]: Created slice kubepods-burstable-podddc871e0_3974_4c0d_8a48_b7511c0fe717.slice. May 17 00:36:02.106187 systemd[1]: Created slice kubepods-burstable-pod1acd0fdc_21fe_471e_8bdf_e4dd9764d0f9.slice. May 17 00:36:02.152182 kubelet[2369]: I0517 00:36:02.152134 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9-config-volume\") pod \"coredns-7c65d6cfc9-lvvvl\" (UID: \"1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9\") " pod="kube-system/coredns-7c65d6cfc9-lvvvl" May 17 00:36:02.152377 kubelet[2369]: I0517 00:36:02.152190 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrfp\" (UniqueName: \"kubernetes.io/projected/ddc871e0-3974-4c0d-8a48-b7511c0fe717-kube-api-access-dnrfp\") pod \"coredns-7c65d6cfc9-jcdnb\" (UID: \"ddc871e0-3974-4c0d-8a48-b7511c0fe717\") " pod="kube-system/coredns-7c65d6cfc9-jcdnb" May 17 00:36:02.152377 kubelet[2369]: I0517 00:36:02.152221 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddc871e0-3974-4c0d-8a48-b7511c0fe717-config-volume\") pod \"coredns-7c65d6cfc9-jcdnb\" (UID: \"ddc871e0-3974-4c0d-8a48-b7511c0fe717\") " pod="kube-system/coredns-7c65d6cfc9-jcdnb" May 17 00:36:02.152377 kubelet[2369]: I0517 00:36:02.152241 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7x6\" (UniqueName: \"kubernetes.io/projected/1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9-kube-api-access-xh7x6\") pod \"coredns-7c65d6cfc9-lvvvl\" (UID: \"1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9\") " pod="kube-system/coredns-7c65d6cfc9-lvvvl" May 17 00:36:02.398560 env[1413]: time="2025-05-17T00:36:02.398512815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jcdnb,Uid:ddc871e0-3974-4c0d-8a48-b7511c0fe717,Namespace:kube-system,Attempt:0,}" May 17 00:36:02.412159 env[1413]: time="2025-05-17T00:36:02.411741996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvvvl,Uid:1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9,Namespace:kube-system,Attempt:0,}" May 17 00:36:02.825637 kubelet[2369]: I0517 00:36:02.825321 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tw4bf" podStartSLOduration=10.245179863 podStartE2EDuration="20.825288839s" podCreationTimestamp="2025-05-17 00:35:42 +0000 UTC" firstStartedPulling="2025-05-17 00:35:43.364138525 +0000 UTC m=+6.239567126" lastFinishedPulling="2025-05-17 00:35:53.944247601 +0000 UTC m=+16.819676102" observedRunningTime="2025-05-17 00:36:02.824762635 +0000 UTC m=+25.700191236" watchObservedRunningTime="2025-05-17 00:36:02.825288839 +0000 UTC m=+25.700717440" May 17 00:36:04.263695 systemd-networkd[1559]: cilium_host: Link UP May 17 00:36:04.269807 systemd-networkd[1559]: cilium_net: Link UP May 17 00:36:04.270185 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 17 00:36:04.275564 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
cilium_host: link becomes ready May 17 00:36:04.279663 systemd-networkd[1559]: cilium_net: Gained carrier May 17 00:36:04.279883 systemd-networkd[1559]: cilium_host: Gained carrier May 17 00:36:04.280140 systemd-networkd[1559]: cilium_net: Gained IPv6LL May 17 00:36:04.470239 systemd-networkd[1559]: cilium_vxlan: Link UP May 17 00:36:04.470247 systemd-networkd[1559]: cilium_vxlan: Gained carrier May 17 00:36:04.711110 kernel: NET: Registered PF_ALG protocol family May 17 00:36:04.986125 systemd-networkd[1559]: cilium_host: Gained IPv6LL May 17 00:36:05.497146 systemd-networkd[1559]: lxc_health: Link UP May 17 00:36:05.525654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:36:05.524906 systemd-networkd[1559]: lxc_health: Gained carrier May 17 00:36:05.754186 systemd-networkd[1559]: cilium_vxlan: Gained IPv6LL May 17 00:36:05.987674 systemd-networkd[1559]: lxc87f8ef64649e: Link UP May 17 00:36:05.995044 kernel: eth0: renamed from tmp0b793 May 17 00:36:06.004525 systemd-networkd[1559]: lxc87f8ef64649e: Gained carrier May 17 00:36:06.005016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc87f8ef64649e: link becomes ready May 17 00:36:06.017094 systemd-networkd[1559]: lxc2d5b432cc68a: Link UP May 17 00:36:06.037020 kernel: eth0: renamed from tmp4726f May 17 00:36:06.047084 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2d5b432cc68a: link becomes ready May 17 00:36:06.047106 systemd-networkd[1559]: lxc2d5b432cc68a: Gained carrier May 17 00:36:07.482206 systemd-networkd[1559]: lxc_health: Gained IPv6LL May 17 00:36:07.546265 systemd-networkd[1559]: lxc2d5b432cc68a: Gained IPv6LL May 17 00:36:07.802276 systemd-networkd[1559]: lxc87f8ef64649e: Gained IPv6LL May 17 00:36:09.700817 env[1413]: time="2025-05-17T00:36:09.686180850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:09.700817 env[1413]: time="2025-05-17T00:36:09.686272150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:09.700817 env[1413]: time="2025-05-17T00:36:09.686304451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:09.700817 env[1413]: time="2025-05-17T00:36:09.686455451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4726ff60b5dbf6601ade703b5cc85abb9c7541dd8673a18d37ff404d8ef6f092 pid=3567 runtime=io.containerd.runc.v2 May 17 00:36:09.713164 systemd[1]: Started cri-containerd-4726ff60b5dbf6601ade703b5cc85abb9c7541dd8673a18d37ff404d8ef6f092.scope. May 17 00:36:09.730287 env[1413]: time="2025-05-17T00:36:09.730052284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:36:09.730287 env[1413]: time="2025-05-17T00:36:09.730112585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:36:09.730287 env[1413]: time="2025-05-17T00:36:09.730126785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:36:09.730287 env[1413]: time="2025-05-17T00:36:09.730252085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b793b7cc3d6f12421e5698c201194f791ff58d258088a30e5d11c9cf31b227c pid=3600 runtime=io.containerd.runc.v2 May 17 00:36:09.756952 systemd[1]: Started cri-containerd-0b793b7cc3d6f12421e5698c201194f791ff58d258088a30e5d11c9cf31b227c.scope. May 17 00:36:09.830551 env[1413]: time="2025-05-17T00:36:09.830500321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvvvl,Uid:1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4726ff60b5dbf6601ade703b5cc85abb9c7541dd8673a18d37ff404d8ef6f092\"" May 17 00:36:09.835288 env[1413]: time="2025-05-17T00:36:09.835249946Z" level=info msg="CreateContainer within sandbox \"4726ff60b5dbf6601ade703b5cc85abb9c7541dd8673a18d37ff404d8ef6f092\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:36:09.873188 env[1413]: time="2025-05-17T00:36:09.873143349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jcdnb,Uid:ddc871e0-3974-4c0d-8a48-b7511c0fe717,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b793b7cc3d6f12421e5698c201194f791ff58d258088a30e5d11c9cf31b227c\"" May 17 00:36:09.876352 env[1413]: time="2025-05-17T00:36:09.876320866Z" level=info msg="CreateContainer within sandbox \"0b793b7cc3d6f12421e5698c201194f791ff58d258088a30e5d11c9cf31b227c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:36:09.885534 env[1413]: time="2025-05-17T00:36:09.885500315Z" level=info msg="CreateContainer within sandbox \"4726ff60b5dbf6601ade703b5cc85abb9c7541dd8673a18d37ff404d8ef6f092\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59efcf0c6ee0c61aace66a096557c577d26767c841289fb1ea32c2f247e8acec\"" May 17 00:36:09.886346 env[1413]: time="2025-05-17T00:36:09.886315519Z" level=info msg="StartContainer for \"59efcf0c6ee0c61aace66a096557c577d26767c841289fb1ea32c2f247e8acec\"" May 17 00:36:09.912466 systemd[1]: Started cri-containerd-59efcf0c6ee0c61aace66a096557c577d26767c841289fb1ea32c2f247e8acec.scope. May 17 00:36:09.925516 env[1413]: time="2025-05-17T00:36:09.925466928Z" level=info msg="CreateContainer within sandbox \"0b793b7cc3d6f12421e5698c201194f791ff58d258088a30e5d11c9cf31b227c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f49a21a1e8c7aefdc78a2bfdbdd0c9c44acf5c9297102f2d002ed4e8472de347\"" May 17 00:36:09.926287 env[1413]: time="2025-05-17T00:36:09.926236133Z" level=info msg="StartContainer for \"f49a21a1e8c7aefdc78a2bfdbdd0c9c44acf5c9297102f2d002ed4e8472de347\"" May 17 00:36:09.955388 systemd[1]: Started cri-containerd-f49a21a1e8c7aefdc78a2bfdbdd0c9c44acf5c9297102f2d002ed4e8472de347.scope. May 17 00:36:09.982682 env[1413]: time="2025-05-17T00:36:09.982609434Z" level=info msg="StartContainer for \"59efcf0c6ee0c61aace66a096557c577d26767c841289fb1ea32c2f247e8acec\" returns successfully" May 17 00:36:10.003247 env[1413]: time="2025-05-17T00:36:10.003182843Z" level=info msg="StartContainer for \"f49a21a1e8c7aefdc78a2bfdbdd0c9c44acf5c9297102f2d002ed4e8472de347\" returns successfully" May 17 00:36:10.695517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454850260.mount: Deactivated successfully. 
May 17 00:36:10.839178 kubelet[2369]: I0517 00:36:10.839106 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lvvvl" podStartSLOduration=28.839081625 podStartE2EDuration="28.839081625s" podCreationTimestamp="2025-05-17 00:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:36:10.838422621 +0000 UTC m=+33.713851122" watchObservedRunningTime="2025-05-17 00:36:10.839081625 +0000 UTC m=+33.714510226" May 17 00:36:10.854048 kubelet[2369]: I0517 00:36:10.853429 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jcdnb" podStartSLOduration=28.8534042 podStartE2EDuration="28.8534042s" podCreationTimestamp="2025-05-17 00:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:36:10.853238499 +0000 UTC m=+33.728667000" watchObservedRunningTime="2025-05-17 00:36:10.8534042 +0000 UTC m=+33.728832801" May 17 00:37:45.586480 systemd[1]: Started sshd@5-10.200.4.42:22-10.200.16.10:47894.service. May 17 00:37:46.176456 sshd[3741]: Accepted publickey for core from 10.200.16.10 port 47894 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:46.177909 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:46.181819 systemd-logind[1400]: New session 8 of user core. May 17 00:37:46.183711 systemd[1]: Started session-8.scope. May 17 00:37:46.674612 sshd[3741]: pam_unix(sshd:session): session closed for user core May 17 00:37:46.677904 systemd[1]: sshd@5-10.200.4.42:22-10.200.16.10:47894.service: Deactivated successfully. May 17 00:37:46.678840 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:37:46.679523 systemd-logind[1400]: Session 8 logged out. Waiting for processes to exit. May 17 00:37:46.680328 systemd-logind[1400]: Removed session 8. May 17 00:37:51.776139 systemd[1]: Started sshd@6-10.200.4.42:22-10.200.16.10:48500.service. May 17 00:37:52.370649 sshd[3754]: Accepted publickey for core from 10.200.16.10 port 48500 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:52.372357 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:52.377369 systemd[1]: Started session-9.scope. May 17 00:37:52.377972 systemd-logind[1400]: New session 9 of user core. May 17 00:37:52.852761 sshd[3754]: pam_unix(sshd:session): session closed for user core May 17 00:37:52.855648 systemd[1]: sshd@6-10.200.4.42:22-10.200.16.10:48500.service: Deactivated successfully. May 17 00:37:52.856962 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:37:52.857159 systemd-logind[1400]: Session 9 logged out. Waiting for processes to exit. May 17 00:37:52.858139 systemd-logind[1400]: Removed session 9. May 17 00:37:57.953131 systemd[1]: Started sshd@7-10.200.4.42:22-10.200.16.10:48516.service. May 17 00:37:58.546923 sshd[3766]: Accepted publickey for core from 10.200.16.10 port 48516 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:58.548509 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:58.553917 systemd[1]: Started session-10.scope. May 17 00:37:58.554412 systemd-logind[1400]: New session 10 of user core. 
May 17 00:37:59.030050 sshd[3766]: pam_unix(sshd:session): session closed for user core May 17 00:37:59.033386 systemd[1]: sshd@7-10.200.4.42:22-10.200.16.10:48516.service: Deactivated successfully. May 17 00:37:59.034492 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:37:59.035349 systemd-logind[1400]: Session 10 logged out. Waiting for processes to exit. May 17 00:37:59.036114 systemd-logind[1400]: Removed session 10. May 17 00:38:04.133979 systemd[1]: Started sshd@8-10.200.4.42:22-10.200.16.10:37876.service. May 17 00:38:04.724325 sshd[3778]: Accepted publickey for core from 10.200.16.10 port 37876 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:04.725755 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:04.729548 systemd-logind[1400]: New session 11 of user core. May 17 00:38:04.731359 systemd[1]: Started session-11.scope. May 17 00:38:05.223257 sshd[3778]: pam_unix(sshd:session): session closed for user core May 17 00:38:05.226539 systemd[1]: sshd@8-10.200.4.42:22-10.200.16.10:37876.service: Deactivated successfully. May 17 00:38:05.227474 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:38:05.228195 systemd-logind[1400]: Session 11 logged out. Waiting for processes to exit. May 17 00:38:05.228963 systemd-logind[1400]: Removed session 11. May 17 00:38:05.322500 systemd[1]: Started sshd@9-10.200.4.42:22-10.200.16.10:37892.service. May 17 00:38:05.911403 sshd[3791]: Accepted publickey for core from 10.200.16.10 port 37892 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:05.912958 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:05.918071 systemd-logind[1400]: New session 12 of user core. May 17 00:38:05.918656 systemd[1]: Started session-12.scope. May 17 00:38:06.434278 sshd[3791]: pam_unix(sshd:session): session closed for user core May 17 00:38:06.437720 systemd[1]: sshd@9-10.200.4.42:22-10.200.16.10:37892.service: Deactivated successfully. May 17 00:38:06.438597 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:38:06.439321 systemd-logind[1400]: Session 12 logged out. Waiting for processes to exit. May 17 00:38:06.440109 systemd-logind[1400]: Removed session 12. May 17 00:38:06.534793 systemd[1]: Started sshd@10-10.200.4.42:22-10.200.16.10:37896.service. May 17 00:38:07.124253 sshd[3800]: Accepted publickey for core from 10.200.16.10 port 37896 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:07.125954 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:07.131191 systemd[1]: Started session-13.scope. May 17 00:38:07.132047 systemd-logind[1400]: New session 13 of user core. May 17 00:38:07.611631 sshd[3800]: pam_unix(sshd:session): session closed for user core May 17 00:38:07.614902 systemd[1]: sshd@10-10.200.4.42:22-10.200.16.10:37896.service: Deactivated successfully. May 17 00:38:07.616027 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:38:07.616837 systemd-logind[1400]: Session 13 logged out. Waiting for processes to exit. May 17 00:38:07.617821 systemd-logind[1400]: Removed session 13. May 17 00:38:12.715677 systemd[1]: Started sshd@11-10.200.4.42:22-10.200.16.10:49660.service. 
May 17 00:38:13.304715 sshd[3812]: Accepted publickey for core from 10.200.16.10 port 49660 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:13.306548 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:13.312442 systemd[1]: Started session-14.scope. May 17 00:38:13.314328 systemd-logind[1400]: New session 14 of user core. May 17 00:38:13.800329 sshd[3812]: pam_unix(sshd:session): session closed for user core May 17 00:38:13.803648 systemd[1]: sshd@11-10.200.4.42:22-10.200.16.10:49660.service: Deactivated successfully. May 17 00:38:13.804720 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:38:13.805540 systemd-logind[1400]: Session 14 logged out. Waiting for processes to exit. May 17 00:38:13.806526 systemd-logind[1400]: Removed session 14. May 17 00:38:18.900988 systemd[1]: Started sshd@12-10.200.4.42:22-10.200.16.10:44204.service. May 17 00:38:19.488177 sshd[3827]: Accepted publickey for core from 10.200.16.10 port 44204 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:19.489855 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:19.494595 systemd[1]: Started session-15.scope. May 17 00:38:19.495120 systemd-logind[1400]: New session 15 of user core. May 17 00:38:19.970303 sshd[3827]: pam_unix(sshd:session): session closed for user core May 17 00:38:19.973259 systemd[1]: sshd@12-10.200.4.42:22-10.200.16.10:44204.service: Deactivated successfully. May 17 00:38:19.974218 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:38:19.974862 systemd-logind[1400]: Session 15 logged out. Waiting for processes to exit. May 17 00:38:19.975685 systemd-logind[1400]: Removed session 15. May 17 00:38:20.071069 systemd[1]: Started sshd@13-10.200.4.42:22-10.200.16.10:44206.service. May 17 00:38:20.662424 sshd[3839]: Accepted publickey for core from 10.200.16.10 port 44206 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:20.664110 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:20.668162 systemd-logind[1400]: New session 16 of user core. May 17 00:38:20.670029 systemd[1]: Started session-16.scope. May 17 00:38:21.181405 sshd[3839]: pam_unix(sshd:session): session closed for user core May 17 00:38:21.184835 systemd[1]: sshd@13-10.200.4.42:22-10.200.16.10:44206.service: Deactivated successfully. May 17 00:38:21.185953 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:38:21.186839 systemd-logind[1400]: Session 16 logged out. Waiting for processes to exit. May 17 00:38:21.187670 systemd-logind[1400]: Removed session 16. May 17 00:38:21.281170 systemd[1]: Started sshd@14-10.200.4.42:22-10.200.16.10:44220.service. May 17 00:38:21.875459 sshd[3848]: Accepted publickey for core from 10.200.16.10 port 44220 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:21.879269 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:21.884291 systemd[1]: Started session-17.scope. May 17 00:38:21.884924 systemd-logind[1400]: New session 17 of user core. May 17 00:38:23.872238 sshd[3848]: pam_unix(sshd:session): session closed for user core May 17 00:38:23.875526 systemd[1]: sshd@14-10.200.4.42:22-10.200.16.10:44220.service: Deactivated successfully. May 17 00:38:23.876460 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:38:23.877158 systemd-logind[1400]: Session 17 logged out. 
Waiting for processes to exit. May 17 00:38:23.877989 systemd-logind[1400]: Removed session 17. May 17 00:38:23.972031 systemd[1]: Started sshd@15-10.200.4.42:22-10.200.16.10:44222.service. May 17 00:38:24.559579 sshd[3865]: Accepted publickey for core from 10.200.16.10 port 44222 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:24.561225 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:24.566025 systemd-logind[1400]: New session 18 of user core. May 17 00:38:24.566492 systemd[1]: Started session-18.scope. May 17 00:38:25.148204 sshd[3865]: pam_unix(sshd:session): session closed for user core May 17 00:38:25.151660 systemd[1]: sshd@15-10.200.4.42:22-10.200.16.10:44222.service: Deactivated successfully. May 17 00:38:25.152943 systemd-logind[1400]: Session 18 logged out. Waiting for processes to exit. May 17 00:38:25.153072 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:38:25.154264 systemd-logind[1400]: Removed session 18. May 17 00:38:25.250317 systemd[1]: Started sshd@16-10.200.4.42:22-10.200.16.10:44224.service. May 17 00:38:25.841537 sshd[3875]: Accepted publickey for core from 10.200.16.10 port 44224 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:25.842950 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:25.847716 systemd[1]: Started session-19.scope. May 17 00:38:25.848264 systemd-logind[1400]: New session 19 of user core. May 17 00:38:26.330285 sshd[3875]: pam_unix(sshd:session): session closed for user core May 17 00:38:26.334192 systemd[1]: sshd@16-10.200.4.42:22-10.200.16.10:44224.service: Deactivated successfully. May 17 00:38:26.335313 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:38:26.337437 systemd-logind[1400]: Session 19 logged out. Waiting for processes to exit. May 17 00:38:26.339899 systemd-logind[1400]: Removed session 19. May 17 00:38:31.431077 systemd[1]: Started sshd@17-10.200.4.42:22-10.200.16.10:42520.service. May 17 00:38:32.022273 sshd[3890]: Accepted publickey for core from 10.200.16.10 port 42520 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:32.023966 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:32.029515 systemd[1]: Started session-20.scope. May 17 00:38:32.029960 systemd-logind[1400]: New session 20 of user core. May 17 00:38:32.499168 sshd[3890]: pam_unix(sshd:session): session closed for user core May 17 00:38:32.502577 systemd[1]: sshd@17-10.200.4.42:22-10.200.16.10:42520.service: Deactivated successfully. May 17 00:38:32.503731 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:38:32.504659 systemd-logind[1400]: Session 20 logged out. Waiting for processes to exit. May 17 00:38:32.505723 systemd-logind[1400]: Removed session 20. May 17 00:38:37.600586 systemd[1]: Started sshd@18-10.200.4.42:22-10.200.16.10:42526.service. May 17 00:38:38.190795 sshd[3901]: Accepted publickey for core from 10.200.16.10 port 42526 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:38.192480 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:38.197332 systemd[1]: Started session-21.scope. May 17 00:38:38.197955 systemd-logind[1400]: New session 21 of user core. 
May 17 00:38:38.673587 sshd[3901]: pam_unix(sshd:session): session closed for user core May 17 00:38:38.676848 systemd[1]: sshd@18-10.200.4.42:22-10.200.16.10:42526.service: Deactivated successfully. May 17 00:38:38.677966 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:38:38.678856 systemd-logind[1400]: Session 21 logged out. Waiting for processes to exit. May 17 00:38:38.679815 systemd-logind[1400]: Removed session 21. May 17 00:38:43.774180 systemd[1]: Started sshd@19-10.200.4.42:22-10.200.16.10:48924.service. May 17 00:38:44.364433 sshd[3918]: Accepted publickey for core from 10.200.16.10 port 48924 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:44.366135 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:44.371698 systemd[1]: Started session-22.scope. May 17 00:38:44.372321 systemd-logind[1400]: New session 22 of user core. May 17 00:38:44.849102 sshd[3918]: pam_unix(sshd:session): session closed for user core May 17 00:38:44.852766 systemd-logind[1400]: Session 22 logged out. Waiting for processes to exit. May 17 00:38:44.853068 systemd[1]: sshd@19-10.200.4.42:22-10.200.16.10:48924.service: Deactivated successfully. May 17 00:38:44.854189 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:38:44.855193 systemd-logind[1400]: Removed session 22. May 17 00:38:44.948961 systemd[1]: Started sshd@20-10.200.4.42:22-10.200.16.10:48938.service. May 17 00:38:45.538443 sshd[3930]: Accepted publickey for core from 10.200.16.10 port 48938 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:45.539859 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:45.544842 systemd[1]: Started session-23.scope. May 17 00:38:45.545551 systemd-logind[1400]: New session 23 of user core. May 17 00:38:47.173054 env[1413]: time="2025-05-17T00:38:47.172979456Z" level=info msg="StopContainer for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" with timeout 30 (s)" May 17 00:38:47.174542 env[1413]: time="2025-05-17T00:38:47.174367473Z" level=info msg="Stop container \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" with signal terminated" May 17 00:38:47.189524 systemd[1]: run-containerd-runc-k8s.io-1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc-runc.V1KB8O.mount: Deactivated successfully. May 17 00:38:47.213907 env[1413]: time="2025-05-17T00:38:47.213840567Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:38:47.221296 env[1413]: time="2025-05-17T00:38:47.221228359Z" level=info msg="StopContainer for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" with timeout 2 (s)" May 17 00:38:47.221565 env[1413]: time="2025-05-17T00:38:47.221500163Z" level=info msg="Stop container \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" with signal terminated" May 17 00:38:47.225717 systemd[1]: cri-containerd-a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66.scope: Deactivated successfully. 
May 17 00:38:47.235961 systemd-networkd[1559]: lxc_health: Link DOWN May 17 00:38:47.235975 systemd-networkd[1559]: lxc_health: Lost carrier May 17 00:38:47.256440 systemd[1]: cri-containerd-1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc.scope: Deactivated successfully. May 17 00:38:47.256724 systemd[1]: cri-containerd-1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc.scope: Consumed 7.040s CPU time. May 17 00:38:47.261351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66-rootfs.mount: Deactivated successfully. May 17 00:38:47.279101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc-rootfs.mount: Deactivated successfully. May 17 00:38:47.296532 env[1413]: time="2025-05-17T00:38:47.296479600Z" level=info msg="shim disconnected" id=a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66 May 17 00:38:47.296734 env[1413]: time="2025-05-17T00:38:47.296536101Z" level=warning msg="cleaning up after shim disconnected" id=a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66 namespace=k8s.io May 17 00:38:47.296734 env[1413]: time="2025-05-17T00:38:47.296552801Z" level=info msg="cleaning up dead shim" May 17 00:38:47.306327 env[1413]: time="2025-05-17T00:38:47.306264023Z" level=info msg="shim disconnected" id=1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc May 17 00:38:47.306327 env[1413]: time="2025-05-17T00:38:47.306326323Z" level=warning msg="cleaning up after shim disconnected" id=1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc namespace=k8s.io May 17 00:38:47.306505 env[1413]: time="2025-05-17T00:38:47.306338924Z" level=info msg="cleaning up dead shim" May 17 00:38:47.308422 env[1413]: time="2025-05-17T00:38:47.308389749Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n" May 17 00:38:47.313573 env[1413]: time="2025-05-17T00:38:47.313541914Z" level=info msg="StopContainer for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" returns successfully" May 17 00:38:47.314622 env[1413]: time="2025-05-17T00:38:47.314587227Z" level=info msg="StopPodSandbox for \"c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58\"" May 17 00:38:47.314714 env[1413]: time="2025-05-17T00:38:47.314681028Z" level=info msg="Container to stop \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:47.316730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58-shm.mount: Deactivated successfully. 
May 17 00:38:47.319925 env[1413]: time="2025-05-17T00:38:47.319895393Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" May 17 00:38:47.325033 env[1413]: time="2025-05-17T00:38:47.324979157Z" level=info msg="StopContainer for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" returns successfully" May 17 00:38:47.325545 env[1413]: time="2025-05-17T00:38:47.325516263Z" level=info msg="StopPodSandbox for \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\"" May 17 00:38:47.325637 env[1413]: time="2025-05-17T00:38:47.325584864Z" level=info msg="Container to stop \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:47.325637 env[1413]: time="2025-05-17T00:38:47.325605965Z" level=info msg="Container to stop \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:47.325637 env[1413]: time="2025-05-17T00:38:47.325621665Z" level=info msg="Container to stop \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:47.325823 env[1413]: time="2025-05-17T00:38:47.325637065Z" level=info msg="Container to stop \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:47.325823 env[1413]: time="2025-05-17T00:38:47.325651365Z" level=info msg="Container to stop \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:47.326846 systemd[1]: cri-containerd-c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58.scope: Deactivated successfully. May 17 00:38:47.336185 systemd[1]: cri-containerd-43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412.scope: Deactivated successfully. 
May 17 00:38:47.368506 env[1413]: time="2025-05-17T00:38:47.368452800Z" level=info msg="shim disconnected" id=c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58 May 17 00:38:47.368694 env[1413]: time="2025-05-17T00:38:47.368509401Z" level=warning msg="cleaning up after shim disconnected" id=c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58 namespace=k8s.io May 17 00:38:47.368694 env[1413]: time="2025-05-17T00:38:47.368522001Z" level=info msg="cleaning up dead shim" May 17 00:38:47.369103 env[1413]: time="2025-05-17T00:38:47.369063408Z" level=info msg="shim disconnected" id=43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412 May 17 00:38:47.369198 env[1413]: time="2025-05-17T00:38:47.369106908Z" level=warning msg="cleaning up after shim disconnected" id=43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412 namespace=k8s.io May 17 00:38:47.369198 env[1413]: time="2025-05-17T00:38:47.369118309Z" level=info msg="cleaning up dead shim" May 17 00:38:47.381304 env[1413]: time="2025-05-17T00:38:47.381270561Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4062 runtime=io.containerd.runc.v2\n" May 17 00:38:47.381639 env[1413]: time="2025-05-17T00:38:47.381600565Z" level=info msg="TearDown network for sandbox \"c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58\" successfully" May 17 00:38:47.381723 env[1413]: time="2025-05-17T00:38:47.381639765Z" level=info msg="StopPodSandbox for \"c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58\" returns successfully" May 17 00:38:47.384411 env[1413]: time="2025-05-17T00:38:47.384308999Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4063 runtime=io.containerd.runc.v2\n" May 17 00:38:47.384843 env[1413]: time="2025-05-17T00:38:47.384809705Z" level=info msg="TearDown network for sandbox \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" successfully" May 17 00:38:47.385059 env[1413]: time="2025-05-17T00:38:47.385030608Z" level=info msg="StopPodSandbox for \"43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412\" returns successfully" May 17 00:38:47.501978 kubelet[2369]: I0517 00:38:47.498830 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76452\" (UniqueName: \"kubernetes.io/projected/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-kube-api-access-76452\") pod \"1bb6bba3-d025-47c4-b3c6-8ebf34436dcc\" (UID: \"1bb6bba3-d025-47c4-b3c6-8ebf34436dcc\") " May 17 00:38:47.501978 kubelet[2369]: I0517 00:38:47.498883 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cni-path\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.501978 kubelet[2369]: I0517 00:38:47.498908 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-run\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.501978 kubelet[2369]: I0517 00:38:47.498933 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcpjd\" (UniqueName: 
\"kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-kube-api-access-jcpjd\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.501978 kubelet[2369]: I0517 00:38:47.498956 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-kernel\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.501978 kubelet[2369]: I0517 00:38:47.498978 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-hubble-tls\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.503701 kubelet[2369]: I0517 00:38:47.499015 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-bpf-maps\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.503701 kubelet[2369]: I0517 00:38:47.499034 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-etc-cni-netd\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.503701 kubelet[2369]: I0517 00:38:47.499054 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-cgroup\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.503701 kubelet[2369]: I0517 00:38:47.499078 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-hostproc\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.503701 kubelet[2369]: I0517 00:38:47.499114 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-config-path\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.503701 kubelet[2369]: I0517 00:38:47.499140 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-lib-modules\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.504074 kubelet[2369]: I0517 00:38:47.499161 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-xtables-lock\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.504074 kubelet[2369]: I0517 00:38:47.499182 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-net\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.504074 kubelet[2369]: I0517 00:38:47.499209 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40a8460-0b94-48e7-a405-13f4243542dd-clustermesh-secrets\") pod \"f40a8460-0b94-48e7-a405-13f4243542dd\" (UID: \"f40a8460-0b94-48e7-a405-13f4243542dd\") " May 17 00:38:47.504074 kubelet[2369]: I0517 00:38:47.499233 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-cilium-config-path\") pod \"1bb6bba3-d025-47c4-b3c6-8ebf34436dcc\" (UID: \"1bb6bba3-d025-47c4-b3c6-8ebf34436dcc\") " May 17 00:38:47.504074 kubelet[2369]: I0517 00:38:47.501863 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1bb6bba3-d025-47c4-b3c6-8ebf34436dcc" (UID: "1bb6bba3-d025-47c4-b3c6-8ebf34436dcc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:38:47.504299 kubelet[2369]: I0517 00:38:47.502110 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.504299 kubelet[2369]: I0517 00:38:47.502158 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cni-path" (OuterVolumeSpecName: "cni-path") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.504299 kubelet[2369]: I0517 00:38:47.502180 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.505288 kubelet[2369]: I0517 00:38:47.504577 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.505288 kubelet[2369]: I0517 00:38:47.504890 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.505288 kubelet[2369]: I0517 00:38:47.504922 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-hostproc" (OuterVolumeSpecName: "hostproc") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.505669 kubelet[2369]: I0517 00:38:47.505649 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.506258 kubelet[2369]: I0517 00:38:47.506231 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-kube-api-access-jcpjd" (OuterVolumeSpecName: "kube-api-access-jcpjd") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "kube-api-access-jcpjd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:47.508457 kubelet[2369]: I0517 00:38:47.508427 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:38:47.508556 kubelet[2369]: I0517 00:38:47.508480 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.508556 kubelet[2369]: I0517 00:38:47.508504 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.508556 kubelet[2369]: I0517 00:38:47.508525 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:47.509891 kubelet[2369]: I0517 00:38:47.509862 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:47.511297 kubelet[2369]: I0517 00:38:47.511268 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-kube-api-access-76452" (OuterVolumeSpecName: "kube-api-access-76452") pod "1bb6bba3-d025-47c4-b3c6-8ebf34436dcc" (UID: "1bb6bba3-d025-47c4-b3c6-8ebf34436dcc"). InnerVolumeSpecName "kube-api-access-76452". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:47.513007 kubelet[2369]: I0517 00:38:47.512970 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40a8460-0b94-48e7-a405-13f4243542dd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f40a8460-0b94-48e7-a405-13f4243542dd" (UID: "f40a8460-0b94-48e7-a405-13f4243542dd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:38:47.600468 kubelet[2369]: I0517 00:38:47.600412 2369 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76452\" (UniqueName: \"kubernetes.io/projected/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-kube-api-access-76452\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600468 kubelet[2369]: I0517 00:38:47.600455 2369 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cni-path\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600468 kubelet[2369]: I0517 00:38:47.600474 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-run\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600494 2369 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcpjd\" (UniqueName: \"kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-kube-api-access-jcpjd\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600508 2369 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40a8460-0b94-48e7-a405-13f4243542dd-hubble-tls\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600523 2369 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600535 2369 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-bpf-maps\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600549 2369 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-etc-cni-netd\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600563 2369 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-hostproc\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 
kubelet[2369]: I0517 00:38:47.600576 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-cgroup\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.600774 kubelet[2369]: I0517 00:38:47.600590 2369 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-lib-modules\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.601070 kubelet[2369]: I0517 00:38:47.600603 2369 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-xtables-lock\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.601070 kubelet[2369]: I0517 00:38:47.600616 2369 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40a8460-0b94-48e7-a405-13f4243542dd-host-proc-sys-net\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.601070 kubelet[2369]: I0517 00:38:47.600632 2369 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40a8460-0b94-48e7-a405-13f4243542dd-clustermesh-secrets\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.601070 kubelet[2369]: I0517 00:38:47.600647 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40a8460-0b94-48e7-a405-13f4243542dd-cilium-config-path\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.601070 kubelet[2369]: I0517 00:38:47.600661 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc-cilium-config-path\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:47.678956 systemd[1]: Removed slice kubepods-besteffort-pod1bb6bba3_d025_47c4_b3c6_8ebf34436dcc.slice. May 17 00:38:47.680619 systemd[1]: Removed slice kubepods-burstable-podf40a8460_0b94_48e7_a405_13f4243542dd.slice. May 17 00:38:47.680738 systemd[1]: kubepods-burstable-podf40a8460_0b94_48e7_a405_13f4243542dd.slice: Consumed 7.143s CPU time. 
May 17 00:38:47.813404 kubelet[2369]: E0517 00:38:47.813290 2369 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:38:48.162561 kubelet[2369]: I0517 00:38:48.162529 2369 scope.go:117] "RemoveContainer" containerID="1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc" May 17 00:38:48.167638 env[1413]: time="2025-05-17T00:38:48.167596274Z" level=info msg="RemoveContainer for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\"" May 17 00:38:48.185140 env[1413]: time="2025-05-17T00:38:48.182675561Z" level=info msg="RemoveContainer for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" returns successfully" May 17 00:38:48.185140 env[1413]: time="2025-05-17T00:38:48.183923177Z" level=info msg="RemoveContainer for \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\"" May 17 00:38:48.185550 kubelet[2369]: I0517 00:38:48.182915 2369 scope.go:117] "RemoveContainer" containerID="278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e" May 17 00:38:48.186677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412-rootfs.mount: Deactivated successfully. May 17 00:38:48.186803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43ed26482a7510d70b77138399dae5153afaed3c07fd1b01bc2fa85a8bfc0412-shm.mount: Deactivated successfully. May 17 00:38:48.186883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1ed13a9aeedc0103986eea04b833caa5b6c4c5c5a65046d96f1adb306e33e58-rootfs.mount: Deactivated successfully. May 17 00:38:48.186959 systemd[1]: var-lib-kubelet-pods-f40a8460\x2d0b94\x2d48e7\x2da405\x2d13f4243542dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djcpjd.mount: Deactivated successfully. May 17 00:38:48.187077 systemd[1]: var-lib-kubelet-pods-1bb6bba3\x2dd025\x2d47c4\x2db3c6\x2d8ebf34436dcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d76452.mount: Deactivated successfully. May 17 00:38:48.187165 systemd[1]: var-lib-kubelet-pods-f40a8460\x2d0b94\x2d48e7\x2da405\x2d13f4243542dd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:38:48.187253 systemd[1]: var-lib-kubelet-pods-f40a8460\x2d0b94\x2d48e7\x2da405\x2d13f4243542dd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 00:38:48.200189 env[1413]: time="2025-05-17T00:38:48.200149578Z" level=info msg="RemoveContainer for \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\" returns successfully" May 17 00:38:48.200345 kubelet[2369]: I0517 00:38:48.200321 2369 scope.go:117] "RemoveContainer" containerID="502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5" May 17 00:38:48.201449 env[1413]: time="2025-05-17T00:38:48.201406893Z" level=info msg="RemoveContainer for \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\"" May 17 00:38:48.214032 env[1413]: time="2025-05-17T00:38:48.213979849Z" level=info msg="RemoveContainer for \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\" returns successfully" May 17 00:38:48.215247 kubelet[2369]: I0517 00:38:48.214182 2369 scope.go:117] "RemoveContainer" containerID="1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64" May 17 00:38:48.216981 env[1413]: time="2025-05-17T00:38:48.216950186Z" level=info msg="RemoveContainer for \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\"" May 17 00:38:48.231424 env[1413]: time="2025-05-17T00:38:48.231384065Z" level=info msg="RemoveContainer for \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\" returns successfully" May 17 00:38:48.231621 kubelet[2369]: I0517 00:38:48.231586 2369 scope.go:117] "RemoveContainer" containerID="9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130" May 17 00:38:48.232589 env[1413]: time="2025-05-17T00:38:48.232563279Z" level=info msg="RemoveContainer for \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\"" May 17 00:38:48.246144 env[1413]: time="2025-05-17T00:38:48.246115247Z" level=info msg="RemoveContainer for \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\" returns successfully" May 17 00:38:48.246309 kubelet[2369]: I0517 00:38:48.246268 2369 scope.go:117] "RemoveContainer" containerID="1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc" May 17 00:38:48.246585 env[1413]: time="2025-05-17T00:38:48.246511252Z" level=error msg="ContainerStatus for \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\": not found" May 17 00:38:48.246718 kubelet[2369]: E0517 00:38:48.246695 2369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\": not found" containerID="1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc" May 17 00:38:48.246824 kubelet[2369]: I0517 00:38:48.246727 2369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc"} err="failed to get container status \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d6784c2444b05aab837cb5e0aa5a10010e773d1d75ed589dd5917466f4878cc\": not found" May 17 00:38:48.246893 kubelet[2369]: I0517 00:38:48.246826 2369 scope.go:117] "RemoveContainer" containerID="278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e" May 17 00:38:48.247102 env[1413]: time="2025-05-17T00:38:48.247055059Z" level=error msg="ContainerStatus for 
\"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\": not found" May 17 00:38:48.247254 kubelet[2369]: E0517 00:38:48.247233 2369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\": not found" containerID="278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e" May 17 00:38:48.247331 kubelet[2369]: I0517 00:38:48.247260 2369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e"} err="failed to get container status \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"278fbf617ee4acb992a756d19a8a3c111f1e357139f47c0322da8ac1cc090d0e\": not found" May 17 00:38:48.247331 kubelet[2369]: I0517 00:38:48.247281 2369 scope.go:117] "RemoveContainer" containerID="502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5" May 17 00:38:48.247512 env[1413]: time="2025-05-17T00:38:48.247464464Z" level=error msg="ContainerStatus for \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\": not found" May 17 00:38:48.247676 kubelet[2369]: E0517 00:38:48.247647 2369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\": not found" containerID="502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5" May 17 00:38:48.247755 kubelet[2369]: I0517 00:38:48.247672 2369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5"} err="failed to get container status \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"502e51df42dc0790ee0ef0d027c330a72e764874602c7cce6aae6092094645b5\": not found" May 17 00:38:48.247755 kubelet[2369]: I0517 00:38:48.247694 2369 scope.go:117] "RemoveContainer" containerID="1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64" May 17 00:38:48.247953 env[1413]: time="2025-05-17T00:38:48.247868269Z" level=error msg="ContainerStatus for \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\": not found" May 17 00:38:48.248093 kubelet[2369]: E0517 00:38:48.248069 2369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\": not found" containerID="1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64" May 17 00:38:48.248175 kubelet[2369]: I0517 00:38:48.248100 2369 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64"} err="failed to get container status \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b482aad2836cea34d8fa24bc81947368b5d7d6e0fbb138f9a902218b0b85a64\": not found" May 17 00:38:48.248175 kubelet[2369]: I0517 00:38:48.248120 2369 scope.go:117] "RemoveContainer" containerID="9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130" May 17 00:38:48.248359 env[1413]: time="2025-05-17T00:38:48.248308275Z" level=error msg="ContainerStatus for \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\": not found" May 17 00:38:48.248521 kubelet[2369]: E0517 00:38:48.248498 2369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\": not found" containerID="9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130" May 17 00:38:48.248607 kubelet[2369]: I0517 00:38:48.248525 2369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130"} err="failed to get container status \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ce01122f80e4aa8bb651cdd8df4076181015693696e7f09c146051c19bb4130\": not found" May 17 00:38:48.248607 kubelet[2369]: I0517 00:38:48.248546 2369 scope.go:117] "RemoveContainer" containerID="a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66" May 17 00:38:48.249483 env[1413]: time="2025-05-17T00:38:48.249456289Z" level=info msg="RemoveContainer for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\"" May 17 00:38:48.260908 env[1413]: time="2025-05-17T00:38:48.260874930Z" level=info msg="RemoveContainer for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" returns successfully" May 17 00:38:48.261074 kubelet[2369]: I0517 00:38:48.261054 2369 scope.go:117] "RemoveContainer" containerID="a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66" May 17 00:38:48.261288 env[1413]: time="2025-05-17T00:38:48.261239435Z" level=error msg="ContainerStatus for \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\": not found" May 17 00:38:48.261394 kubelet[2369]: E0517 00:38:48.261373 2369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\": not found" containerID="a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66" May 17 00:38:48.261483 kubelet[2369]: I0517 00:38:48.261406 2369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66"} err="failed to get container status \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"a1b7f68bfe8a58dae166e09467999df2a5e379fc02e6e9410212b7a0f84a0c66\": not found" May 17 00:38:49.208810 sshd[3930]: pam_unix(sshd:session): session closed for user core May 17 00:38:49.211885 systemd[1]: sshd@20-10.200.4.42:22-10.200.16.10:48938.service: Deactivated successfully. May 17 00:38:49.212791 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:38:49.213514 systemd-logind[1400]: Session 23 logged out. Waiting for processes to exit. May 17 00:38:49.214351 systemd-logind[1400]: Removed session 23. May 17 00:38:49.309790 systemd[1]: Started sshd@21-10.200.4.42:22-10.200.16.10:35022.service. May 17 00:38:49.674121 kubelet[2369]: I0517 00:38:49.674068 2369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb6bba3-d025-47c4-b3c6-8ebf34436dcc" path="/var/lib/kubelet/pods/1bb6bba3-d025-47c4-b3c6-8ebf34436dcc/volumes" May 17 00:38:49.674764 kubelet[2369]: I0517 00:38:49.674727 2369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" path="/var/lib/kubelet/pods/f40a8460-0b94-48e7-a405-13f4243542dd/volumes" May 17 00:38:49.899654 sshd[4095]: Accepted publickey for core from 10.200.16.10 port 35022 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:49.901371 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:49.906867 systemd[1]: Started session-24.scope. May 17 00:38:49.907468 systemd-logind[1400]: New session 24 of user core. May 17 00:38:50.768124 kubelet[2369]: E0517 00:38:50.768087 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" containerName="mount-cgroup" May 17 00:38:50.768656 kubelet[2369]: E0517 00:38:50.768635 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" containerName="clean-cilium-state" May 17 00:38:50.768751 kubelet[2369]: E0517 00:38:50.768739 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" containerName="cilium-agent" May 17 00:38:50.768835 kubelet[2369]: E0517 00:38:50.768823 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1bb6bba3-d025-47c4-b3c6-8ebf34436dcc" containerName="cilium-operator" May 17 00:38:50.768904 kubelet[2369]: E0517 00:38:50.768893 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" containerName="apply-sysctl-overwrites" May 17 00:38:50.768976 kubelet[2369]: E0517 00:38:50.768966 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" containerName="mount-bpf-fs" May 17 00:38:50.769120 kubelet[2369]: I0517 00:38:50.769105 2369 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb6bba3-d025-47c4-b3c6-8ebf34436dcc" containerName="cilium-operator" May 17 00:38:50.769212 kubelet[2369]: I0517 00:38:50.769201 2369 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40a8460-0b94-48e7-a405-13f4243542dd" containerName="cilium-agent" May 17 00:38:50.775874 systemd[1]: Created slice kubepods-burstable-pode2bc0352_e20c_43fd_be06_df7ca72fdefb.slice. 
May 17 00:38:50.821467 kubelet[2369]: I0517 00:38:50.821425 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-cgroup\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821467 kubelet[2369]: I0517 00:38:50.821474 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-xtables-lock\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821696 kubelet[2369]: I0517 00:38:50.821497 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-net\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821696 kubelet[2369]: I0517 00:38:50.821517 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-lib-modules\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821696 kubelet[2369]: I0517 00:38:50.821549 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-kernel\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821696 kubelet[2369]: I0517 00:38:50.821572 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-ipsec-secrets\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821696 kubelet[2369]: I0517 00:38:50.821595 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hubble-tls\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821696 kubelet[2369]: I0517 00:38:50.821628 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hostproc\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821952 kubelet[2369]: I0517 00:38:50.821648 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-etc-cni-netd\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821952 kubelet[2369]: I0517 00:38:50.821668 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7dh\" 
(UniqueName: \"kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-kube-api-access-mr7dh\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821952 kubelet[2369]: I0517 00:38:50.821704 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cni-path\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821952 kubelet[2369]: I0517 00:38:50.821728 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-clustermesh-secrets\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821952 kubelet[2369]: I0517 00:38:50.821754 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-run\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.821952 kubelet[2369]: I0517 00:38:50.821789 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-bpf-maps\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.822134 kubelet[2369]: I0517 00:38:50.821812 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-config-path\") pod \"cilium-9c725\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " pod="kube-system/cilium-9c725" May 17 00:38:50.831454 sshd[4095]: pam_unix(sshd:session): session closed for user core May 17 00:38:50.834597 systemd[1]: sshd@21-10.200.4.42:22-10.200.16.10:35022.service: Deactivated successfully. May 17 00:38:50.835455 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:38:50.836167 systemd-logind[1400]: Session 24 logged out. Waiting for processes to exit. May 17 00:38:50.837063 systemd-logind[1400]: Removed session 24. May 17 00:38:50.937311 systemd[1]: Started sshd@22-10.200.4.42:22-10.200.16.10:35032.service. May 17 00:38:51.083063 env[1413]: time="2025-05-17T00:38:51.081838537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9c725,Uid:e2bc0352-e20c-43fd-be06-df7ca72fdefb,Namespace:kube-system,Attempt:0,}" May 17 00:38:51.123362 env[1413]: time="2025-05-17T00:38:51.123294537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:51.123362 env[1413]: time="2025-05-17T00:38:51.123330937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:51.123362 env[1413]: time="2025-05-17T00:38:51.123345238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:51.123770 env[1413]: time="2025-05-17T00:38:51.123721942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90 pid=4118 runtime=io.containerd.runc.v2 May 17 00:38:51.135853 systemd[1]: Started cri-containerd-53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90.scope. May 17 00:38:51.165085 env[1413]: time="2025-05-17T00:38:51.165044141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9c725,Uid:e2bc0352-e20c-43fd-be06-df7ca72fdefb,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\"" May 17 00:38:51.169705 env[1413]: time="2025-05-17T00:38:51.169666497Z" level=info msg="CreateContainer within sandbox \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:38:51.206384 env[1413]: time="2025-05-17T00:38:51.206334839Z" level=info msg="CreateContainer within sandbox \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\"" May 17 00:38:51.206850 env[1413]: time="2025-05-17T00:38:51.206820745Z" level=info msg="StartContainer for \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\"" May 17 00:38:51.225004 systemd[1]: Started cri-containerd-4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84.scope. May 17 00:38:51.237386 systemd[1]: cri-containerd-4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84.scope: Deactivated successfully. 
May 17 00:38:51.323895 env[1413]: time="2025-05-17T00:38:51.323834257Z" level=info msg="shim disconnected" id=4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84 May 17 00:38:51.323895 env[1413]: time="2025-05-17T00:38:51.323893258Z" level=warning msg="cleaning up after shim disconnected" id=4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84 namespace=k8s.io May 17 00:38:51.323895 env[1413]: time="2025-05-17T00:38:51.323904258Z" level=info msg="cleaning up dead shim" May 17 00:38:51.332258 env[1413]: time="2025-05-17T00:38:51.332212558Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:38:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:38:51.332589 env[1413]: time="2025-05-17T00:38:51.332479861Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" May 17 00:38:51.332884 env[1413]: time="2025-05-17T00:38:51.332834366Z" level=error msg="Failed to pipe stdout of container \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\"" error="reading from a closed fifo" May 17 00:38:51.332960 env[1413]: time="2025-05-17T00:38:51.332910267Z" level=error msg="Failed to pipe stderr of container \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\"" error="reading from a closed fifo" May 17 00:38:51.343243 env[1413]: time="2025-05-17T00:38:51.342366681Z" level=error msg="StartContainer for \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:38:51.343356 kubelet[2369]: E0517 00:38:51.342618 2369 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84" May 17 00:38:51.343356 kubelet[2369]: E0517 00:38:51.342815 2369 kuberuntime_manager.go:1274] "Unhandled Error" err=< May 17 00:38:51.343356 kubelet[2369]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:38:51.343356 kubelet[2369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:38:51.343356 kubelet[2369]: rm /hostbin/cilium-mount May 17 00:38:51.343627 kubelet[2369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr7dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9c725_kube-system(e2bc0352-e20c-43fd-be06-df7ca72fdefb): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:38:51.343627 kubelet[2369]: > logger="UnhandledError" May 17 00:38:51.344297 kubelet[2369]: E0517 00:38:51.344217 2369 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9c725" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" May 17 00:38:51.547760 sshd[4108]: Accepted publickey for core from 10.200.16.10 port 35032 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:51.549529 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:51.555117 systemd[1]: Started session-25.scope. May 17 00:38:51.555772 systemd-logind[1400]: New session 25 of user core. May 17 00:38:52.041472 sshd[4108]: pam_unix(sshd:session): session closed for user core May 17 00:38:52.045113 systemd[1]: sshd@22-10.200.4.42:22-10.200.16.10:35032.service: Deactivated successfully. May 17 00:38:52.046916 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:38:52.047986 systemd-logind[1400]: Session 25 logged out. Waiting for processes to exit. May 17 00:38:52.048918 systemd-logind[1400]: Removed session 25. May 17 00:38:52.141972 systemd[1]: Started sshd@23-10.200.4.42:22-10.200.16.10:35040.service. 
May 17 00:38:52.182299 env[1413]: time="2025-05-17T00:38:52.182037895Z" level=info msg="CreateContainer within sandbox \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 17 00:38:52.218322 env[1413]: time="2025-05-17T00:38:52.218277828Z" level=info msg="CreateContainer within sandbox \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\"" May 17 00:38:52.218834 env[1413]: time="2025-05-17T00:38:52.218803835Z" level=info msg="StartContainer for \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\"" May 17 00:38:52.248441 systemd[1]: Started cri-containerd-e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318.scope. May 17 00:38:52.259297 systemd[1]: cri-containerd-e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318.scope: Deactivated successfully. May 17 00:38:52.280601 env[1413]: time="2025-05-17T00:38:52.280544773Z" level=info msg="shim disconnected" id=e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318 May 17 00:38:52.280601 env[1413]: time="2025-05-17T00:38:52.280600674Z" level=warning msg="cleaning up after shim disconnected" id=e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318 namespace=k8s.io May 17 00:38:52.280882 env[1413]: time="2025-05-17T00:38:52.280612074Z" level=info msg="cleaning up dead shim" May 17 00:38:52.288843 env[1413]: time="2025-05-17T00:38:52.288791972Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4226 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:38:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:38:52.289139 env[1413]: time="2025-05-17T00:38:52.289077575Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" May 17 00:38:52.291110 env[1413]: time="2025-05-17T00:38:52.291058999Z" level=error msg="Failed to pipe stdout of container \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\"" error="reading from a closed fifo" May 17 00:38:52.292625 env[1413]: time="2025-05-17T00:38:52.292054311Z" level=error msg="Failed to pipe stderr of container \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\"" error="reading from a closed fifo" May 17 00:38:52.297744 env[1413]: time="2025-05-17T00:38:52.297697878Z" level=error msg="StartContainer for \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:38:52.297984 kubelet[2369]: E0517 00:38:52.297941 2369 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318" May 17 00:38:52.298353 kubelet[2369]: E0517 00:38:52.298127 2369 kuberuntime_manager.go:1274] "Unhandled Error" err=< May 17 00:38:52.298353 kubelet[2369]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:38:52.298353 kubelet[2369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:38:52.298353 kubelet[2369]: rm /hostbin/cilium-mount May 17 00:38:52.298494 kubelet[2369]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr7dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9c725_kube-system(e2bc0352-e20c-43fd-be06-df7ca72fdefb): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:38:52.298494 kubelet[2369]: > logger="UnhandledError" May 17 00:38:52.299521 kubelet[2369]: E0517 00:38:52.299486 2369 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9c725" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" May 17 00:38:52.619648 kubelet[2369]: I0517 00:38:52.619509 2369 setters.go:600] "Node became not ready" node="ci-3510.3.7-n-b02eecf252" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:38:52Z","lastTransitionTime":"2025-05-17T00:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized"} May 17 00:38:52.731901 sshd[4200]: Accepted publickey for core from 10.200.16.10 port 35040 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:52.733447 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:52.738287 systemd[1]: Started session-26.scope. May 17 00:38:52.738724 systemd-logind[1400]: New session 26 of user core. May 17 00:38:52.814724 kubelet[2369]: E0517 00:38:52.814671 2369 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:38:52.940739 systemd[1]: run-containerd-runc-k8s.io-e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318-runc.wBgiFm.mount: Deactivated successfully. May 17 00:38:52.940890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318-rootfs.mount: Deactivated successfully. May 17 00:38:53.183304 kubelet[2369]: I0517 00:38:53.183273 2369 scope.go:117] "RemoveContainer" containerID="4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84" May 17 00:38:53.183820 env[1413]: time="2025-05-17T00:38:53.183780761Z" level=info msg="StopPodSandbox for \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\"" May 17 00:38:53.184331 env[1413]: time="2025-05-17T00:38:53.184302167Z" level=info msg="Container to stop \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:53.184454 env[1413]: time="2025-05-17T00:38:53.184430769Z" level=info msg="Container to stop \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:53.191010 env[1413]: time="2025-05-17T00:38:53.184625671Z" level=info msg="RemoveContainer for \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\"" May 17 00:38:53.189440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90-shm.mount: Deactivated successfully. May 17 00:38:53.200324 env[1413]: time="2025-05-17T00:38:53.200290557Z" level=info msg="RemoveContainer for \"4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84\" returns successfully" May 17 00:38:53.201102 systemd[1]: cri-containerd-53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90.scope: Deactivated successfully. May 17 00:38:53.226449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90-rootfs.mount: Deactivated successfully. 
May 17 00:38:53.245043 env[1413]: time="2025-05-17T00:38:53.244969687Z" level=info msg="shim disconnected" id=53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90 May 17 00:38:53.245233 env[1413]: time="2025-05-17T00:38:53.245048988Z" level=warning msg="cleaning up after shim disconnected" id=53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90 namespace=k8s.io May 17 00:38:53.245233 env[1413]: time="2025-05-17T00:38:53.245062788Z" level=info msg="cleaning up dead shim" May 17 00:38:53.253108 env[1413]: time="2025-05-17T00:38:53.253072083Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4262 runtime=io.containerd.runc.v2\n" May 17 00:38:53.253389 env[1413]: time="2025-05-17T00:38:53.253354586Z" level=info msg="TearDown network for sandbox \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\" successfully" May 17 00:38:53.253389 env[1413]: time="2025-05-17T00:38:53.253383887Z" level=info msg="StopPodSandbox for \"53c59127e7abfa037f6ea54933208c4318a02a71a5c54aba9aa00a8bd7a21c90\" returns successfully" May 17 00:38:53.336670 kubelet[2369]: I0517 00:38:53.336619 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hubble-tls\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.336670 kubelet[2369]: I0517 00:38:53.336667 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-etc-cni-netd\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336693 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-kernel\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336721 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-config-path\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336744 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-xtables-lock\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336769 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hostproc\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336794 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-lib-modules\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: 
\"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336820 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-run\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336854 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-ipsec-secrets\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336885 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr7dh\" (UniqueName: \"kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-kube-api-access-mr7dh\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336910 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cni-path\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336937 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-clustermesh-secrets\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.336963 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-net\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.337033 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-cgroup\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.337064 2369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-bpf-maps\") pod \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\" (UID: \"e2bc0352-e20c-43fd-be06-df7ca72fdefb\") " May 17 00:38:53.337337 kubelet[2369]: I0517 00:38:53.337163 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.338508 kubelet[2369]: I0517 00:38:53.338196 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.338508 kubelet[2369]: I0517 00:38:53.338279 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.338508 kubelet[2369]: I0517 00:38:53.338308 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.343666 systemd[1]: var-lib-kubelet-pods-e2bc0352\x2de20c\x2d43fd\x2dbe06\x2ddf7ca72fdefb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:38:53.348932 systemd[1]: var-lib-kubelet-pods-e2bc0352\x2de20c\x2d43fd\x2dbe06\x2ddf7ca72fdefb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:38:53.350694 kubelet[2369]: I0517 00:38:53.350661 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.350796 kubelet[2369]: I0517 00:38:53.350703 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.350796 kubelet[2369]: I0517 00:38:53.350724 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.350954 kubelet[2369]: I0517 00:38:53.350798 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:38:53.350954 kubelet[2369]: I0517 00:38:53.350858 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:53.351160 kubelet[2369]: I0517 00:38:53.351062 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:38:53.351284 kubelet[2369]: I0517 00:38:53.351263 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.351414 kubelet[2369]: I0517 00:38:53.351397 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.351519 kubelet[2369]: I0517 00:38:53.351504 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:53.352429 kubelet[2369]: I0517 00:38:53.352401 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-kube-api-access-mr7dh" (OuterVolumeSpecName: "kube-api-access-mr7dh") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "kube-api-access-mr7dh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:53.354404 kubelet[2369]: I0517 00:38:53.354376 2369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2bc0352-e20c-43fd-be06-df7ca72fdefb" (UID: "e2bc0352-e20c-43fd-be06-df7ca72fdefb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:38:53.437854 kubelet[2369]: I0517 00:38:53.437811 2369 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hostproc\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.437854 kubelet[2369]: I0517 00:38:53.437846 2369 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-xtables-lock\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437876 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-run\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437888 2369 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-lib-modules\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437900 2369 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-clustermesh-secrets\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437910 2369 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-net\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437922 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437934 2369 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr7dh\" (UniqueName: \"kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-kube-api-access-mr7dh\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437948 2369 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cni-path\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437970 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-cgroup\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.437982 2369 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-bpf-maps\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.438010 2369 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-etc-cni-netd\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.438023 2369 reconciler_common.go:293] 
"Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2bc0352-e20c-43fd-be06-df7ca72fdefb-hubble-tls\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.438034 2369 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2bc0352-e20c-43fd-be06-df7ca72fdefb-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.438156 kubelet[2369]: I0517 00:38:53.438046 2369 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2bc0352-e20c-43fd-be06-df7ca72fdefb-cilium-config-path\") on node \"ci-3510.3.7-n-b02eecf252\" DevicePath \"\"" May 17 00:38:53.677205 systemd[1]: Removed slice kubepods-burstable-pode2bc0352_e20c_43fd_be06_df7ca72fdefb.slice. May 17 00:38:53.940419 systemd[1]: var-lib-kubelet-pods-e2bc0352\x2de20c\x2d43fd\x2dbe06\x2ddf7ca72fdefb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmr7dh.mount: Deactivated successfully. May 17 00:38:53.940659 systemd[1]: var-lib-kubelet-pods-e2bc0352\x2de20c\x2d43fd\x2dbe06\x2ddf7ca72fdefb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:38:54.186836 kubelet[2369]: I0517 00:38:54.186800 2369 scope.go:117] "RemoveContainer" containerID="e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318" May 17 00:38:54.188689 env[1413]: time="2025-05-17T00:38:54.187965652Z" level=info msg="RemoveContainer for \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\"" May 17 00:38:54.203847 env[1413]: time="2025-05-17T00:38:54.203437234Z" level=info msg="RemoveContainer for \"e6c89e8fa20ac646f4f01a8ffdccf85e156d35017e2c475fefdb4245cdc52318\" returns successfully" May 17 00:38:54.234376 kubelet[2369]: E0517 00:38:54.234340 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" containerName="mount-cgroup" May 17 00:38:54.234376 kubelet[2369]: E0517 00:38:54.234373 2369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" containerName="mount-cgroup" May 17 00:38:54.234599 kubelet[2369]: I0517 00:38:54.234426 2369 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" containerName="mount-cgroup" May 17 00:38:54.234599 kubelet[2369]: I0517 00:38:54.234435 2369 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" containerName="mount-cgroup" May 17 00:38:54.241356 systemd[1]: Created slice kubepods-burstable-podbaabb8fe_16eb_477f_90c6_dc8aab65a46d.slice. 
May 17 00:38:54.345416 kubelet[2369]: I0517 00:38:54.345375 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-etc-cni-netd\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345460 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-xtables-lock\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345509 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/baabb8fe-16eb-477f-90c6-dc8aab65a46d-cilium-ipsec-secrets\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345554 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-hostproc\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345577 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-bpf-maps\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345603 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baabb8fe-16eb-477f-90c6-dc8aab65a46d-clustermesh-secrets\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345624 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baabb8fe-16eb-477f-90c6-dc8aab65a46d-hubble-tls\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345671 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baabb8fe-16eb-477f-90c6-dc8aab65a46d-cilium-config-path\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345697 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-host-proc-sys-net\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345721 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-cilium-run\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345742 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-cni-path\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345766 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-lib-modules\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345788 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-cilium-cgroup\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345808 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baabb8fe-16eb-477f-90c6-dc8aab65a46d-host-proc-sys-kernel\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.345895 kubelet[2369]: I0517 00:38:54.345828 2369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gdv\" (UniqueName: \"kubernetes.io/projected/baabb8fe-16eb-477f-90c6-dc8aab65a46d-kube-api-access-64gdv\") pod \"cilium-m6tqd\" (UID: \"baabb8fe-16eb-477f-90c6-dc8aab65a46d\") " pod="kube-system/cilium-m6tqd" May 17 00:38:54.429417 kubelet[2369]: W0517 00:38:54.429306 2369 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2bc0352_e20c_43fd_be06_df7ca72fdefb.slice/cri-containerd-4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84.scope WatchSource:0}: container "4e2fdd271a532aa8bba0411738f2655bd445cb631df901bb08f587dc2729de84" in namespace "k8s.io": not found May 17 00:38:54.548435 env[1413]: time="2025-05-17T00:38:54.547338877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6tqd,Uid:baabb8fe-16eb-477f-90c6-dc8aab65a46d,Namespace:kube-system,Attempt:0,}" May 17 00:38:54.586701 env[1413]: time="2025-05-17T00:38:54.586611539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:54.586701 env[1413]: time="2025-05-17T00:38:54.586661140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:54.586701 env[1413]: time="2025-05-17T00:38:54.586675040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:54.587235 env[1413]: time="2025-05-17T00:38:54.587181446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c pid=4290 runtime=io.containerd.runc.v2 May 17 00:38:54.600620 systemd[1]: Started cri-containerd-c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c.scope. May 17 00:38:54.625951 env[1413]: time="2025-05-17T00:38:54.625905801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6tqd,Uid:baabb8fe-16eb-477f-90c6-dc8aab65a46d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\"" May 17 00:38:54.629197 env[1413]: time="2025-05-17T00:38:54.629158839Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:38:54.667207 env[1413]: time="2025-05-17T00:38:54.667161586Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a\"" May 17 00:38:54.668753 env[1413]: time="2025-05-17T00:38:54.668719604Z" level=info msg="StartContainer for \"a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a\"" May 17 00:38:54.685286 systemd[1]: Started cri-containerd-a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a.scope. May 17 00:38:54.717172 env[1413]: time="2025-05-17T00:38:54.717114273Z" level=info msg="StartContainer for \"a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a\" returns successfully" May 17 00:38:54.720900 systemd[1]: cri-containerd-a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a.scope: Deactivated successfully. May 17 00:38:54.810541 env[1413]: time="2025-05-17T00:38:54.810391470Z" level=info msg="shim disconnected" id=a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a May 17 00:38:54.810870 env[1413]: time="2025-05-17T00:38:54.810837075Z" level=warning msg="cleaning up after shim disconnected" id=a539f80682d4d48f19a8611badeffc606c258f1d52239c2c8a3d5e56eb674a4a namespace=k8s.io May 17 00:38:54.810972 env[1413]: time="2025-05-17T00:38:54.810954377Z" level=info msg="cleaning up dead shim" May 17 00:38:54.820691 env[1413]: time="2025-05-17T00:38:54.820658091Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4374 runtime=io.containerd.runc.v2\n" May 17 00:38:55.193467 env[1413]: time="2025-05-17T00:38:55.193407355Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:38:55.238357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223721534.mount: Deactivated successfully. May 17 00:38:55.246372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587731077.mount: Deactivated successfully. 
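In the replacement pod the same init step now succeeds: mount-cgroup (Attempt:0, sandbox c6c0bb2c...) starts, runs for well under a second, and exits; the "shim disconnected" and "cleaning up dead shim" warnings that follow are the normal teardown of its per-container shim, not errors. For picking such events out of these journal lines, a small illustrative extractor (not a full logfmt parser; msg values containing escaped quotes would need real unescaping):

    import re

    # Matches the time/level/msg fields as they appear in the env[1413]
    # containerd lines above.
    EVENT = re.compile(
        r'time="(?P<ts>[^"]+)" level=(?P<level>\w+) msg="(?P<msg>[^"]*)"')

    def events(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield m.group("ts"), m.group("level"), m.group("msg")

    sample = ('May 17 00:38:54.810541 env[1413]: '
              'time="2025-05-17T00:38:54.810391470Z" level=info '
              'msg="shim disconnected"')
    for ts, level, msg in events([sample]):
        print(ts, level, msg)  # 2025-05-17T00:38:54.810391470Z info shim disconnected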
May 17 00:38:55.261088 env[1413]: time="2025-05-17T00:38:55.261045644Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0\"" May 17 00:38:55.262967 env[1413]: time="2025-05-17T00:38:55.261828453Z" level=info msg="StartContainer for \"d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0\"" May 17 00:38:55.278676 systemd[1]: Started cri-containerd-d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0.scope. May 17 00:38:55.308222 env[1413]: time="2025-05-17T00:38:55.308171093Z" level=info msg="StartContainer for \"d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0\" returns successfully" May 17 00:38:55.314259 systemd[1]: cri-containerd-d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0.scope: Deactivated successfully. May 17 00:38:55.348517 env[1413]: time="2025-05-17T00:38:55.348465163Z" level=info msg="shim disconnected" id=d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0 May 17 00:38:55.348517 env[1413]: time="2025-05-17T00:38:55.348514463Z" level=warning msg="cleaning up after shim disconnected" id=d88b94b1b294de74790c9d591d71d7cb5151bb691e5289776ea96001cf61cbd0 namespace=k8s.io May 17 00:38:55.348846 env[1413]: time="2025-05-17T00:38:55.348525264Z" level=info msg="cleaning up dead shim" May 17 00:38:55.356086 env[1413]: time="2025-05-17T00:38:55.356047951Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4435 runtime=io.containerd.runc.v2\n" May 17 00:38:55.672029 kubelet[2369]: E0517 00:38:55.671447 2369 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-lvvvl" podUID="1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9" May 17 00:38:55.674726 kubelet[2369]: I0517 00:38:55.674687 2369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bc0352-e20c-43fd-be06-df7ca72fdefb" path="/var/lib/kubelet/pods/e2bc0352-e20c-43fd-be06-df7ca72fdefb/volumes" May 17 00:38:56.196599 env[1413]: time="2025-05-17T00:38:56.196552632Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:38:56.244894 env[1413]: time="2025-05-17T00:38:56.244845390Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124\"" May 17 00:38:56.245457 env[1413]: time="2025-05-17T00:38:56.245425596Z" level=info msg="StartContainer for \"1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124\"" May 17 00:38:56.274612 systemd[1]: Started cri-containerd-1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124.scope. May 17 00:38:56.308149 systemd[1]: cri-containerd-1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124.scope: Deactivated successfully. 
May 17 00:38:56.311656 env[1413]: time="2025-05-17T00:38:56.311616862Z" level=info msg="StartContainer for \"1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124\" returns successfully" May 17 00:38:56.345824 env[1413]: time="2025-05-17T00:38:56.345777557Z" level=info msg="shim disconnected" id=1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124 May 17 00:38:56.345824 env[1413]: time="2025-05-17T00:38:56.345821257Z" level=warning msg="cleaning up after shim disconnected" id=1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124 namespace=k8s.io May 17 00:38:56.346136 env[1413]: time="2025-05-17T00:38:56.345832757Z" level=info msg="cleaning up dead shim" May 17 00:38:56.353384 env[1413]: time="2025-05-17T00:38:56.353351444Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4492 runtime=io.containerd.runc.v2\n" May 17 00:38:56.941123 systemd[1]: run-containerd-runc-k8s.io-1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124-runc.WphMK1.mount: Deactivated successfully. May 17 00:38:56.941273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a26d56c17674d246df7ca2ac773622946dc450c82f3e2badd2ab8d7f0be9124-rootfs.mount: Deactivated successfully. May 17 00:38:57.202239 env[1413]: time="2025-05-17T00:38:57.201937635Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:38:57.246157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111672794.mount: Deactivated successfully. May 17 00:38:57.268279 env[1413]: time="2025-05-17T00:38:57.268232595Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd\"" May 17 00:38:57.269144 env[1413]: time="2025-05-17T00:38:57.269110605Z" level=info msg="StartContainer for \"2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd\"" May 17 00:38:57.290034 systemd[1]: Started cri-containerd-2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd.scope. May 17 00:38:57.319481 systemd[1]: cri-containerd-2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd.scope: Deactivated successfully. 
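The ordering here looks inverted at first glance: systemd reports each cri-containerd-*.scope as deactivated a few milliseconds before containerd logs the corresponding "StartContainer ... returns successfully". That is expected for these one-shot init containers, which exit almost as soon as they start; the clean-cilium-state scope above, for instance, lived for roughly 29 ms. The arithmetic, on timestamps copied from the journal lines above:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    # Scope start/stop for clean-cilium-state (container 2d3fd9ad...),
    # taken from the systemd lines above.
    started = datetime.strptime("00:38:57.290034", fmt)
    stopped = datetime.strptime("00:38:57.319481", fmt)
    print(f"{(stopped - started).total_seconds() * 1000:.1f} ms")  # 29.4 ms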
May 17 00:38:57.323876 env[1413]: time="2025-05-17T00:38:57.323840032Z" level=info msg="StartContainer for \"2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd\" returns successfully" May 17 00:38:57.364977 env[1413]: time="2025-05-17T00:38:57.364922203Z" level=info msg="shim disconnected" id=2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd May 17 00:38:57.365240 env[1413]: time="2025-05-17T00:38:57.364982804Z" level=warning msg="cleaning up after shim disconnected" id=2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd namespace=k8s.io May 17 00:38:57.365240 env[1413]: time="2025-05-17T00:38:57.365010604Z" level=info msg="cleaning up dead shim" May 17 00:38:57.372698 env[1413]: time="2025-05-17T00:38:57.372650692Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4547 runtime=io.containerd.runc.v2\n" May 17 00:38:57.671858 kubelet[2369]: E0517 00:38:57.671377 2369 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-lvvvl" podUID="1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9" May 17 00:38:57.815908 kubelet[2369]: E0517 00:38:57.815860 2369 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:38:57.941165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3fd9ad8fd3c8fc9b46f0af803334f9c5acfaad0679d7658a45cb38cbb566dd-rootfs.mount: Deactivated successfully. May 17 00:38:58.207900 env[1413]: time="2025-05-17T00:38:58.206247428Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:38:58.239606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493670038.mount: Deactivated successfully. May 17 00:38:58.257419 env[1413]: time="2025-05-17T00:38:58.257370909Z" level=info msg="CreateContainer within sandbox \"c6c0bb2cf1c65534a228d757b7845b14702c3c6680c946b3821b4896fbfeb28c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e486e91b38f5e932d4b02aa026fb7474cba71500156a92a365154678c738d299\"" May 17 00:38:58.259420 env[1413]: time="2025-05-17T00:38:58.258180618Z" level=info msg="StartContainer for \"e486e91b38f5e932d4b02aa026fb7474cba71500156a92a365154678c738d299\"" May 17 00:38:58.279121 systemd[1]: Started cri-containerd-e486e91b38f5e932d4b02aa026fb7474cba71500156a92a365154678c738d299.scope. 
May 17 00:38:58.315843 env[1413]: time="2025-05-17T00:38:58.315781773Z" level=info msg="StartContainer for \"e486e91b38f5e932d4b02aa026fb7474cba71500156a92a365154678c738d299\" returns successfully" May 17 00:38:58.668035 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 00:38:59.283445 kubelet[2369]: I0517 00:38:59.283388 2369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m6tqd" podStartSLOduration=5.283372946 podStartE2EDuration="5.283372946s" podCreationTimestamp="2025-05-17 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:38:59.275282154 +0000 UTC m=+202.150710655" watchObservedRunningTime="2025-05-17 00:38:59.283372946 +0000 UTC m=+202.158801447" May 17 00:38:59.670815 kubelet[2369]: E0517 00:38:59.670755 2369 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-lvvvl" podUID="1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9" May 17 00:39:01.299214 systemd-networkd[1559]: lxc_health: Link UP May 17 00:39:01.329046 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:39:01.329619 systemd-networkd[1559]: lxc_health: Gained carrier May 17 00:39:01.454117 systemd[1]: run-containerd-runc-k8s.io-e486e91b38f5e932d4b02aa026fb7474cba71500156a92a365154678c738d299-runc.2hw4ep.mount: Deactivated successfully. May 17 00:39:01.671416 kubelet[2369]: E0517 00:39:01.671352 2369 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-lvvvl" podUID="1acd0fdc-21fe-471e-8bdf-e4dd9764d0f9" May 17 00:39:03.098271 systemd-networkd[1559]: lxc_health: Gained IPv6LL May 17 00:39:03.673968 systemd[1]: run-containerd-runc-k8s.io-e486e91b38f5e932d4b02aa026fb7474cba71500156a92a365154678c738d299-runc.pwEOEp.mount: Deactivated successfully. May 17 00:39:08.099907 sshd[4200]: pam_unix(sshd:session): session closed for user core May 17 00:39:08.103572 systemd[1]: sshd@23-10.200.4.42:22-10.200.16.10:35040.service: Deactivated successfully. May 17 00:39:08.104433 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:39:08.105620 systemd-logind[1400]: Session 26 logged out. Waiting for processes to exit. May 17 00:39:08.106482 systemd-logind[1400]: Removed session 26.
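The pod_startup_latency_tracker entry above carries enough data to check itself: with no image pull involved (both pulling timestamps are the Go zero time, 0001-01-01), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. Verifying with the logged values:

    from datetime import datetime, timezone

    # Values from the pod_startup_latency_tracker line for cilium-m6tqd.
    created = datetime(2025, 5, 17, 0, 38, 54, tzinfo=timezone.utc)
    observed = datetime(2025, 5, 17, 0, 38, 59, 283373, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 5.283373, logged as 5.283372946s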