May 10 00:44:41.048337 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025
May 10 00:44:41.048371 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:44:41.048386 kernel: BIOS-provided physical RAM map:
May 10 00:44:41.048396 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 10 00:44:41.048406 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 10 00:44:41.048416 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 10 00:44:41.048432 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
May 10 00:44:41.048443 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 10 00:44:41.048454 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 10 00:44:41.048465 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 10 00:44:41.048475 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 10 00:44:41.048486 kernel: printk: bootconsole [earlyser0] enabled
May 10 00:44:41.048497 kernel: NX (Execute Disable) protection: active
May 10 00:44:41.048508 kernel: efi: EFI v2.70 by Microsoft
May 10 00:44:41.048524 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
May 10 00:44:41.048536 kernel: random: crng init done
May 10 00:44:41.048547 kernel: SMBIOS 3.1.0 present.
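
The usable e820 ranges above add up to the 8387460K total that the memory allocator reports further down (after the kernel re-marks the first 4 KiB page as reserved). A minimal Python check of that arithmetic, assuming this journal is saved to a file (the boot.log path is hypothetical):

    import re

    log = open("boot.log").read()  # hypothetical: this journal saved to a file
    usable = re.findall(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", log)
    total = sum(int(end, 16) - int(start, 16) + 1 for start, end in usable)
    # "e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" drops one page:
    print((total - 0x1000) // 1024, "KiB")  # -> 8387460, matching the Memory: line
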
May 10 00:44:41.048559 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 10 00:44:41.048570 kernel: Hypervisor detected: Microsoft Hyper-V
May 10 00:44:41.048581 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 10 00:44:41.048593 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
May 10 00:44:41.048604 kernel: Hyper-V: Nested features: 0x1e0101
May 10 00:44:41.048619 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 10 00:44:41.048631 kernel: Hyper-V: Using hypercall for remote TLB flush
May 10 00:44:41.048642 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 10 00:44:41.048654 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 10 00:44:41.048666 kernel: tsc: Detected 2593.907 MHz processor
May 10 00:44:41.048678 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 00:44:41.048691 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 00:44:41.048703 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 10 00:44:41.048715 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 00:44:41.048727 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 10 00:44:41.048741 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 10 00:44:41.048753 kernel: Using GB pages for direct mapping
May 10 00:44:41.048765 kernel: Secure boot disabled
May 10 00:44:41.048777 kernel: ACPI: Early table checksum verification disabled
May 10 00:44:41.048788 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 10 00:44:41.048800 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048835 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048847 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 10 00:44:41.048867 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 10 00:44:41.048880 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048893 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048906 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048919 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048931 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048947 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048960 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 10 00:44:41.048973 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 10 00:44:41.048986 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 10 00:44:41.048999 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 10 00:44:41.049012 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 10 00:44:41.049024 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 10 00:44:41.049037 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 10 00:44:41.049052 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 10 00:44:41.049065 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 10 00:44:41.049078 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 10 00:44:41.049091 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 10 00:44:41.049103 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 10 00:44:41.049116 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 10 00:44:41.049129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 10 00:44:41.049142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 10 00:44:41.049155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 10 00:44:41.049170 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 10 00:44:41.049183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 10 00:44:41.049196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 10 00:44:41.049209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 10 00:44:41.049222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 10 00:44:41.049235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 10 00:44:41.049248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 10 00:44:41.049261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 10 00:44:41.049273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 10 00:44:41.049289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 10 00:44:41.049302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 10 00:44:41.049314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 10 00:44:41.049327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 10 00:44:41.049340 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 10 00:44:41.049353 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 10 00:44:41.049366 kernel: Zone ranges:
May 10 00:44:41.049378 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 00:44:41.049391 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 10 00:44:41.049406 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 10 00:44:41.049419 kernel: Movable zone start for each node
May 10 00:44:41.049432 kernel: Early memory node ranges
May 10 00:44:41.049445 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 10 00:44:41.049458 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 10 00:44:41.049470 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 10 00:44:41.049483 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 10 00:44:41.049495 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 10 00:44:41.049509 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 00:44:41.049524 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 10 00:44:41.049537 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 10 00:44:41.049549 kernel: ACPI: PM-Timer IO Port: 0x408
May 10 00:44:41.049562 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 10 00:44:41.049575 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 10 00:44:41.049588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 00:44:41.049601 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 00:44:41.049614 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 10 00:44:41.049627 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 10 00:44:41.049642 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 10 00:44:41.049655 kernel: Booting paravirtualized kernel on Hyper-V
May 10 00:44:41.049668 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 00:44:41.049681 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 10 00:44:41.049693 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 10 00:44:41.049706 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 10 00:44:41.049718 kernel: pcpu-alloc: [0] 0 1
May 10 00:44:41.049731 kernel: Hyper-V: PV spinlocks enabled
May 10 00:44:41.049743 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 00:44:41.049758 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 10 00:44:41.049771 kernel: Policy zone: Normal
May 10 00:44:41.049786 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:44:41.049800 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:44:41.052009 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 10 00:44:41.052033 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 00:44:41.052051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:44:41.052059 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 308056K reserved, 0K cma-reserved)
May 10 00:44:41.052074 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 10 00:44:41.052084 kernel: ftrace: allocating 34584 entries in 136 pages
May 10 00:44:41.052102 kernel: ftrace: allocated 136 pages with 2 groups
May 10 00:44:41.052114 kernel: rcu: Hierarchical RCU implementation.
May 10 00:44:41.052123 kernel: rcu: RCU event tracing is enabled.
May 10 00:44:41.052133 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 10 00:44:41.052144 kernel: Rude variant of Tasks RCU enabled.
May 10 00:44:41.052152 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:44:41.052159 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:44:41.052170 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 10 00:44:41.052178 kernel: Using NULL legacy PIC
May 10 00:44:41.052189 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 10 00:44:41.052200 kernel: Console: colour dummy device 80x25
May 10 00:44:41.052209 kernel: printk: console [tty1] enabled
May 10 00:44:41.052220 kernel: printk: console [ttyS0] enabled
May 10 00:44:41.052228 kernel: printk: bootconsole [earlyser0] disabled
May 10 00:44:41.052241 kernel: ACPI: Core revision 20210730
May 10 00:44:41.052250 kernel: Failed to register legacy timer interrupt
May 10 00:44:41.052259 kernel: APIC: Switch to symmetric I/O mode setup
May 10 00:44:41.052268 kernel: Hyper-V: Using IPI hypercalls
May 10 00:44:41.052277 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
May 10 00:44:41.052286 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 10 00:44:41.052295 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 10 00:44:41.052305 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 00:44:41.052313 kernel: Spectre V2 : Mitigation: Retpolines
May 10 00:44:41.052323 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 00:44:41.052334 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 10 00:44:41.052343 kernel: RETBleed: Vulnerable
May 10 00:44:41.052354 kernel: Speculative Store Bypass: Vulnerable
May 10 00:44:41.052364 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 10 00:44:41.052374 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 10 00:44:41.052385 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 00:44:41.052397 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 00:44:41.052409 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 00:44:41.052421 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 10 00:44:41.052432 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 10 00:44:41.052446 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 10 00:44:41.052457 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 00:44:41.052465 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 10 00:44:41.052476 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 10 00:44:41.052484 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 10 00:44:41.052494 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 10 00:44:41.052503 kernel: Freeing SMP alternatives memory: 32K
May 10 00:44:41.052511 kernel: pid_max: default: 32768 minimum: 301
May 10 00:44:41.052518 kernel: LSM: Security Framework initializing
May 10 00:44:41.052528 kernel: SELinux: Initializing.
May 10 00:44:41.052536 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 00:44:41.052546 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 00:44:41.052556 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 10 00:44:41.052567 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 10 00:44:41.052574 kernel: signal: max sigframe size: 3632
May 10 00:44:41.052582 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:44:41.052591 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 10 00:44:41.052600 kernel: smp: Bringing up secondary CPUs ...
May 10 00:44:41.052607 kernel: x86: Booting SMP configuration:
May 10 00:44:41.052618 kernel: .... node #0, CPUs: #1
May 10 00:44:41.052626 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 10 00:44:41.052639 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 10 00:44:41.052646 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:44:41.052653 kernel: smpboot: Max logical packages: 1
May 10 00:44:41.052661 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
May 10 00:44:41.052668 kernel: devtmpfs: initialized
May 10 00:44:41.052676 kernel: x86/mm: Memory block size: 128MB
May 10 00:44:41.052686 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 10 00:44:41.052693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:44:41.052701 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 10 00:44:41.052714 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:44:41.052721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:44:41.052732 kernel: audit: initializing netlink subsys (disabled)
May 10 00:44:41.052739 kernel: audit: type=2000 audit(1746837879.024:1): state=initialized audit_enabled=0 res=1
May 10 00:44:41.052746 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:44:41.052756 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 00:44:41.052764 kernel: cpuidle: using governor menu
May 10 00:44:41.052771 kernel: ACPI: bus type PCI registered
May 10 00:44:41.052782 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:44:41.052792 kernel: dca service started, version 1.12.1
May 10 00:44:41.052801 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
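
The BogoMIPS figures here are internally consistent: with calibration skipped, lpj is derived from the 2593.907 MHz TSC reported earlier, and the two-CPU total is just twice the per-CPU value. A quick editorial sanity check in Python (HZ=1000 is an assumption about this kernel's configuration):

    tsc_mhz = 2593.907                   # "tsc: Detected 2593.907 MHz processor"
    hz = 1000                            # assumed CONFIG_HZ for this build
    lpj = round(tsc_mhz * 1e6 / hz)      # -> 2593907, matches "(lpj=2593907)"
    bogomips = lpj * hz / 500000         # -> 5187.814, logged as 5187.81
    print(lpj, round(bogomips, 2), 2 * round(bogomips, 2))  # 2593907 5187.81 10375.62
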
May 10 00:44:41.052817 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:44:41.052826 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:44:41.052836 kernel: ACPI: Added _OSI(Module Device)
May 10 00:44:41.052845 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:44:41.052853 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:44:41.052861 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:44:41.052871 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 10 00:44:41.052882 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 10 00:44:41.052891 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 10 00:44:41.052898 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 00:44:41.052909 kernel: ACPI: Interpreter enabled
May 10 00:44:41.052917 kernel: ACPI: PM: (supports S0 S5)
May 10 00:44:41.052927 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 00:44:41.052935 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 00:44:41.052944 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 10 00:44:41.052952 kernel: iommu: Default domain type: Translated
May 10 00:44:41.052965 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 00:44:41.052973 kernel: vgaarb: loaded
May 10 00:44:41.052980 kernel: pps_core: LinuxPPS API ver. 1 registered
May 10 00:44:41.052988 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 10 00:44:41.052998 kernel: PTP clock support registered
May 10 00:44:41.053007 kernel: Registered efivars operations
May 10 00:44:41.053016 kernel: PCI: Using ACPI for IRQ routing
May 10 00:44:41.053023 kernel: PCI: System does not support PCI
May 10 00:44:41.053031 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 10 00:44:41.053043 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:44:41.053053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:44:41.053061 kernel: pnp: PnP ACPI init
May 10 00:44:41.053068 kernel: pnp: PnP ACPI: found 3 devices
May 10 00:44:41.053078 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 00:44:41.053086 kernel: NET: Registered PF_INET protocol family
May 10 00:44:41.053096 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 10 00:44:41.053104 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 10 00:44:41.053111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:44:41.053123 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 00:44:41.053131 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 10 00:44:41.053142 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 10 00:44:41.053149 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 10 00:44:41.053158 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 10 00:44:41.053166 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:44:41.053176 kernel: NET: Registered PF_XDP protocol family
May 10 00:44:41.053184 kernel: PCI: CLS 0 bytes, default 64
May 10 00:44:41.053191 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 10 00:44:41.053203 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
May 10 00:44:41.053212 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 10 00:44:41.053221 kernel: Initialise system trusted keyrings
May 10 00:44:41.053229 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 10 00:44:41.053239 kernel: Key type asymmetric registered
May 10 00:44:41.053246 kernel: Asymmetric key parser 'x509' registered
May 10 00:44:41.053253 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 10 00:44:41.053260 kernel: io scheduler mq-deadline registered
May 10 00:44:41.053268 kernel: io scheduler kyber registered
May 10 00:44:41.053277 kernel: io scheduler bfq registered
May 10 00:44:41.053284 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 10 00:44:41.053295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 00:44:41.053302 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 10 00:44:41.053310 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 10 00:44:41.053319 kernel: i8042: PNP: No PS/2 controller found.
May 10 00:44:41.053458 kernel: rtc_cmos 00:02: registered as rtc0
May 10 00:44:41.053546 kernel: rtc_cmos 00:02: setting system clock to 2025-05-10T00:44:40 UTC (1746837880)
May 10 00:44:41.053633 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 10 00:44:41.053644 kernel: intel_pstate: CPU model not supported
May 10 00:44:41.053653 kernel: efifb: probing for efifb
May 10 00:44:41.053661 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 10 00:44:41.053671 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 10 00:44:41.053679 kernel: efifb: scrolling: redraw
May 10 00:44:41.053689 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 10 00:44:41.053697 kernel: Console: switching to colour frame buffer device 128x48
May 10 00:44:41.053710 kernel: fb0: EFI VGA frame buffer device
May 10 00:44:41.053718 kernel: pstore: Registered efi as persistent store backend
May 10 00:44:41.053728 kernel: NET: Registered PF_INET6 protocol family
May 10 00:44:41.053735 kernel: Segment Routing with IPv6
May 10 00:44:41.053744 kernel: In-situ OAM (IOAM) with IPv6
May 10 00:44:41.053753 kernel: NET: Registered PF_PACKET protocol family
May 10 00:44:41.053764 kernel: Key type dns_resolver registered
May 10 00:44:41.053771 kernel: IPI shorthand broadcast: enabled
May 10 00:44:41.053779 kernel: sched_clock: Marking stable (931666100, 23460100)->(1128025200, -172899000)
May 10 00:44:41.053789 kernel: registered taskstats version 1
May 10 00:44:41.053802 kernel: Loading compiled-in X.509 certificates
May 10 00:44:41.053817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117'
May 10 00:44:41.053826 kernel: Key type .fscrypt registered
May 10 00:44:41.053834 kernel: Key type fscrypt-provisioning registered
May 10 00:44:41.053845 kernel: pstore: Using crash dump compression: deflate
May 10 00:44:41.053853 kernel: ima: No TPM chip found, activating TPM-bypass!
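
The rtc_cmos line pairs the wall-clock time with its Unix epoch value, and the audit records later in this log use the same clock (audit(1746837881.051:2) is 00:44:41.051). Converting the epoch back (editorial sketch, not part of the boot):

    from datetime import datetime, timezone
    # 1746837880 from "rtc_cmos 00:02: setting system clock to 2025-05-10T00:44:40 UTC"
    print(datetime.fromtimestamp(1746837880, tz=timezone.utc).isoformat())
    # -> 2025-05-10T00:44:40+00:00
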
May 10 00:44:41.053861 kernel: ima: Allocated hash algorithm: sha1
May 10 00:44:41.053871 kernel: ima: No architecture policies found
May 10 00:44:41.053883 kernel: clk: Disabling unused clocks
May 10 00:44:41.053891 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 10 00:44:41.053898 kernel: Write protecting the kernel read-only data: 28672k
May 10 00:44:41.053908 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 10 00:44:41.053916 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 10 00:44:41.053926 kernel: Run /init as init process
May 10 00:44:41.053934 kernel: with arguments:
May 10 00:44:41.053944 kernel: /init
May 10 00:44:41.053953 kernel: with environment:
May 10 00:44:41.053965 kernel: HOME=/
May 10 00:44:41.053972 kernel: TERM=linux
May 10 00:44:41.053981 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 00:44:41.053992 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 00:44:41.054004 systemd[1]: Detected virtualization microsoft.
May 10 00:44:41.054012 systemd[1]: Detected architecture x86-64.
May 10 00:44:41.054023 systemd[1]: Running in initrd.
May 10 00:44:41.054031 systemd[1]: No hostname configured, using default hostname.
May 10 00:44:41.054043 systemd[1]: Hostname set to .
May 10 00:44:41.054051 systemd[1]: Initializing machine ID from random generator.
May 10 00:44:41.054063 systemd[1]: Queued start job for default target initrd.target.
May 10 00:44:41.054071 systemd[1]: Started systemd-ask-password-console.path.
May 10 00:44:41.054082 systemd[1]: Reached target cryptsetup.target.
May 10 00:44:41.054089 systemd[1]: Reached target paths.target.
May 10 00:44:41.054099 systemd[1]: Reached target slices.target.
May 10 00:44:41.054108 systemd[1]: Reached target swap.target.
May 10 00:44:41.054121 systemd[1]: Reached target timers.target.
May 10 00:44:41.054129 systemd[1]: Listening on iscsid.socket.
May 10 00:44:41.054140 systemd[1]: Listening on iscsiuio.socket.
May 10 00:44:41.054149 systemd[1]: Listening on systemd-journald-audit.socket.
May 10 00:44:41.054159 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 10 00:44:41.054167 systemd[1]: Listening on systemd-journald.socket.
May 10 00:44:41.054177 systemd[1]: Listening on systemd-networkd.socket.
May 10 00:44:41.054186 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 00:44:41.054197 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 00:44:41.054207 systemd[1]: Reached target sockets.target.
May 10 00:44:41.054215 systemd[1]: Starting kmod-static-nodes.service...
May 10 00:44:41.054226 systemd[1]: Finished network-cleanup.service.
May 10 00:44:41.054235 systemd[1]: Starting systemd-fsck-usr.service...
May 10 00:44:41.054246 systemd[1]: Starting systemd-journald.service...
May 10 00:44:41.054254 systemd[1]: Starting systemd-modules-load.service...
May 10 00:44:41.054263 systemd[1]: Starting systemd-resolved.service...
May 10 00:44:41.054272 systemd[1]: Starting systemd-vconsole-setup.service...
May 10 00:44:41.054284 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:44:41.054294 kernel: audit: type=1130 audit(1746837881.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.054307 systemd-journald[183]: Journal started
May 10 00:44:41.054357 systemd-journald[183]: Runtime Journal (/run/log/journal/0bf088745eb3484d873d50681e0fd0fa) is 8.0M, max 159.0M, 151.0M free.
May 10 00:44:41.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.039847 systemd-modules-load[184]: Inserted module 'overlay'
May 10 00:44:41.073087 systemd[1]: Started systemd-journald.service.
May 10 00:44:41.078686 systemd[1]: Finished systemd-fsck-usr.service.
May 10 00:44:41.083576 systemd[1]: Finished systemd-vconsole-setup.service.
May 10 00:44:41.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.107825 kernel: audit: type=1130 audit(1746837881.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.108416 systemd[1]: Starting dracut-cmdline-ask.service...
May 10 00:44:41.114085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 10 00:44:41.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.131313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 10 00:44:41.145641 kernel: audit: type=1130 audit(1746837881.080:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.156605 systemd-resolved[185]: Positive Trust Anchors:
May 10 00:44:41.177520 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 00:44:41.177552 kernel: audit: type=1130 audit(1746837881.106:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.156800 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:44:41.156856 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 10 00:44:41.200051 kernel: Bridge firewalling registered
May 10 00:44:41.160044 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 10 00:44:41.161257 systemd[1]: Started systemd-resolved.service.
May 10 00:44:41.199665 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 10 00:44:41.211082 systemd[1]: Finished dracut-cmdline-ask.service.
May 10 00:44:41.214326 systemd[1]: Reached target nss-lookup.target.
May 10 00:44:41.218194 systemd[1]: Starting dracut-cmdline.service...
May 10 00:44:41.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.256901 kernel: audit: type=1130 audit(1746837881.147:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.256988 kernel: audit: type=1130 audit(1746837881.210:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.257073 dracut-cmdline[201]: dracut-dracut-053
May 10 00:44:41.257073 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:44:41.290229 kernel: SCSI subsystem initialized
May 10 00:44:41.290264 kernel: audit: type=1130 audit(1746837881.213:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.319088 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
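
The positive trust anchor that systemd-resolved logs is the root zone DS record for the 2017 root KSK: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256, per RFC 4509). Splitting the fields (editorial sketch):

    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, klass, rtype, key_tag, alg, digest_type, digest = ds.split()
    assert (klass, rtype, alg, digest_type) == ("IN", "DS", "8", "2")
    print(f"root KSK key tag {key_tag}, SHA-256 digest {digest}")
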
May 10 00:44:41.319168 kernel: device-mapper: uevent: version 1.0.3
May 10 00:44:41.325875 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 10 00:44:41.330043 systemd-modules-load[184]: Inserted module 'dm_multipath'
May 10 00:44:41.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.354872 kernel: audit: type=1130 audit(1746837881.336:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.333874 systemd[1]: Finished systemd-modules-load.service.
May 10 00:44:41.338344 systemd[1]: Starting systemd-sysctl.service...
May 10 00:44:41.366620 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:44:41.356257 systemd[1]: Finished systemd-sysctl.service.
May 10 00:44:41.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.382901 kernel: audit: type=1130 audit(1746837881.368:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.396832 kernel: iscsi: registered transport (tcp)
May 10 00:44:41.424493 kernel: iscsi: registered transport (qla4xxx)
May 10 00:44:41.424568 kernel: QLogic iSCSI HBA Driver
May 10 00:44:41.453995 systemd[1]: Finished dracut-cmdline.service.
May 10 00:44:41.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.459584 systemd[1]: Starting dracut-pre-udev.service...
May 10 00:44:41.510832 kernel: raid6: avx512x4 gen() 18332 MB/s
May 10 00:44:41.530821 kernel: raid6: avx512x4 xor() 7859 MB/s
May 10 00:44:41.551821 kernel: raid6: avx512x2 gen() 18336 MB/s
May 10 00:44:41.572826 kernel: raid6: avx512x2 xor() 28448 MB/s
May 10 00:44:41.592818 kernel: raid6: avx512x1 gen() 18294 MB/s
May 10 00:44:41.612818 kernel: raid6: avx512x1 xor() 25099 MB/s
May 10 00:44:41.633821 kernel: raid6: avx2x4 gen() 18196 MB/s
May 10 00:44:41.653820 kernel: raid6: avx2x4 xor() 7327 MB/s
May 10 00:44:41.673817 kernel: raid6: avx2x2 gen() 18243 MB/s
May 10 00:44:41.694819 kernel: raid6: avx2x2 xor() 20690 MB/s
May 10 00:44:41.714818 kernel: raid6: avx2x1 gen() 13781 MB/s
May 10 00:44:41.734819 kernel: raid6: avx2x1 xor() 18269 MB/s
May 10 00:44:41.755820 kernel: raid6: sse2x4 gen() 10696 MB/s
May 10 00:44:41.775816 kernel: raid6: sse2x4 xor() 6991 MB/s
May 10 00:44:41.795818 kernel: raid6: sse2x2 gen() 12287 MB/s
May 10 00:44:41.816821 kernel: raid6: sse2x2 xor() 7543 MB/s
May 10 00:44:41.836818 kernel: raid6: sse2x1 gen() 11330 MB/s
May 10 00:44:41.860133 kernel: raid6: sse2x1 xor() 5759 MB/s
May 10 00:44:41.860156 kernel: raid6: using algorithm avx512x2 gen() 18336 MB/s
May 10 00:44:41.860189 kernel: raid6: .... xor() 28448 MB/s, rmw enabled
May 10 00:44:41.868364 kernel: raid6: using avx512x2 recovery algorithm
May 10 00:44:41.883833 kernel: xor: automatically using best checksumming function avx
May 10 00:44:41.979834 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 10 00:44:41.988062 systemd[1]: Finished dracut-pre-udev.service.
May 10 00:44:41.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:41.992000 audit: BPF prog-id=7 op=LOAD
May 10 00:44:41.992000 audit: BPF prog-id=8 op=LOAD
May 10 00:44:41.994247 systemd[1]: Starting systemd-udevd.service...
May 10 00:44:42.008979 systemd-udevd[383]: Using default interface naming scheme 'v252'.
May 10 00:44:42.013707 systemd[1]: Started systemd-udevd.service.
May 10 00:44:42.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:42.023955 systemd[1]: Starting dracut-pre-trigger.service...
May 10 00:44:42.039638 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
May 10 00:44:42.072339 systemd[1]: Finished dracut-pre-trigger.service.
May 10 00:44:42.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:42.076229 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:44:42.112018 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:44:42.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:42.165834 kernel: cryptd: max_cpu_qlen set to 1000
May 10 00:44:42.194832 kernel: AVX2 version of gcm_enc/dec engaged.
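
The raid6 lines above are the kernel benchmarking each gen() implementation and keeping the fastest, avx512x2 at 18336 MB/s; the selection reduces to a max over the measured rates. Editorial sketch of that pick:

    gen_mb_s = {                      # gen() throughputs from the benchmark above
        "avx512x4": 18332, "avx512x2": 18336, "avx512x1": 18294,
        "avx2x4": 18196, "avx2x2": 18243, "avx2x1": 13781,
        "sse2x4": 10696, "sse2x2": 12287, "sse2x1": 11330,
    }
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(best, gen_mb_s[best])       # -> avx512x2 18336, as the kernel chose
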
May 10 00:44:42.198827 kernel: AES CTR mode by8 optimization enabled
May 10 00:44:42.198874 kernel: hv_vmbus: Vmbus version:5.2
May 10 00:44:42.224833 kernel: hv_vmbus: registering driver hv_storvsc
May 10 00:44:42.224889 kernel: hv_vmbus: registering driver hyperv_keyboard
May 10 00:44:42.228851 kernel: scsi host1: storvsc_host_t
May 10 00:44:42.235186 kernel: scsi host0: storvsc_host_t
May 10 00:44:42.242832 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 10 00:44:42.258736 kernel: hv_vmbus: registering driver hv_netvsc
May 10 00:44:42.258821 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 10 00:44:42.270841 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 10 00:44:42.270895 kernel: hid: raw HID events driver (C) Jiri Kosina
May 10 00:44:42.290830 kernel: hv_vmbus: registering driver hid_hyperv
May 10 00:44:42.306394 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 10 00:44:42.306464 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 10 00:44:42.323553 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 10 00:44:42.331021 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 10 00:44:42.331045 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 10 00:44:42.341820 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 10 00:44:42.367159 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 10 00:44:42.367341 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 10 00:44:42.367510 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 10 00:44:42.367680 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 10 00:44:42.367864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:44:42.367884 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 10 00:44:42.461450 kernel: hv_netvsc 7ced8d2e-95df-7ced-8d2e-95df7ced8d2e eth0: VF slot 1 added
May 10 00:44:42.485392 kernel: hv_vmbus: registering driver hv_pci
May 10 00:44:42.485469 kernel: hv_pci d8435e84-4aa1-483c-8b3e-31a9babcfaeb: PCI VMBus probing: Using version 0x10004
May 10 00:44:42.602978 kernel: hv_pci d8435e84-4aa1-483c-8b3e-31a9babcfaeb: PCI host bridge to bus 4aa1:00
May 10 00:44:42.603139 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (441)
May 10 00:44:42.603158 kernel: pci_bus 4aa1:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
May 10 00:44:42.603322 kernel: pci_bus 4aa1:00: No busn resource found for root bus, will use [bus 00-ff]
May 10 00:44:42.603461 kernel: pci 4aa1:00:02.0: [15b3:1016] type 00 class 0x020000
May 10 00:44:42.603625 kernel: pci 4aa1:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 10 00:44:42.603775 kernel: pci 4aa1:00:02.0: enabling Extended Tags
May 10 00:44:42.603944 kernel: pci 4aa1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4aa1:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 10 00:44:42.604093 kernel: pci_bus 4aa1:00: busn_res: [bus 00-ff] end is updated to 00
May 10 00:44:42.604231 kernel: pci 4aa1:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
May 10 00:44:42.521190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:44:42.545793 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
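
The sd line reports the same disk capacity in decimal and binary units: 63737856 logical blocks of 512 bytes each. Verifying the rounding (editorial sketch):

    blocks, block_size = 63737856, 512      # from the "sd 0:0:0:0: [sda]" line
    size = blocks * block_size              # 32,633,782,272 bytes
    print(f"{size / 10**9:.1f} GB")         # -> 32.6 (decimal gigabytes)
    print(f"{size / 2**30:.1f} GiB")        # -> 30.4 (binary gibibytes)
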
May 10 00:44:42.567995 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 10 00:44:42.585708 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 10 00:44:42.596392 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 10 00:44:42.618471 systemd[1]: Starting disk-uuid.service...
May 10 00:44:42.640832 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:44:42.650838 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:44:42.801837 kernel: mlx5_core 4aa1:00:02.0: firmware version: 14.30.5000
May 10 00:44:43.067386 kernel: mlx5_core 4aa1:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
May 10 00:44:43.067578 kernel: mlx5_core 4aa1:00:02.0: Supported tc offload range - chains: 1, prios: 1
May 10 00:44:43.067741 kernel: mlx5_core 4aa1:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing
May 10 00:44:43.067925 kernel: hv_netvsc 7ced8d2e-95df-7ced-8d2e-95df7ced8d2e eth0: VF registering: eth1
May 10 00:44:43.068079 kernel: mlx5_core 4aa1:00:02.0 eth1: joined to eth0
May 10 00:44:43.075827 kernel: mlx5_core 4aa1:00:02.0 enP19105s1: renamed from eth1
May 10 00:44:43.658831 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:44:43.659610 disk-uuid[553]: The operation has completed successfully.
May 10 00:44:43.731024 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 00:44:43.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:43.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:43.731133 systemd[1]: Finished disk-uuid.service.
May 10 00:44:43.739160 systemd[1]: Starting verity-setup.service...
May 10 00:44:43.764825 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 10 00:44:43.853509 systemd[1]: Found device dev-mapper-usr.device.
May 10 00:44:43.857129 systemd[1]: Mounting sysusr-usr.mount...
May 10 00:44:43.864367 systemd[1]: Finished verity-setup.service.
May 10 00:44:43.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:43.936836 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 10 00:44:43.937183 systemd[1]: Mounted sysusr-usr.mount.
May 10 00:44:43.941044 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 10 00:44:43.945297 systemd[1]: Starting ignition-setup.service...
May 10 00:44:43.951418 systemd[1]: Starting parse-ip-for-networkd.service...
May 10 00:44:43.972943 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:44:43.972996 kernel: BTRFS info (device sda6): using free space tree
May 10 00:44:43.973009 kernel: BTRFS info (device sda6): has skinny extents
May 10 00:44:44.009967 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 10 00:44:44.027693 systemd[1]: Finished parse-ip-for-networkd.service.
May 10 00:44:44.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.032000 audit: BPF prog-id=9 op=LOAD
May 10 00:44:44.033664 systemd[1]: Starting systemd-networkd.service...
May 10 00:44:44.057254 systemd-networkd[810]: lo: Link UP
May 10 00:44:44.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.057265 systemd-networkd[810]: lo: Gained carrier
May 10 00:44:44.057916 systemd-networkd[810]: Enumeration completed
May 10 00:44:44.058008 systemd[1]: Started systemd-networkd.service.
May 10 00:44:44.061267 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:44:44.062176 systemd[1]: Reached target network.target.
May 10 00:44:44.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.068753 systemd[1]: Starting iscsiuio.service...
May 10 00:44:44.077926 systemd[1]: Started iscsiuio.service.
May 10 00:44:44.085055 systemd[1]: Starting iscsid.service...
May 10 00:44:44.091018 systemd[1]: Finished ignition-setup.service.
May 10 00:44:44.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.099975 iscsid[816]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:44:44.099975 iscsid[816]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 10 00:44:44.099975 iscsid[816]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 10 00:44:44.099975 iscsid[816]: If using hardware iscsi like qla4xxx this message can be ignored.
May 10 00:44:44.099975 iscsid[816]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:44:44.099975 iscsid[816]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 10 00:44:44.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.095176 systemd[1]: Starting ignition-fetch-offline.service...
May 10 00:44:44.099082 systemd[1]: Started iscsid.service.
May 10 00:44:44.143566 kernel: mlx5_core 4aa1:00:02.0 enP19105s1: Link up
May 10 00:44:44.139460 systemd[1]: Starting dracut-initqueue.service...
May 10 00:44:44.156055 systemd[1]: Finished dracut-initqueue.service.
May 10 00:44:44.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.177601 kernel: hv_netvsc 7ced8d2e-95df-7ced-8d2e-95df7ced8d2e eth0: Data path switched to VF: enP19105s1
May 10 00:44:44.158509 systemd[1]: Reached target remote-fs-pre.target.
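
iscsid's warning above spells out the expected format for /etc/iscsi/initiatorname.iscsi. A hypothetical sketch that writes a conforming file (the iqn value is illustrative, not from this system):

    from pathlib import Path

    initiator = "iqn.2024-01.io.example:node1"   # hypothetical initiator name
    Path("/etc/iscsi").mkdir(parents=True, exist_ok=True)
    Path("/etc/iscsi/initiatorname.iscsi").write_text(f"InitiatorName={initiator}\n")
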
May 10 00:44:44.187509 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 10 00:44:44.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:44.162867 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:44:44.165297 systemd[1]: Reached target remote-fs.target.
May 10 00:44:44.168531 systemd[1]: Starting dracut-pre-mount.service...
May 10 00:44:44.180401 systemd[1]: Finished dracut-pre-mount.service.
May 10 00:44:44.180686 systemd-networkd[810]: enP19105s1: Link UP
May 10 00:44:44.180789 systemd-networkd[810]: eth0: Link UP
May 10 00:44:44.186351 systemd-networkd[810]: eth0: Gained carrier
May 10 00:44:44.195393 systemd-networkd[810]: enP19105s1: Gained carrier
May 10 00:44:44.216883 systemd-networkd[810]: eth0: DHCPv4 address 10.200.8.31/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 10 00:44:44.991992 ignition[817]: Ignition 2.14.0
May 10 00:44:44.992009 ignition[817]: Stage: fetch-offline
May 10 00:44:44.992100 ignition[817]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:44:44.992152 ignition[817]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 10 00:44:45.010721 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 10 00:44:45.039889 ignition[817]: parsed url from cmdline: ""
May 10 00:44:45.039921 ignition[817]: no config URL provided
May 10 00:44:45.039960 ignition[817]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:44:45.039979 ignition[817]: no config at "/usr/lib/ignition/user.ign"
May 10 00:44:45.039988 ignition[817]: failed to fetch config: resource requires networking
May 10 00:44:45.040207 ignition[817]: Ignition finished successfully
May 10 00:44:45.052634 systemd[1]: Finished ignition-fetch-offline.service.
May 10 00:44:45.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.055845 systemd[1]: Starting ignition-fetch.service...
May 10 00:44:45.065461 ignition[836]: Ignition 2.14.0
May 10 00:44:45.065817 ignition[836]: Stage: fetch
May 10 00:44:45.065937 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:44:45.065968 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 10 00:44:45.070053 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 10 00:44:45.071476 ignition[836]: parsed url from cmdline: ""
May 10 00:44:45.071480 ignition[836]: no config URL provided
May 10 00:44:45.071490 ignition[836]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:44:45.071507 ignition[836]: no config at "/usr/lib/ignition/user.ign"
May 10 00:44:45.071556 ignition[836]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 10 00:44:45.174423 ignition[836]: GET result: OK
May 10 00:44:45.174536 ignition[836]: config has been read from IMDS userdata
May 10 00:44:45.174576 ignition[836]: parsing config with SHA512: 39b19e2348c9f292dae9497eb10fc9d70361b897c448c620e800e45da0e3518614a7e5440e63a17aa97ad899241270ccf4274063fce0171c51520304306cc774
May 10 00:44:45.179258 unknown[836]: fetched base config from "system"
May 10 00:44:45.179270 unknown[836]: fetched base config from "system"
May 10 00:44:45.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.179979 ignition[836]: fetch: fetch complete
May 10 00:44:45.204696 kernel: kauditd_printk_skb: 19 callbacks suppressed
May 10 00:44:45.204726 kernel: audit: type=1130 audit(1746837885.183:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.179279 unknown[836]: fetched user config from "azure"
May 10 00:44:45.179985 ignition[836]: fetch: fetch passed
May 10 00:44:45.181382 systemd[1]: Finished ignition-fetch.service.
May 10 00:44:45.180030 ignition[836]: Ignition finished successfully
May 10 00:44:45.184896 systemd[1]: Starting ignition-kargs.service...
May 10 00:44:45.222139 ignition[842]: Ignition 2.14.0
May 10 00:44:45.222150 ignition[842]: Stage: kargs
May 10 00:44:45.222286 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:44:45.222319 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 10 00:44:45.231887 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 10 00:44:45.236659 ignition[842]: kargs: kargs passed
May 10 00:44:45.236713 ignition[842]: Ignition finished successfully
May 10 00:44:45.240863 systemd[1]: Finished ignition-kargs.service.
May 10 00:44:45.243884 systemd[1]: Starting ignition-disks.service...
May 10 00:44:45.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.258998 ignition[848]: Ignition 2.14.0
May 10 00:44:45.266944 kernel: audit: type=1130 audit(1746837885.242:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.259008 ignition[848]: Stage: disks
May 10 00:44:45.259137 ignition[848]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:44:45.259162 ignition[848]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 10 00:44:45.272095 systemd[1]: Finished ignition-disks.service.
May 10 00:44:45.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.266018 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 10 00:44:45.293820 kernel: audit: type=1130 audit(1746837885.274:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.274254 systemd[1]: Reached target initrd-root-device.target.
May 10 00:44:45.268232 ignition[848]: disks: disks passed
May 10 00:44:45.288576 systemd[1]: Reached target local-fs-pre.target.
May 10 00:44:45.268292 ignition[848]: Ignition finished successfully
May 10 00:44:45.293786 systemd[1]: Reached target local-fs.target.
May 10 00:44:45.296064 systemd[1]: Reached target sysinit.target.
May 10 00:44:45.300416 systemd[1]: Reached target basic.target.
May 10 00:44:45.303518 systemd[1]: Starting systemd-fsck-root.service...
May 10 00:44:45.329691 systemd-fsck[856]: ROOT: clean, 623/7326000 files, 481079/7359488 blocks
May 10 00:44:45.335001 systemd[1]: Finished systemd-fsck-root.service.
May 10 00:44:45.354057 kernel: audit: type=1130 audit(1746837885.336:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:44:45.352169 systemd[1]: Mounting sysroot.mount...
May 10 00:44:45.378830 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 10 00:44:45.374902 systemd[1]: Mounted sysroot.mount.
May 10 00:44:45.377120 systemd[1]: Reached target initrd-root-fs.target.
May 10 00:44:45.389421 systemd[1]: Mounting sysroot-usr.mount...
May 10 00:44:45.394587 systemd[1]: Starting flatcar-metadata-hostname.service...
May 10 00:44:45.399161 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 00:44:45.399280 systemd[1]: Reached target ignition-diskful.target.
May 10 00:44:45.409519 systemd[1]: Mounted sysroot-usr.mount.
May 10 00:44:45.425412 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 00:44:45.431290 systemd[1]: Starting initrd-setup-root.service...
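
Ignition's fetch stage above pulled the user config from the Azure Instance Metadata Service at 169.254.169.254. A hypothetical Python sketch of the same request (the URL is taken verbatim from the log; Azure IMDS requires the Metadata: true header, and userData is returned base64-encoded):

    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())   # base64-encoded userData blob
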
May 10 00:44:45.442031 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (866) May 10 00:44:45.442137 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:44:45.452476 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:44:45.452509 kernel: BTRFS info (device sda6): using free space tree May 10 00:44:45.452524 kernel: BTRFS info (device sda6): has skinny extents May 10 00:44:45.461327 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 00:44:45.465701 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory May 10 00:44:45.476988 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:44:45.482183 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:44:45.620824 systemd[1]: Finished initrd-setup-root.service. May 10 00:44:45.639633 kernel: audit: type=1130 audit(1746837885.623:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.636909 systemd[1]: Starting ignition-mount.service... May 10 00:44:45.645483 systemd[1]: Starting sysroot-boot.service... May 10 00:44:45.649531 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 10 00:44:45.649634 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 10 00:44:45.675085 systemd[1]: Finished sysroot-boot.service. May 10 00:44:45.696975 kernel: audit: type=1130 audit(1746837885.681:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.697059 ignition[934]: INFO : Ignition 2.14.0 May 10 00:44:45.697059 ignition[934]: INFO : Stage: mount May 10 00:44:45.697059 ignition[934]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:44:45.697059 ignition[934]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 10 00:44:45.697059 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 10 00:44:45.697059 ignition[934]: INFO : mount: mount passed May 10 00:44:45.697059 ignition[934]: INFO : Ignition finished successfully May 10 00:44:45.730540 kernel: audit: type=1130 audit(1746837885.703:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.700471 systemd[1]: Finished ignition-mount.service. 
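Every "parsing config with SHA512: ..." line above is the SHA-512 digest of the raw config bytes, so a log line can be matched to a file on disk by recomputing it. A short sketch for the base config path named in the log:

    import hashlib

    # Path taken from the "reading system config file" lines above.
    with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    # Should print 4824fd4a4e57848d... for the base config referenced in the log.
    print(digest)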
May 10 00:44:45.825054 systemd-networkd[810]: eth0: Gained IPv6LL May 10 00:44:45.875394 coreos-metadata[865]: May 10 00:44:45.875 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 10 00:44:45.884095 coreos-metadata[865]: May 10 00:44:45.884 INFO Fetch successful May 10 00:44:45.917394 coreos-metadata[865]: May 10 00:44:45.917 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 10 00:44:45.931867 coreos-metadata[865]: May 10 00:44:45.931 INFO Fetch successful May 10 00:44:45.938734 coreos-metadata[865]: May 10 00:44:45.938 INFO wrote hostname ci-3510.3.7-n-8a4b3429d2 to /sysroot/etc/hostname May 10 00:44:45.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.940463 systemd[1]: Finished flatcar-metadata-hostname.service. May 10 00:44:45.964250 kernel: audit: type=1130 audit(1746837885.944:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:45.946373 systemd[1]: Starting ignition-files.service... May 10 00:44:45.967588 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:44:45.993551 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (944) May 10 00:44:45.993596 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:44:45.993608 kernel: BTRFS info (device sda6): using free space tree May 10 00:44:46.001606 kernel: BTRFS info (device sda6): has skinny extents May 10 00:44:46.007376 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
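The coreos-metadata lines above fetch the VM name from IMDS and write it into the new root at /sysroot/etc/hostname. A rough sketch of those two steps; the endpoint and output path are copied from the log, while the "Metadata: true" header is an assumption about IMDS that the log itself does not show:

    import urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/name"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},  # assumed IMDS requirement
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        name = resp.read().decode().strip()

    # Mirrors "wrote hostname ci-3510.3.7-n-8a4b3429d2 to /sysroot/etc/hostname".
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(name + "\n")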
May 10 00:44:46.023772 ignition[963]: INFO : Ignition 2.14.0 May 10 00:44:46.023772 ignition[963]: INFO : Stage: files May 10 00:44:46.028243 ignition[963]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:44:46.028243 ignition[963]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 10 00:44:46.041733 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 10 00:44:46.079820 ignition[963]: DEBUG : files: compiled without relabeling support, skipping May 10 00:44:46.083769 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:44:46.083769 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:44:46.097603 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:44:46.101390 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:44:46.101390 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:44:46.101237 unknown[963]: wrote ssh authorized keys file for user: core May 10 00:44:46.114530 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:44:46.114530 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 00:44:46.437716 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 00:44:47.470045 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:44:47.476211 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:44:47.476211 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 00:44:47.975416 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:44:48.119207 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 10 00:44:48.126919 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800340198" May 10 00:44:48.196326 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800340198": device or resource busy May 10 00:44:48.196326 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3800340198", trying btrfs: device or resource busy May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800340198" May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800340198" May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3800340198" May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3800340198" May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66777574" May 10 00:44:48.196326 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66777574": device or resource busy May 10 00:44:48.196326 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem66777574", trying btrfs: device or resource 
busy May 10 00:44:48.196326 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66777574" May 10 00:44:48.139709 systemd[1]: mnt-oem3800340198.mount: Deactivated successfully. May 10 00:44:48.264371 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem66777574" May 10 00:44:48.264371 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem66777574" May 10 00:44:48.264371 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem66777574" May 10 00:44:48.264371 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 10 00:44:48.264371 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:44:48.264371 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 10 00:44:48.161181 systemd[1]: mnt-oem66777574.mount: Deactivated successfully. May 10 00:44:48.620729 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK May 10 00:44:48.960593 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:44:48.960593 ignition[963]: INFO : files: op(14): [started] processing unit "nvidia.service" May 10 00:44:48.960593 ignition[963]: INFO : files: op(14): [finished] processing unit "nvidia.service" May 10 00:44:48.960593 ignition[963]: INFO : files: op(15): [started] processing unit "waagent.service" May 10 00:44:48.960593 ignition[963]: INFO : files: op(15): [finished] processing unit "waagent.service" May 10 00:44:48.997859 kernel: audit: type=1130 audit(1746837888.974:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:48.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:48.997969 ignition[963]: INFO : files: op(16): [started] processing unit "prepare-helm.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" May 10 00:44:48.997969 ignition[963]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" May 10 00:44:48.997969 ignition[963]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:44:48.997969 ignition[963]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:44:48.997969 ignition[963]: INFO : files: files passed May 10 00:44:48.997969 ignition[963]: INFO : Ignition finished successfully May 10 00:44:48.972792 systemd[1]: Finished ignition-files.service. May 10 00:44:48.976786 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 10 00:44:49.019436 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:44:48.994708 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 10 00:44:49.014420 systemd[1]: Starting ignition-quench.service... May 10 00:44:49.061893 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 00:44:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.064610 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:44:49.084005 kernel: audit: type=1130 audit(1746837889.063:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.064694 systemd[1]: Finished ignition-quench.service. May 10 00:44:49.080027 systemd[1]: Reached target ignition-complete.target. 
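The files stage above is driven by a user-provided Ignition config. A hypothetical fragment that would produce roughly the logged operations, built as a Python dict and serialized to JSON; the field names follow Ignition's v3 config spec from memory, the spec version is an assumption (the log does not show it), and the unit contents are elided:

    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version, not shown in the log
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf"},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "nvidia.service", "enabled": True},
                {"name": "waagent.service", "enabled": True},
                {"name": "prepare-helm.service", "enabled": True, "contents": "..."},
            ],
        },
    }
    print(json.dumps(config, indent=2))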
May 10 00:44:49.084843 systemd[1]: Starting initrd-parse-etc.service... May 10 00:44:49.102356 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:44:49.102466 systemd[1]: Finished initrd-parse-etc.service. May 10 00:44:49.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.107385 systemd[1]: Reached target initrd-fs.target. May 10 00:44:49.111512 systemd[1]: Reached target initrd.target. May 10 00:44:49.113458 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 00:44:49.114294 systemd[1]: Starting dracut-pre-pivot.service... May 10 00:44:49.129099 systemd[1]: Finished dracut-pre-pivot.service. May 10 00:44:49.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.129961 systemd[1]: Starting initrd-cleanup.service... May 10 00:44:49.140504 systemd[1]: Stopped target nss-lookup.target. May 10 00:44:49.144779 systemd[1]: Stopped target remote-cryptsetup.target. May 10 00:44:49.149452 systemd[1]: Stopped target timers.target. May 10 00:44:49.153776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:44:49.156191 systemd[1]: Stopped dracut-pre-pivot.service. May 10 00:44:49.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.160454 systemd[1]: Stopped target initrd.target. May 10 00:44:49.165009 systemd[1]: Stopped target basic.target. May 10 00:44:49.169553 systemd[1]: Stopped target ignition-complete.target. May 10 00:44:49.171977 systemd[1]: Stopped target ignition-diskful.target. May 10 00:44:49.176452 systemd[1]: Stopped target initrd-root-device.target. May 10 00:44:49.181110 systemd[1]: Stopped target remote-fs.target. May 10 00:44:49.185745 systemd[1]: Stopped target remote-fs-pre.target. May 10 00:44:49.189985 systemd[1]: Stopped target sysinit.target. May 10 00:44:49.194397 systemd[1]: Stopped target local-fs.target. May 10 00:44:49.198480 systemd[1]: Stopped target local-fs-pre.target. May 10 00:44:49.202683 systemd[1]: Stopped target swap.target. May 10 00:44:49.206648 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:44:49.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.206800 systemd[1]: Stopped dracut-pre-mount.service. May 10 00:44:49.210834 systemd[1]: Stopped target cryptsetup.target. May 10 00:44:49.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.216211 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
May 10 00:44:49.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.216370 systemd[1]: Stopped dracut-initqueue.service. May 10 00:44:49.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.221050 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:44:49.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.221186 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 10 00:44:49.253229 ignition[1001]: INFO : Ignition 2.14.0 May 10 00:44:49.253229 ignition[1001]: INFO : Stage: umount May 10 00:44:49.253229 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:44:49.253229 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 10 00:44:49.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.272349 iscsid[816]: iscsid shutting down. May 10 00:44:49.226345 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:44:49.273131 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 10 00:44:49.273131 ignition[1001]: INFO : umount: umount passed May 10 00:44:49.273131 ignition[1001]: INFO : Ignition finished successfully May 10 00:44:49.226472 systemd[1]: Stopped ignition-files.service. May 10 00:44:49.231523 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 10 00:44:49.231652 systemd[1]: Stopped flatcar-metadata-hostname.service. May 10 00:44:49.237264 systemd[1]: Stopping ignition-mount.service... May 10 00:44:49.251164 systemd[1]: Stopping iscsid.service... May 10 00:44:49.255645 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:44:49.255865 systemd[1]: Stopped kmod-static-nodes.service. May 10 00:44:49.265452 systemd[1]: Stopping sysroot-boot.service... May 10 00:44:49.302376 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:44:49.302847 systemd[1]: Stopped systemd-udev-trigger.service. May 10 00:44:49.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.310511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:44:49.310840 systemd[1]: Stopped dracut-pre-trigger.service. May 10 00:44:49.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.321045 systemd[1]: iscsid.service: Deactivated successfully. May 10 00:44:49.321296 systemd[1]: Stopped iscsid.service. 
May 10 00:44:49.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.327908 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:44:49.328113 systemd[1]: Stopped ignition-mount.service. May 10 00:44:49.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.336172 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:44:49.336388 systemd[1]: Finished initrd-cleanup.service. May 10 00:44:49.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.344185 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:44:49.344337 systemd[1]: Stopped ignition-disks.service. May 10 00:44:49.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.351631 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:44:49.351691 systemd[1]: Stopped ignition-kargs.service. May 10 00:44:49.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.358851 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:44:49.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.358911 systemd[1]: Stopped ignition-fetch.service. May 10 00:44:49.364490 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:44:49.364539 systemd[1]: Stopped ignition-fetch-offline.service. May 10 00:44:49.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.377543 systemd[1]: Stopped target paths.target. May 10 00:44:49.379700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:44:49.384860 systemd[1]: Stopped systemd-ask-password-console.path. May 10 00:44:49.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.387480 systemd[1]: Stopped target slices.target. May 10 00:44:49.389503 systemd[1]: Stopped target sockets.target. May 10 00:44:49.391668 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:44:49.391722 systemd[1]: Closed iscsid.socket. May 10 00:44:49.393598 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:44:49.393652 systemd[1]: Stopped ignition-setup.service. 
May 10 00:44:49.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.406914 systemd[1]: Stopping iscsiuio.service... May 10 00:44:49.411043 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:44:49.411496 systemd[1]: iscsiuio.service: Deactivated successfully. May 10 00:44:49.411608 systemd[1]: Stopped iscsiuio.service. May 10 00:44:49.414652 systemd[1]: Stopped target network.target. May 10 00:44:49.428155 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:44:49.428205 systemd[1]: Closed iscsiuio.socket. May 10 00:44:49.431372 systemd[1]: Stopping systemd-networkd.service... May 10 00:44:49.437432 systemd[1]: Stopping systemd-resolved.service... May 10 00:44:49.447001 systemd-networkd[810]: eth0: DHCPv6 lease lost May 10 00:44:49.447368 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:44:49.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.447481 systemd[1]: Stopped systemd-resolved.service. May 10 00:44:49.457155 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:44:49.459901 systemd[1]: Stopped systemd-networkd.service. May 10 00:44:49.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.464000 audit: BPF prog-id=6 op=UNLOAD May 10 00:44:49.464000 audit: BPF prog-id=9 op=UNLOAD May 10 00:44:49.464965 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:44:49.465014 systemd[1]: Closed systemd-networkd.socket. May 10 00:44:49.472581 systemd[1]: Stopping network-cleanup.service... May 10 00:44:49.476800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:44:49.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.476895 systemd[1]: Stopped parse-ip-for-networkd.service. May 10 00:44:49.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.482511 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:44:49.482573 systemd[1]: Stopped systemd-sysctl.service. May 10 00:44:49.487043 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:44:49.487090 systemd[1]: Stopped systemd-modules-load.service. May 10 00:44:49.489761 systemd[1]: Stopping systemd-udevd.service... May 10 00:44:49.504787 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 00:44:49.505409 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:44:49.505532 systemd[1]: Stopped systemd-udevd.service. 
May 10 00:44:49.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.516996 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:44:49.517089 systemd[1]: Closed systemd-udevd-control.socket. May 10 00:44:49.519874 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:44:49.522625 systemd[1]: Closed systemd-udevd-kernel.socket. May 10 00:44:49.532530 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:44:49.532607 systemd[1]: Stopped dracut-pre-udev.service. May 10 00:44:49.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.539467 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:44:49.539528 systemd[1]: Stopped dracut-cmdline.service. May 10 00:44:49.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.546238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:44:49.546297 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 00:44:49.557821 kernel: hv_netvsc 7ced8d2e-95df-7ced-8d2e-95df7ced8d2e eth0: Data path switched from VF: enP19105s1 May 10 00:44:49.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.559063 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 00:44:49.564883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:44:49.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.564975 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 00:44:49.573201 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:44:49.576273 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 00:44:49.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:49.585895 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:44:49.588542 systemd[1]: Stopped network-cleanup.service. May 10 00:44:49.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:50.117547 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:44:50.117688 systemd[1]: Stopped sysroot-boot.service. 
May 10 00:44:50.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:50.122404 systemd[1]: Reached target initrd-switch-root.target. May 10 00:44:50.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:50.126676 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:44:50.126744 systemd[1]: Stopped initrd-setup-root.service. May 10 00:44:50.132373 systemd[1]: Starting initrd-switch-root.service... May 10 00:44:50.149369 systemd[1]: Switching root. May 10 00:44:50.170581 systemd-journald[183]: Journal stopped May 10 00:44:55.498575 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). May 10 00:44:55.498608 kernel: SELinux: Class mctp_socket not defined in policy. May 10 00:44:55.498622 kernel: SELinux: Class anon_inode not defined in policy. May 10 00:44:55.498632 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 00:44:55.498641 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:44:55.498649 kernel: SELinux: policy capability open_perms=1 May 10 00:44:55.498663 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:44:55.498674 kernel: SELinux: policy capability always_check_network=0 May 10 00:44:55.498683 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:44:55.498691 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:44:55.498701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:44:55.498711 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:44:55.498720 kernel: kauditd_printk_skb: 41 callbacks suppressed May 10 00:44:55.498729 kernel: audit: type=1403 audit(1746837890.894:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:44:55.498744 systemd[1]: Successfully loaded SELinux policy in 170.745ms. May 10 00:44:55.498757 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.664ms. May 10 00:44:55.498768 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:44:55.498778 systemd[1]: Detected virtualization microsoft. May 10 00:44:55.498791 systemd[1]: Detected architecture x86-64. May 10 00:44:55.498812 systemd[1]: Detected first boot. May 10 00:44:55.498831 systemd[1]: Hostname set to <ci-3510.3.7-n-8a4b3429d2>. May 10 00:44:55.498840 systemd[1]: Initializing machine ID from random generator.
May 10 00:44:55.498850 kernel: audit: type=1400 audit(1746837891.126:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:44:55.498862 kernel: audit: type=1400 audit(1746837891.144:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:44:55.498871 kernel: audit: type=1400 audit(1746837891.144:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:44:55.498884 kernel: audit: type=1334 audit(1746837891.157:85): prog-id=10 op=LOAD May 10 00:44:55.498892 kernel: audit: type=1334 audit(1746837891.157:86): prog-id=10 op=UNLOAD May 10 00:44:55.498903 kernel: audit: type=1334 audit(1746837891.171:87): prog-id=11 op=LOAD May 10 00:44:55.498912 kernel: audit: type=1334 audit(1746837891.171:88): prog-id=11 op=UNLOAD May 10 00:44:55.498921 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 00:44:55.498933 kernel: audit: type=1400 audit(1746837891.613:89): avc: denied { associate } for pid=1034 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 10 00:44:55.498944 kernel: audit: type=1300 audit(1746837891.613:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:55.498957 systemd[1]: Populated /etc with preset unit settings. May 10 00:44:55.498967 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:55.498978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:55.498992 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:44:55.499001 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:44:55.499013 systemd[1]: Stopped initrd-switch-root.service. May 10 00:44:55.499025 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:44:55.499037 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 00:44:55.499051 systemd[1]: Created slice system-addon\x2drun.slice. May 10 00:44:55.499062 systemd[1]: Created slice system-getty.slice. May 10 00:44:55.499075 systemd[1]: Created slice system-modprobe.slice. May 10 00:44:55.499084 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 00:44:55.499097 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 10 00:44:55.499109 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 00:44:55.499119 systemd[1]: Created slice user.slice. 
May 10 00:44:55.499130 systemd[1]: Started systemd-ask-password-console.path. May 10 00:44:55.499142 systemd[1]: Started systemd-ask-password-wall.path. May 10 00:44:55.499155 systemd[1]: Set up automount boot.automount. May 10 00:44:55.499164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 10 00:44:55.499177 systemd[1]: Stopped target initrd-switch-root.target. May 10 00:44:55.499190 systemd[1]: Stopped target initrd-fs.target. May 10 00:44:55.499199 systemd[1]: Stopped target initrd-root-fs.target. May 10 00:44:55.499209 systemd[1]: Reached target integritysetup.target. May 10 00:44:55.499221 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:44:55.499233 systemd[1]: Reached target remote-fs.target. May 10 00:44:55.499245 systemd[1]: Reached target slices.target. May 10 00:44:55.499255 systemd[1]: Reached target swap.target. May 10 00:44:55.499267 systemd[1]: Reached target torcx.target. May 10 00:44:55.499279 systemd[1]: Reached target veritysetup.target. May 10 00:44:55.499289 systemd[1]: Listening on systemd-coredump.socket. May 10 00:44:55.499298 systemd[1]: Listening on systemd-initctl.socket. May 10 00:44:55.499311 systemd[1]: Listening on systemd-networkd.socket. May 10 00:44:55.499326 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:44:55.499337 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:44:55.499354 systemd[1]: Listening on systemd-userdbd.socket. May 10 00:44:55.499371 systemd[1]: Mounting dev-hugepages.mount... May 10 00:44:55.499389 systemd[1]: Mounting dev-mqueue.mount... May 10 00:44:55.499410 systemd[1]: Mounting media.mount... May 10 00:44:55.499431 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:55.499451 systemd[1]: Mounting sys-kernel-debug.mount... May 10 00:44:55.499469 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 00:44:55.499487 systemd[1]: Mounting tmp.mount... May 10 00:44:55.499505 systemd[1]: Starting flatcar-tmpfiles.service... May 10 00:44:55.499525 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:44:55.499542 systemd[1]: Starting kmod-static-nodes.service... May 10 00:44:55.499563 systemd[1]: Starting modprobe@configfs.service... May 10 00:44:55.499581 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:44:55.499600 systemd[1]: Starting modprobe@drm.service... May 10 00:44:55.499617 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:44:55.499635 systemd[1]: Starting modprobe@fuse.service... May 10 00:44:55.499656 systemd[1]: Starting modprobe@loop.service... May 10 00:44:55.499676 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:44:55.499693 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:44:55.499711 systemd[1]: Stopped systemd-fsck-root.service. May 10 00:44:55.499733 kernel: loop: module loaded May 10 00:44:55.499749 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:44:55.499768 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:44:55.499786 kernel: fuse: init (API version 7.34) May 10 00:44:55.499811 systemd[1]: Stopped systemd-journald.service. May 10 00:44:55.499829 systemd[1]: Starting systemd-journald.service... May 10 00:44:55.499849 systemd[1]: Starting systemd-modules-load.service... May 10 00:44:55.499867 systemd[1]: Starting systemd-network-generator.service... 
May 10 00:44:55.499886 systemd[1]: Starting systemd-remount-fs.service... May 10 00:44:55.499907 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:44:55.499925 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:44:55.499948 systemd-journald[1144]: Journal started May 10 00:44:55.500021 systemd-journald[1144]: Runtime Journal (/run/log/journal/f1507d09d80646f79d3ad81f1c9ff0d0) is 8.0M, max 159.0M, 151.0M free. May 10 00:44:50.894000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:44:51.126000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:44:51.144000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:44:51.144000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 10 00:44:51.157000 audit: BPF prog-id=10 op=LOAD May 10 00:44:51.157000 audit: BPF prog-id=10 op=UNLOAD May 10 00:44:51.171000 audit: BPF prog-id=11 op=LOAD May 10 00:44:51.171000 audit: BPF prog-id=11 op=UNLOAD May 10 00:44:51.613000 audit[1034]: AVC avc: denied { associate } for pid=1034 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 10 00:44:51.613000 audit[1034]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:51.613000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:44:51.619000 audit[1034]: AVC avc: denied { associate } for pid=1034 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 10 00:44:51.619000 audit[1034]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:51.619000 audit: CWD cwd="/" May 10 00:44:51.619000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:51.619000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:51.619000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 10 00:44:55.022000 audit: BPF prog-id=12 op=LOAD May 10 00:44:55.022000 audit: BPF prog-id=3 op=UNLOAD May 10 00:44:55.022000 audit: BPF prog-id=13 op=LOAD May 10 00:44:55.022000 audit: BPF prog-id=14 op=LOAD May 10 00:44:55.022000 audit: BPF prog-id=4 op=UNLOAD May 10 00:44:55.022000 audit: BPF prog-id=5 op=UNLOAD May 10 00:44:55.023000 audit: BPF prog-id=15 op=LOAD May 10 00:44:55.023000 audit: BPF prog-id=12 op=UNLOAD May 10 00:44:55.023000 audit: BPF prog-id=16 op=LOAD May 10 00:44:55.023000 audit: BPF prog-id=17 op=LOAD May 10 00:44:55.023000 audit: BPF prog-id=13 op=UNLOAD May 10 00:44:55.023000 audit: BPF prog-id=14 op=UNLOAD May 10 00:44:55.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.038000 audit: BPF prog-id=15 op=UNLOAD May 10 00:44:55.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.445000 audit: BPF prog-id=18 op=LOAD May 10 00:44:55.505555 systemd[1]: Stopped verity-setup.service. 
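The audit PROCTITLE records above carry the torcx-generator command line hex-encoded, with NUL bytes separating arguments (the record itself is truncated by the audit subsystem). Decoding the string exactly as logged:

    raw = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
           "2F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F7200"
           "2F72756E2F73797374656D642F67656E657261746F722E6561726C7900"
           "2F72756E2F73797374656D642F67656E657261746F722E6C61")
    argv = [a.decode() for a in bytes.fromhex(raw).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator',
    #  '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   <- "generator.late", cut short in the record

In the accompanying SYSCALL records, arch=c000003e is AUDIT_ARCH_X86_64, and on x86-64 syscall=188 is setxattr and syscall=258 is mkdirat, consistent with the generator creating and labeling entries on its tmpfs.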
May 10 00:44:55.445000 audit: BPF prog-id=19 op=LOAD May 10 00:44:55.445000 audit: BPF prog-id=20 op=LOAD May 10 00:44:55.445000 audit: BPF prog-id=16 op=UNLOAD May 10 00:44:55.445000 audit: BPF prog-id=17 op=UNLOAD May 10 00:44:55.493000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 00:44:55.493000 audit[1144]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd9bee23f0 a2=4000 a3=7ffd9bee248c items=0 ppid=1 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:55.493000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 10 00:44:55.021410 systemd[1]: Queued start job for default target multi-user.target. May 10 00:44:51.600596 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:55.021423 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 10 00:44:51.604836 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:44:55.024711 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 00:44:51.604866 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:44:51.604904 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 10 00:44:55.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:51.604919 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="skipped missing lower profile" missing profile=oem May 10 00:44:51.604966 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 10 00:44:51.604982 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 10 00:44:51.605196 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 10 00:44:51.605256 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 10 00:44:51.605274 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 10 00:44:51.609476 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 10 00:44:51.609516 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 10 00:44:51.609537 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 10 00:44:51.609552 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 10 00:44:51.609571 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 10 00:44:51.609585 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 10 00:44:54.467747 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:54.468009 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:54.468107 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:54.468273 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:44:54.468319 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 10 00:44:54.468373 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2025-05-10T00:44:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 10 00:44:55.515936 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:55.523149 systemd[1]: Started systemd-journald.service. May 10 00:44:55.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.523938 systemd[1]: Mounted dev-hugepages.mount. May 10 00:44:55.526605 systemd[1]: Mounted dev-mqueue.mount. May 10 00:44:55.528969 systemd[1]: Mounted media.mount. May 10 00:44:55.531383 systemd[1]: Mounted sys-kernel-debug.mount. May 10 00:44:55.533950 systemd[1]: Mounted sys-kernel-tracing.mount. May 10 00:44:55.536526 systemd[1]: Mounted tmp.mount. May 10 00:44:55.539492 systemd[1]: Finished flatcar-tmpfiles.service. May 10 00:44:55.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.542112 systemd[1]: Finished kmod-static-nodes.service. May 10 00:44:55.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.544672 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:44:55.544897 systemd[1]: Finished modprobe@configfs.service. May 10 00:44:55.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.547489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:44:55.547682 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:44:55.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 10 00:44:55.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.550238 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:44:55.550437 systemd[1]: Finished modprobe@drm.service. May 10 00:44:55.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.552791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:44:55.553084 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:44:55.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.555493 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:44:55.555674 systemd[1]: Finished modprobe@fuse.service. May 10 00:44:55.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.558176 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:44:55.558384 systemd[1]: Finished modprobe@loop.service. May 10 00:44:55.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.560944 systemd[1]: Finished systemd-modules-load.service. May 10 00:44:55.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.563744 systemd[1]: Finished systemd-network-generator.service. May 10 00:44:55.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.566678 systemd[1]: Finished systemd-remount-fs.service. 
May 10 00:44:55.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.569777 systemd[1]: Reached target network-pre.target. May 10 00:44:55.573829 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 00:44:55.578339 systemd[1]: Mounting sys-kernel-config.mount... May 10 00:44:55.580951 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:44:55.583063 systemd[1]: Starting systemd-hwdb-update.service... May 10 00:44:55.586434 systemd[1]: Starting systemd-journal-flush.service... May 10 00:44:55.591168 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:44:55.592779 systemd[1]: Starting systemd-random-seed.service... May 10 00:44:55.595115 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:44:55.596519 systemd[1]: Starting systemd-sysctl.service... May 10 00:44:55.599985 systemd[1]: Starting systemd-sysusers.service... May 10 00:44:55.606696 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 00:44:55.611558 systemd[1]: Mounted sys-kernel-config.mount. May 10 00:44:55.627925 systemd-journald[1144]: Time spent on flushing to /var/log/journal/f1507d09d80646f79d3ad81f1c9ff0d0 is 28.207ms for 1144 entries. May 10 00:44:55.627925 systemd-journald[1144]: System Journal (/var/log/journal/f1507d09d80646f79d3ad81f1c9ff0d0) is 8.0M, max 2.6G, 2.6G free. May 10 00:44:55.707176 systemd-journald[1144]: Received client request to flush runtime journal. May 10 00:44:55.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.620760 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:44:55.624874 systemd[1]: Starting systemd-udev-settle.service... May 10 00:44:55.708905 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:44:55.637645 systemd[1]: Finished systemd-random-seed.service. May 10 00:44:55.640257 systemd[1]: Reached target first-boot-complete.target. May 10 00:44:55.654180 systemd[1]: Finished systemd-sysctl.service. May 10 00:44:55.708316 systemd[1]: Finished systemd-journal-flush.service. May 10 00:44:55.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:55.765550 systemd[1]: Finished systemd-sysusers.service. 
May 10 00:44:55.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.295204 systemd[1]: Finished systemd-hwdb-update.service. May 10 00:44:56.315657 kernel: kauditd_printk_skb: 59 callbacks suppressed May 10 00:44:56.315788 kernel: audit: type=1130 audit(1746837896.297:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.301000 audit: BPF prog-id=21 op=LOAD May 10 00:44:56.316454 systemd[1]: Starting systemd-udevd.service... May 10 00:44:56.325319 kernel: audit: type=1334 audit(1746837896.301:142): prog-id=21 op=LOAD May 10 00:44:56.325391 kernel: audit: type=1334 audit(1746837896.314:143): prog-id=22 op=LOAD May 10 00:44:56.325412 kernel: audit: type=1334 audit(1746837896.314:144): prog-id=7 op=UNLOAD May 10 00:44:56.325431 kernel: audit: type=1334 audit(1746837896.314:145): prog-id=8 op=UNLOAD May 10 00:44:56.314000 audit: BPF prog-id=22 op=LOAD May 10 00:44:56.314000 audit: BPF prog-id=7 op=UNLOAD May 10 00:44:56.314000 audit: BPF prog-id=8 op=UNLOAD May 10 00:44:56.351731 systemd-udevd[1161]: Using default interface naming scheme 'v252'. May 10 00:44:56.410190 systemd[1]: Started systemd-udevd.service. May 10 00:44:56.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.427942 kernel: audit: type=1130 audit(1746837896.412:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.433031 systemd[1]: Starting systemd-networkd.service... May 10 00:44:56.431000 audit: BPF prog-id=23 op=LOAD May 10 00:44:56.439897 kernel: audit: type=1334 audit(1746837896.431:147): prog-id=23 op=LOAD May 10 00:44:56.463995 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 10 00:44:56.505562 kernel: audit: type=1334 audit(1746837896.492:148): prog-id=24 op=LOAD May 10 00:44:56.505677 kernel: audit: type=1334 audit(1746837896.492:149): prog-id=25 op=LOAD May 10 00:44:56.492000 audit: BPF prog-id=24 op=LOAD May 10 00:44:56.492000 audit: BPF prog-id=25 op=LOAD May 10 00:44:56.494242 systemd[1]: Starting systemd-userdbd.service... May 10 00:44:56.511845 kernel: audit: type=1334 audit(1746837896.492:150): prog-id=26 op=LOAD May 10 00:44:56.492000 audit: BPF prog-id=26 op=LOAD May 10 00:44:56.531828 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:44:56.552000 audit[1164]: AVC avc: denied { confidentiality } for pid=1164 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:44:56.559890 kernel: hv_vmbus: registering driver hv_balloon May 10 00:44:56.560407 systemd[1]: Started systemd-userdbd.service. 
May 10 00:44:56.566829 kernel: hv_vmbus: registering driver hyperv_fb May 10 00:44:56.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.604252 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 10 00:44:56.604364 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 10 00:44:56.610996 kernel: Console: switching to colour dummy device 80x25 May 10 00:44:56.621252 kernel: Console: switching to colour frame buffer device 128x48 May 10 00:44:56.632825 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 10 00:44:56.632916 kernel: hv_utils: Registering HyperV Utility Driver May 10 00:44:56.637813 kernel: hv_vmbus: registering driver hv_utils May 10 00:44:56.552000 audit[1164]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f752d64f80 a1=f884 a2=7f0c6b8a9bc5 a3=5 items=12 ppid=1161 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:56.552000 audit: CWD cwd="/" May 10 00:44:56.552000 audit: PATH item=0 name=(null) inode=1239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=1 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=2 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=3 name=(null) inode=14892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=4 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=5 name=(null) inode=14893 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=6 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=7 name=(null) inode=14894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=8 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=9 name=(null) inode=14895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=10 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PATH item=11 name=(null) inode=14896 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:44:56.552000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:44:56.708576 systemd-networkd[1182]: lo: Link UP May 10 00:44:56.708592 systemd-networkd[1182]: lo: Gained carrier May 10 00:44:56.709254 systemd-networkd[1182]: Enumeration completed May 10 00:44:56.709376 systemd[1]: Started systemd-networkd.service. May 10 00:44:56.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:56.713649 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:44:56.724921 systemd-networkd[1182]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:44:56.740259 kernel: hv_utils: Heartbeat IC version 3.0 May 10 00:44:56.740365 kernel: hv_utils: Shutdown IC version 3.2 May 10 00:44:56.742591 kernel: hv_utils: TimeSync IC version 4.0 May 10 00:44:57.566186 kernel: mlx5_core 4aa1:00:02.0 enP19105s1: Link up May 10 00:44:57.585298 kernel: hv_netvsc 7ced8d2e-95df-7ced-8d2e-95df7ced8d2e eth0: Data path switched to VF: enP19105s1 May 10 00:44:57.591330 systemd-networkd[1182]: enP19105s1: Link UP May 10 00:44:57.592031 systemd-networkd[1182]: eth0: Link UP May 10 00:44:57.592242 systemd-networkd[1182]: eth0: Gained carrier May 10 00:44:57.598497 systemd-networkd[1182]: enP19105s1: Gained carrier May 10 00:44:57.620311 systemd-networkd[1182]: eth0: DHCPv4 address 10.200.8.31/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 10 00:44:57.692000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:44:57.729201 kernel: KVM: vmx: using Hyper-V Enlightened VMCS May 10 00:44:57.758558 systemd[1]: Finished systemd-udev-settle.service. May 10 00:44:57.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:57.762646 systemd[1]: Starting lvm2-activation-early.service... May 10 00:44:57.880364 lvm[1239]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:44:57.909255 systemd[1]: Finished lvm2-activation-early.service. May 10 00:44:57.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:57.912461 systemd[1]: Reached target cryptsetup.target. May 10 00:44:57.916205 systemd[1]: Starting lvm2-activation.service... May 10 00:44:57.920889 lvm[1240]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:44:57.947240 systemd[1]: Finished lvm2-activation.service. May 10 00:44:57.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:57.950140 systemd[1]: Reached target local-fs-pre.target. 
May 10 00:44:57.952771 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:44:57.952813 systemd[1]: Reached target local-fs.target. May 10 00:44:57.955084 systemd[1]: Reached target machines.target. May 10 00:44:57.958715 systemd[1]: Starting ldconfig.service... May 10 00:44:57.961124 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:44:57.961238 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:44:57.962505 systemd[1]: Starting systemd-boot-update.service... May 10 00:44:57.965902 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:44:57.970273 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:44:57.974051 systemd[1]: Starting systemd-sysext.service... May 10 00:44:57.982207 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1242 (bootctl) May 10 00:44:57.983698 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:44:58.000530 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:44:58.180705 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:44:58.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.380469 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:44:58.381068 systemd[1]: Unmounted usr-share-oem.mount. May 10 00:44:58.401182 kernel: loop0: detected capacity change from 0 to 205544 May 10 00:44:58.484183 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:44:58.503183 kernel: loop1: detected capacity change from 0 to 205544 May 10 00:44:58.510608 (sd-sysext)[1254]: Using extensions 'kubernetes'. May 10 00:44:58.511076 (sd-sysext)[1254]: Merged extensions into '/usr'. May 10 00:44:58.528865 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:58.531126 systemd[1]: Mounting usr-share-oem.mount... May 10 00:44:58.532010 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:44:58.535946 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:44:58.538833 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:44:58.542356 systemd[1]: Starting modprobe@loop.service... May 10 00:44:58.542503 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:44:58.542876 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:44:58.543011 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:58.544121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:44:58.544581 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:44:58.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 10 00:44:58.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.545665 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:44:58.545800 systemd[1]: Finished modprobe@loop.service. May 10 00:44:58.546429 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:44:58.551494 systemd[1]: Mounted usr-share-oem.mount. May 10 00:44:58.553805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:44:58.553953 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:44:58.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.556713 systemd[1]: Finished systemd-sysext.service. May 10 00:44:58.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.561192 systemd[1]: Starting ensure-sysext.service... May 10 00:44:58.563314 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:44:58.564693 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:44:58.573494 systemd[1]: Reloading. May 10 00:44:58.630702 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:44:58.643874 systemd-fsck[1250]: fsck.fat 4.2 (2021-01-31) May 10 00:44:58.643874 systemd-fsck[1250]: /dev/sda1: 790 files, 120688/258078 clusters May 10 00:44:58.660713 /usr/lib/systemd/system-generators/torcx-generator[1280]: time="2025-05-10T00:44:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:44:58.660753 /usr/lib/systemd/system-generators/torcx-generator[1280]: time="2025-05-10T00:44:58Z" level=info msg="torcx already run" May 10 00:44:58.680093 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:44:58.724421 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 10 00:44:58.766243 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:44:58.766263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:44:58.783888 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:44:58.860129 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:44:58.863000 audit: BPF prog-id=27 op=LOAD May 10 00:44:58.863000 audit: BPF prog-id=24 op=UNLOAD May 10 00:44:58.864000 audit: BPF prog-id=28 op=LOAD May 10 00:44:58.864000 audit: BPF prog-id=29 op=LOAD May 10 00:44:58.864000 audit: BPF prog-id=25 op=UNLOAD May 10 00:44:58.864000 audit: BPF prog-id=26 op=UNLOAD May 10 00:44:58.867000 audit: BPF prog-id=30 op=LOAD May 10 00:44:58.867000 audit: BPF prog-id=23 op=UNLOAD May 10 00:44:58.868000 audit: BPF prog-id=31 op=LOAD May 10 00:44:58.868000 audit: BPF prog-id=18 op=UNLOAD May 10 00:44:58.868000 audit: BPF prog-id=32 op=LOAD May 10 00:44:58.868000 audit: BPF prog-id=33 op=LOAD May 10 00:44:58.868000 audit: BPF prog-id=19 op=UNLOAD May 10 00:44:58.868000 audit: BPF prog-id=20 op=UNLOAD May 10 00:44:58.869000 audit: BPF prog-id=34 op=LOAD May 10 00:44:58.869000 audit: BPF prog-id=35 op=LOAD May 10 00:44:58.869000 audit: BPF prog-id=21 op=UNLOAD May 10 00:44:58.869000 audit: BPF prog-id=22 op=UNLOAD May 10 00:44:58.875382 systemd[1]: Finished systemd-machine-id-commit.service. May 10 00:44:58.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.878368 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 00:44:58.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.889373 systemd[1]: Mounting boot.mount... May 10 00:44:58.896591 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:58.896925 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:44:58.898861 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:44:58.904028 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:44:58.907636 systemd[1]: Starting modprobe@loop.service... May 10 00:44:58.909617 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:44:58.909844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:44:58.910019 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:58.912979 systemd[1]: Mounted boot.mount. May 10 00:44:58.915759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 10 00:44:58.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.916197 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:44:58.918874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:44:58.919022 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:44:58.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.923610 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:44:58.923747 systemd[1]: Finished modprobe@loop.service. May 10 00:44:58.926827 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:44:58.926953 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:44:58.929856 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:44:58.931642 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:44:58.935723 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:44:58.939935 systemd[1]: Starting modprobe@loop.service... May 10 00:44:58.942329 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:44:58.942501 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:44:58.943669 systemd[1]: Finished systemd-boot-update.service. May 10 00:44:58.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.947201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:44:58.947361 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:44:58.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:44:58.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.950720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:44:58.950886 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:44:58.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.954310 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:44:58.954474 systemd[1]: Finished modprobe@loop.service. May 10 00:44:58.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.963262 systemd[1]: Finished ensure-sysext.service. May 10 00:44:58.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.967433 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:44:58.969146 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:44:58.974544 systemd[1]: Starting modprobe@drm.service... May 10 00:44:58.978175 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:44:58.983645 systemd[1]: Starting modprobe@loop.service... May 10 00:44:58.986473 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:44:58.986546 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:44:58.987308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:44:58.987472 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:44:58.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.990512 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:44:58.990685 systemd[1]: Finished modprobe@drm.service. May 10 00:44:58.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 10 00:44:58.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.993502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:44:58.993676 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:44:58.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.996763 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:44:58.996920 systemd[1]: Finished modprobe@loop.service. May 10 00:44:58.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:58.999434 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:44:58.999488 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:44:59.050569 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 00:44:59.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:59.055592 systemd[1]: Starting audit-rules.service... May 10 00:44:59.059731 systemd[1]: Starting clean-ca-certificates.service... May 10 00:44:59.063814 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:44:59.067000 audit: BPF prog-id=36 op=LOAD May 10 00:44:59.070600 systemd[1]: Starting systemd-resolved.service... May 10 00:44:59.072000 audit: BPF prog-id=37 op=LOAD May 10 00:44:59.075120 systemd[1]: Starting systemd-timesyncd.service... May 10 00:44:59.079413 systemd[1]: Starting systemd-update-utmp.service... May 10 00:44:59.085249 systemd[1]: Finished clean-ca-certificates.service. May 10 00:44:59.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:59.087883 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:44:59.103000 audit[1365]: SYSTEM_BOOT pid=1365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:44:59.106827 systemd[1]: Finished systemd-update-utmp.service. 
May 10 00:44:59.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:59.140813 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:44:59.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:59.191742 systemd[1]: Started systemd-timesyncd.service. May 10 00:44:59.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:44:59.194266 systemd[1]: Reached target time-set.target. May 10 00:44:59.207000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:44:59.207000 audit[1378]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcada7ec50 a2=420 a3=0 items=0 ppid=1358 pid=1378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:44:59.207000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:44:59.208699 augenrules[1378]: No rules May 10 00:44:59.209406 systemd[1]: Finished audit-rules.service. May 10 00:44:59.222523 systemd-resolved[1362]: Positive Trust Anchors: May 10 00:44:59.222774 systemd-resolved[1362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:44:59.222844 systemd-resolved[1362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:44:59.236283 systemd-timesyncd[1363]: Contacted time server 85.91.1.164:123 (0.flatcar.pool.ntp.org). May 10 00:44:59.236688 systemd-timesyncd[1363]: Initial clock synchronization to Sat 2025-05-10 00:44:59.237394 UTC. May 10 00:44:59.255433 systemd-resolved[1362]: Using system hostname 'ci-3510.3.7-n-8a4b3429d2'. May 10 00:44:59.257413 systemd[1]: Started systemd-resolved.service. May 10 00:44:59.259973 systemd[1]: Reached target network.target. May 10 00:44:59.262238 systemd[1]: Reached target nss-lookup.target. May 10 00:44:59.378551 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:59.378582 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:44:59.493372 systemd-networkd[1182]: eth0: Gained IPv6LL May 10 00:44:59.495422 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 00:44:59.498247 systemd[1]: Reached target network-online.target. May 10 00:45:00.452054 ldconfig[1241]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 10 00:45:00.465726 systemd[1]: Finished ldconfig.service. May 10 00:45:00.469275 systemd[1]: Starting systemd-update-done.service... May 10 00:45:00.477690 systemd[1]: Finished systemd-update-done.service. May 10 00:45:00.480523 systemd[1]: Reached target sysinit.target. May 10 00:45:00.482877 systemd[1]: Started motdgen.path. May 10 00:45:00.485027 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:45:00.490390 systemd[1]: Started logrotate.timer. May 10 00:45:00.492259 systemd[1]: Started mdadm.timer. May 10 00:45:00.494291 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 00:45:00.496671 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:45:00.496713 systemd[1]: Reached target paths.target. May 10 00:45:00.498813 systemd[1]: Reached target timers.target. May 10 00:45:00.501074 systemd[1]: Listening on dbus.socket. May 10 00:45:00.503943 systemd[1]: Starting docker.socket... May 10 00:45:00.512596 systemd[1]: Listening on sshd.socket. May 10 00:45:00.514907 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:45:00.515398 systemd[1]: Listening on docker.socket. May 10 00:45:00.517539 systemd[1]: Reached target sockets.target. May 10 00:45:00.519462 systemd[1]: Reached target basic.target. May 10 00:45:00.521468 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:45:00.521503 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:45:00.522531 systemd[1]: Starting containerd.service... May 10 00:45:00.525801 systemd[1]: Starting dbus.service... May 10 00:45:00.528471 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:45:00.531552 systemd[1]: Starting extend-filesystems.service... May 10 00:45:00.533763 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:45:00.535079 systemd[1]: Starting kubelet.service... May 10 00:45:00.539434 systemd[1]: Starting motdgen.service... May 10 00:45:00.542640 systemd[1]: Started nvidia.service. May 10 00:45:00.546409 systemd[1]: Starting prepare-helm.service... May 10 00:45:00.550552 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:45:00.553481 jq[1389]: false May 10 00:45:00.553947 systemd[1]: Starting sshd-keygen.service... May 10 00:45:00.559245 systemd[1]: Starting systemd-logind.service... May 10 00:45:00.561124 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:45:00.561250 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:45:00.561779 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:45:00.562586 systemd[1]: Starting update-engine.service... May 10 00:45:00.565612 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 00:45:00.570051 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 10 00:45:00.570722 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 00:45:00.581121 jq[1401]: true May 10 00:45:00.601104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:45:00.601391 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 00:45:00.609020 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:45:00.609242 systemd[1]: Finished motdgen.service. May 10 00:45:00.624538 jq[1406]: true May 10 00:45:00.642877 tar[1404]: linux-amd64/helm May 10 00:45:00.644942 extend-filesystems[1390]: Found loop1 May 10 00:45:00.647580 extend-filesystems[1390]: Found sda May 10 00:45:00.649897 extend-filesystems[1390]: Found sda1 May 10 00:45:00.660265 extend-filesystems[1390]: Found sda2 May 10 00:45:00.662348 extend-filesystems[1390]: Found sda3 May 10 00:45:00.665256 extend-filesystems[1390]: Found usr May 10 00:45:00.667409 extend-filesystems[1390]: Found sda4 May 10 00:45:00.669522 extend-filesystems[1390]: Found sda6 May 10 00:45:00.671471 extend-filesystems[1390]: Found sda7 May 10 00:45:00.673359 extend-filesystems[1390]: Found sda9 May 10 00:45:00.680370 extend-filesystems[1390]: Checking size of /dev/sda9 May 10 00:45:00.683386 dbus-daemon[1388]: [system] SELinux support is enabled May 10 00:45:00.683578 systemd[1]: Started dbus.service. May 10 00:45:00.688233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:45:00.688269 systemd[1]: Reached target system-config.target. May 10 00:45:00.690627 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:45:00.690653 systemd[1]: Reached target user-config.target. May 10 00:45:00.724564 extend-filesystems[1390]: Old size kept for /dev/sda9 May 10 00:45:00.729553 extend-filesystems[1390]: Found sr0 May 10 00:45:00.725105 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:45:00.725307 systemd[1]: Finished extend-filesystems.service. May 10 00:45:00.732720 systemd-logind[1399]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 00:45:00.740217 systemd-logind[1399]: New seat seat0. May 10 00:45:00.746926 systemd[1]: Started systemd-logind.service. May 10 00:45:00.782434 env[1415]: time="2025-05-10T00:45:00.782316987Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 00:45:00.787282 bash[1442]: Updated "/home/core/.ssh/authorized_keys" May 10 00:45:00.788085 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 10 00:45:00.893579 systemd[1]: nvidia.service: Deactivated successfully. May 10 00:45:00.910641 env[1415]: time="2025-05-10T00:45:00.910585491Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:45:00.918187 env[1415]: time="2025-05-10T00:45:00.918138756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:45:00.924815 env[1415]: time="2025-05-10T00:45:00.924772053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:45:00.924983 env[1415]: time="2025-05-10T00:45:00.924964267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:45:00.925390 env[1415]: time="2025-05-10T00:45:00.925359897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:45:00.925518 env[1415]: time="2025-05-10T00:45:00.925502507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:45:00.925604 env[1415]: time="2025-05-10T00:45:00.925587614Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 00:45:00.925695 env[1415]: time="2025-05-10T00:45:00.925680421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:45:00.925900 env[1415]: time="2025-05-10T00:45:00.925878236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:45:00.926334 env[1415]: time="2025-05-10T00:45:00.926307568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:45:00.926708 env[1415]: time="2025-05-10T00:45:00.926646393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:45:00.928147 env[1415]: time="2025-05-10T00:45:00.928121904Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:45:00.928373 env[1415]: time="2025-05-10T00:45:00.928352521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 00:45:00.928476 env[1415]: time="2025-05-10T00:45:00.928461429Z" level=info msg="metadata content store policy set" policy=shared May 10 00:45:00.942441 update_engine[1400]: I0510 00:45:00.942097 1400 main.cc:92] Flatcar Update Engine starting May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944326717Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944381821Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944403623Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944457527Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944479828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944548733Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944568235Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944587536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944606238Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944625639Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944643641Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944662542Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944797252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:45:00.946755 env[1415]: time="2025-05-10T00:45:00.944902060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945330092Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945368595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945389396Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945461602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945483203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945500205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945588311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945607513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945625014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945641315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945656916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945683618Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945830929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945853231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947428 env[1415]: time="2025-05-10T00:45:00.945870432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:45:00.947954 env[1415]: time="2025-05-10T00:45:00.945885834Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:45:00.947954 env[1415]: time="2025-05-10T00:45:00.945905735Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 00:45:00.947954 env[1415]: time="2025-05-10T00:45:00.945923736Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:45:00.947954 env[1415]: time="2025-05-10T00:45:00.945947838Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 00:45:00.947954 env[1415]: time="2025-05-10T00:45:00.945986041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 10 00:45:00.948134 env[1415]: time="2025-05-10T00:45:00.946273563Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:45:00.948134 env[1415]: time="2025-05-10T00:45:00.946357569Z" level=info msg="Connect containerd service" May 10 00:45:00.948134 env[1415]: time="2025-05-10T00:45:00.946406773Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.948698744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.949021468Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.949070272Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.952258511Z" level=info msg="containerd successfully booted in 0.177624s" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.955097023Z" level=info msg="Start subscribing containerd event" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.955177029Z" level=info msg="Start recovering state" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.955294738Z" level=info msg="Start event monitor" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.955314439Z" level=info msg="Start snapshots syncer" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.955329541Z" level=info msg="Start cni network conf syncer for default" May 10 00:45:00.959812 env[1415]: time="2025-05-10T00:45:00.955347542Z" level=info msg="Start streaming server" May 10 00:45:00.949218 systemd[1]: Started containerd.service. May 10 00:45:00.960186 systemd[1]: Started update-engine.service. May 10 00:45:00.962366 update_engine[1400]: I0510 00:45:00.960438 1400 update_check_scheduler.cc:74] Next update check in 8m0s May 10 00:45:00.964541 systemd[1]: Started locksmithd.service. May 10 00:45:01.577056 tar[1404]: linux-amd64/LICENSE May 10 00:45:01.577358 tar[1404]: linux-amd64/README.md May 10 00:45:01.584035 systemd[1]: Finished prepare-helm.service. May 10 00:45:01.964813 systemd[1]: Started kubelet.service. May 10 00:45:02.127223 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:45:02.663506 kubelet[1501]: E0510 00:45:02.663447 1501 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:45:02.665267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:45:02.665382 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:45:02.665643 systemd[1]: kubelet.service: Consumed 1.107s CPU time. May 10 00:45:04.234108 sshd_keygen[1416]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:45:04.254422 systemd[1]: Finished sshd-keygen.service. May 10 00:45:04.259081 systemd[1]: Starting issuegen.service... 
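The CRI plugin's "no network config found in /etc/cni/net.d" error above is expected on a node whose CNI add-on has not run yet: the plugin only warns at init and the conf syncer picks up files as they appear. Purely for illustration, a minimal bridge .conflist of the shape it looks for; the file name, network name, and subnet below are assumptions, since a real cluster gets this file from its CNI add-on (flannel, calico, ...), not from a hand-written file:

# Hypothetical sketch: write a minimal CNI .conflist of the kind the CRI
# plugin scans /etc/cni/net.d for. Names and subnet are made up.
import json, pathlib

conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",  # assumed network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",  # assumed pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))
print(f"wrote {path}")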
May 10 00:45:04.262728 systemd[1]: Started waagent.service. May 10 00:45:04.266254 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:45:04.266472 systemd[1]: Finished issuegen.service. May 10 00:45:04.270045 systemd[1]: Starting systemd-user-sessions.service... May 10 00:45:04.278815 systemd[1]: Finished systemd-user-sessions.service. May 10 00:45:04.282559 systemd[1]: Started getty@tty1.service. May 10 00:45:04.286481 systemd[1]: Started serial-getty@ttyS0.service. May 10 00:45:04.295541 systemd[1]: Reached target getty.target. May 10 00:45:04.297825 systemd[1]: Reached target multi-user.target. May 10 00:45:04.301891 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 00:45:04.314525 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 00:45:04.314714 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 00:45:04.317881 systemd[1]: Startup finished in 673ms (firmware) + 8.316s (loader) + 1.090s (kernel) + 9.944s (initrd) + 12.853s (userspace) = 32.878s. May 10 00:45:04.429727 login[1521]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 10 00:45:04.431098 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 10 00:45:04.445276 systemd[1]: Created slice user-500.slice. May 10 00:45:04.446738 systemd[1]: Starting user-runtime-dir@500.service... May 10 00:45:04.454018 systemd-logind[1399]: New session 1 of user core. May 10 00:45:04.457465 systemd-logind[1399]: New session 2 of user core. May 10 00:45:04.461685 systemd[1]: Finished user-runtime-dir@500.service. May 10 00:45:04.463504 systemd[1]: Starting user@500.service... May 10 00:45:04.471406 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:45:04.593936 systemd[1525]: Queued start job for default target default.target. May 10 00:45:04.595025 systemd[1525]: Reached target paths.target. May 10 00:45:04.595058 systemd[1525]: Reached target sockets.target. May 10 00:45:04.595075 systemd[1525]: Reached target timers.target. May 10 00:45:04.595090 systemd[1525]: Reached target basic.target. May 10 00:45:04.595232 systemd[1]: Started user@500.service. May 10 00:45:04.596364 systemd[1]: Started session-1.scope. May 10 00:45:04.596974 systemd[1]: Started session-2.scope. May 10 00:45:04.598695 systemd[1525]: Reached target default.target. May 10 00:45:04.598936 systemd[1525]: Startup finished in 120ms. May 10 00:45:07.134184 waagent[1516]: 2025-05-10T00:45:07.134046Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 10 00:45:07.148031 waagent[1516]: 2025-05-10T00:45:07.136449Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 10 00:45:07.148031 waagent[1516]: 2025-05-10T00:45:07.137434Z INFO Daemon Daemon Python: 3.9.16 May 10 00:45:07.148031 waagent[1516]: 2025-05-10T00:45:07.138775Z INFO Daemon Daemon Run daemon May 10 00:45:07.148031 waagent[1516]: 2025-05-10T00:45:07.140053Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 10 00:45:07.152028 waagent[1516]: 2025-05-10T00:45:07.151899Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
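The pair of "Unable to get cloud-init enabled status" messages above show how the agent decides cloud-init is disabled: it tries `systemctl is-enabled cloud-init-local.service`, falls back to the legacy `service` tool (absent here), and treats any failure as "not enabled". A minimal sketch of that probe, mirroring the logged behavior rather than waagent's actual code:

# `systemctl is-enabled` exits 0 only for an enabled unit, so any other
# outcome (including a missing unit or missing tool) reads as "not enabled".
import subprocess

def cloud_init_enabled(unit: str = "cloud-init-local.service") -> bool:
    try:
        result = subprocess.run(
            ["systemctl", "is-enabled", unit],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False  # no systemctl (or it hung): assume not enabled
    return result.returncode == 0

print("cloud-init is enabled:", cloud_init_enabled())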
May 10 00:45:07.160549 waagent[1516]: 2025-05-10T00:45:07.160430Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 10 00:45:07.165849 waagent[1516]: 2025-05-10T00:45:07.165773Z INFO Daemon Daemon cloud-init is enabled: False May 10 00:45:07.176806 waagent[1516]: 2025-05-10T00:45:07.166934Z INFO Daemon Daemon Using waagent for provisioning May 10 00:45:07.176806 waagent[1516]: 2025-05-10T00:45:07.168469Z INFO Daemon Daemon Activate resource disk May 10 00:45:07.176806 waagent[1516]: 2025-05-10T00:45:07.168968Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 10 00:45:07.177003 waagent[1516]: 2025-05-10T00:45:07.176822Z INFO Daemon Daemon Found device: None May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.178355Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.179280Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.180775Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.181982Z INFO Daemon Daemon Running default provisioning handler May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.191779Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.194937Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.196155Z INFO Daemon Daemon cloud-init is enabled: False May 10 00:45:07.210749 waagent[1516]: 2025-05-10T00:45:07.197186Z INFO Daemon Daemon Copying ovf-env.xml May 10 00:45:07.238243 waagent[1516]: 2025-05-10T00:45:07.234103Z INFO Daemon Daemon Successfully mounted dvd May 10 00:45:07.266725 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 10 00:45:07.275690 waagent[1516]: 2025-05-10T00:45:07.275559Z INFO Daemon Daemon Detect protocol endpoint May 10 00:45:07.291660 waagent[1516]: 2025-05-10T00:45:07.277078Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 10 00:45:07.291660 waagent[1516]: 2025-05-10T00:45:07.278120Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 10 00:45:07.291660 waagent[1516]: 2025-05-10T00:45:07.278963Z INFO Daemon Daemon Test for route to 168.63.129.16 May 10 00:45:07.291660 waagent[1516]: 2025-05-10T00:45:07.280170Z INFO Daemon Daemon Route to 168.63.129.16 exists May 10 00:45:07.291660 waagent[1516]: 2025-05-10T00:45:07.281006Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 10 00:45:07.315367 waagent[1516]: 2025-05-10T00:45:07.315295Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 10 00:45:07.324174 waagent[1516]: 2025-05-10T00:45:07.317219Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 10 00:45:07.324174 waagent[1516]: 2025-05-10T00:45:07.318224Z INFO Daemon Daemon Server preferred version:2015-04-05 May 10 00:45:07.633404 waagent[1516]: 2025-05-10T00:45:07.633252Z INFO Daemon Daemon Initializing goal state during protocol detection May 10 00:45:07.642909 waagent[1516]: 2025-05-10T00:45:07.642817Z INFO Daemon Daemon Forcing an update of the goal state.. May 10 00:45:07.648472 waagent[1516]: 2025-05-10T00:45:07.644597Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 10 00:45:07.718220 waagent[1516]: 2025-05-10T00:45:07.718051Z INFO Daemon Daemon Found private key matching thumbprint FDA9A69EDEA39DBDA88831DB274B4B4E8823676D May 10 00:45:07.728952 waagent[1516]: 2025-05-10T00:45:07.719581Z INFO Daemon Daemon Certificate with thumbprint F5A7CE878FEEC59629B3986A8D259261C51F7795 has no matching private key. May 10 00:45:07.728952 waagent[1516]: 2025-05-10T00:45:07.720573Z INFO Daemon Daemon Fetch goal state completed May 10 00:45:07.742668 waagent[1516]: 2025-05-10T00:45:07.742597Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: d28348f5-9f31-461c-9deb-607163fc6a08 New eTag: 14072872316938330239] May 10 00:45:07.751515 waagent[1516]: 2025-05-10T00:45:07.744311Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 10 00:45:07.753322 waagent[1516]: 2025-05-10T00:45:07.753259Z INFO Daemon Daemon Starting provisioning May 10 00:45:07.760531 waagent[1516]: 2025-05-10T00:45:07.754649Z INFO Daemon Daemon Handle ovf-env.xml. May 10 00:45:07.760531 waagent[1516]: 2025-05-10T00:45:07.755548Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-8a4b3429d2] May 10 00:45:07.764215 waagent[1516]: 2025-05-10T00:45:07.764086Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-8a4b3429d2] May 10 00:45:07.772573 waagent[1516]: 2025-05-10T00:45:07.765823Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 10 00:45:07.772573 waagent[1516]: 2025-05-10T00:45:07.766455Z INFO Daemon Daemon Primary interface is [eth0] May 10 00:45:07.779976 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 10 00:45:07.780262 systemd[1]: Stopped systemd-networkd-wait-online.service. May 10 00:45:07.780342 systemd[1]: Stopping systemd-networkd-wait-online.service... May 10 00:45:07.780669 systemd[1]: Stopping systemd-networkd.service... May 10 00:45:07.784212 systemd-networkd[1182]: eth0: DHCPv6 lease lost May 10 00:45:07.785559 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:45:07.785761 systemd[1]: Stopped systemd-networkd.service. May 10 00:45:07.788137 systemd[1]: Starting systemd-networkd.service... 
May 10 00:45:07.820764 systemd-networkd[1568]: enP19105s1: Link UP May 10 00:45:07.820775 systemd-networkd[1568]: enP19105s1: Gained carrier May 10 00:45:07.822150 systemd-networkd[1568]: eth0: Link UP May 10 00:45:07.822171 systemd-networkd[1568]: eth0: Gained carrier May 10 00:45:07.822606 systemd-networkd[1568]: lo: Link UP May 10 00:45:07.822616 systemd-networkd[1568]: lo: Gained carrier May 10 00:45:07.822932 systemd-networkd[1568]: eth0: Gained IPv6LL May 10 00:45:07.823226 systemd-networkd[1568]: Enumeration completed May 10 00:45:07.823365 systemd[1]: Started systemd-networkd.service. May 10 00:45:07.825931 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:45:07.831271 waagent[1516]: 2025-05-10T00:45:07.826625Z INFO Daemon Daemon Create user account if not exists May 10 00:45:07.833697 waagent[1516]: 2025-05-10T00:45:07.831972Z INFO Daemon Daemon User core already exists, skip useradd May 10 00:45:07.834191 systemd-networkd[1568]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:45:07.835907 waagent[1516]: 2025-05-10T00:45:07.835785Z INFO Daemon Daemon Configure sudoer May 10 00:45:07.840173 waagent[1516]: 2025-05-10T00:45:07.837493Z INFO Daemon Daemon Configure sshd May 10 00:45:07.840173 waagent[1516]: 2025-05-10T00:45:07.838022Z INFO Daemon Daemon Deploy ssh public key. May 10 00:45:07.877291 systemd-networkd[1568]: eth0: DHCPv4 address 10.200.8.31/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 10 00:45:07.880716 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 00:45:08.961176 waagent[1516]: 2025-05-10T00:45:08.961081Z INFO Daemon Daemon Provisioning complete May 10 00:45:08.976967 waagent[1516]: 2025-05-10T00:45:08.976884Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 10 00:45:08.986242 waagent[1516]: 2025-05-10T00:45:08.978462Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 10 00:45:08.986242 waagent[1516]: 2025-05-10T00:45:08.980318Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 10 00:45:09.254515 waagent[1577]: 2025-05-10T00:45:09.254403Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 10 00:45:09.255326 waagent[1577]: 2025-05-10T00:45:09.255258Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:45:09.255479 waagent[1577]: 2025-05-10T00:45:09.255426Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:45:09.266703 waagent[1577]: 2025-05-10T00:45:09.266613Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. May 10 00:45:09.266887 waagent[1577]: 2025-05-10T00:45:09.266830Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 10 00:45:09.332493 waagent[1577]: 2025-05-10T00:45:09.332361Z INFO ExtHandler ExtHandler Found private key matching thumbprint FDA9A69EDEA39DBDA88831DB274B4B4E8823676D May 10 00:45:09.332744 waagent[1577]: 2025-05-10T00:45:09.332681Z INFO ExtHandler ExtHandler Certificate with thumbprint F5A7CE878FEEC59629B3986A8D259261C51F7795 has no matching private key. 
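The thumbprints in the goal-state messages above (FDA9A6... with a matching private key, F5A7CE... without) are SHA-1 digests of each DER-encoded certificate. A sketch of how such a thumbprint is computed and how a private key can be checked against a certificate, using the third-party cryptography package; the file paths are hypothetical:

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization

def thumbprint(cert_pem: str) -> str:
    # Thumbprint = uppercase hex SHA-1 of the DER-encoded certificate.
    with open(cert_pem, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    return cert.fingerprint(hashes.SHA1()).hex().upper()

def key_matches(cert_pem: str, key_pem: str) -> bool:
    # A key "matches" a certificate when their public numbers agree.
    with open(cert_pem, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open(key_pem, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.public_key().public_numbers() == cert.public_key().public_numbers()

print(thumbprint("/var/lib/waagent/example.crt"))  # hypothetical path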
May 10 00:45:09.332986 waagent[1577]: 2025-05-10T00:45:09.332934Z INFO ExtHandler ExtHandler Fetch goal state completed May 10 00:45:09.348135 waagent[1577]: 2025-05-10T00:45:09.348067Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 4d73f30a-a019-430b-b69e-e1fdc0760974 New eTag: 14072872316938330239] May 10 00:45:09.348718 waagent[1577]: 2025-05-10T00:45:09.348656Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 10 00:45:09.386711 waagent[1577]: 2025-05-10T00:45:09.386573Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 10 00:45:09.398536 waagent[1577]: 2025-05-10T00:45:09.398436Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1577 May 10 00:45:09.402008 waagent[1577]: 2025-05-10T00:45:09.401935Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 10 00:45:09.403245 waagent[1577]: 2025-05-10T00:45:09.403186Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 10 00:45:09.435611 waagent[1577]: 2025-05-10T00:45:09.435546Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 10 00:45:09.436023 waagent[1577]: 2025-05-10T00:45:09.435958Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 10 00:45:09.444838 waagent[1577]: 2025-05-10T00:45:09.444777Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 10 00:45:09.445395 waagent[1577]: 2025-05-10T00:45:09.445331Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 10 00:45:09.446486 waagent[1577]: 2025-05-10T00:45:09.446419Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 10 00:45:09.447800 waagent[1577]: 2025-05-10T00:45:09.447740Z INFO ExtHandler ExtHandler Starting env monitor service. May 10 00:45:09.448234 waagent[1577]: 2025-05-10T00:45:09.448148Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:45:09.448880 waagent[1577]: 2025-05-10T00:45:09.448825Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:45:09.449003 waagent[1577]: 2025-05-10T00:45:09.448933Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:45:09.449103 waagent[1577]: 2025-05-10T00:45:09.449036Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 10 00:45:09.449730 waagent[1577]: 2025-05-10T00:45:09.449673Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 10 00:45:09.450660 waagent[1577]: 2025-05-10T00:45:09.450603Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 10 00:45:09.450893 waagent[1577]: 2025-05-10T00:45:09.450841Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:45:09.451368 waagent[1577]: 2025-05-10T00:45:09.451309Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
May 10 00:45:09.452120 waagent[1577]: 2025-05-10T00:45:09.452066Z INFO EnvHandler ExtHandler Configure routes May 10 00:45:09.452313 waagent[1577]: 2025-05-10T00:45:09.452246Z INFO EnvHandler ExtHandler Gateway:None May 10 00:45:09.452522 waagent[1577]: 2025-05-10T00:45:09.452466Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 10 00:45:09.452522 waagent[1577]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 10 00:45:09.452522 waagent[1577]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 10 00:45:09.452522 waagent[1577]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 10 00:45:09.452522 waagent[1577]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 10 00:45:09.452522 waagent[1577]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 10 00:45:09.452522 waagent[1577]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 10 00:45:09.452960 waagent[1577]: 2025-05-10T00:45:09.452910Z INFO EnvHandler ExtHandler Routes:None May 10 00:45:09.455954 waagent[1577]: 2025-05-10T00:45:09.455728Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 10 00:45:09.456257 waagent[1577]: 2025-05-10T00:45:09.456188Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 10 00:45:09.459579 waagent[1577]: 2025-05-10T00:45:09.459517Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 10 00:45:09.468649 waagent[1577]: 2025-05-10T00:45:09.468588Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 10 00:45:09.470816 waagent[1577]: 2025-05-10T00:45:09.470758Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 10 00:45:09.472735 waagent[1577]: 2025-05-10T00:45:09.472678Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' May 10 00:45:09.496743 waagent[1577]: 2025-05-10T00:45:09.496649Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1568' May 10 00:45:09.514768 waagent[1577]: 2025-05-10T00:45:09.514611Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
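The routing table above is /proc/net/route printed verbatim, so each address field is a little-endian 32-bit hex value: 0108C80A is the DHCP gateway 10.200.8.1, 0008C80A the on-link 10.200.8.0/24, 10813FA8 the Azure wireserver 168.63.129.16, and FEA9FEA9 the IMDS address 169.254.169.254. The "invalid literal for int() with base 10: 'MainPID=1568'" error just above is also a parsing slip: the whole `MainPID=1568` line, the format `systemctl show -p MainPID` emits, was handed to int(). A short decoding sketch:

import socket, struct

def hex_route_addr(field: str) -> str:
    # /proc/net/route stores addresses little-endian; unpack accordingly.
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

for field in ("0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
    print(field, "->", hex_route_addr(field))

# The logged int() failure is consistent with skipping the key split:
raw = "MainPID=1568"
# int(raw)                       # raises ValueError, as in the log
pid = int(raw.split("=", 1)[1])  # 1568
print("dhcp client pid:", pid)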
May 10 00:45:09.525430 waagent[1577]: 2025-05-10T00:45:09.524211Z INFO MonitorHandler ExtHandler Network interfaces: May 10 00:45:09.525430 waagent[1577]: Executing ['ip', '-a', '-o', 'link']: May 10 00:45:09.525430 waagent[1577]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 10 00:45:09.525430 waagent[1577]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2e:95:df brd ff:ff:ff:ff:ff:ff May 10 00:45:09.525430 waagent[1577]: 3: enP19105s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2e:95:df brd ff:ff:ff:ff:ff:ff\ altname enP19105p0s2 May 10 00:45:09.525430 waagent[1577]: Executing ['ip', '-4', '-a', '-o', 'address']: May 10 00:45:09.525430 waagent[1577]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 10 00:45:09.525430 waagent[1577]: 2: eth0 inet 10.200.8.31/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever May 10 00:45:09.525430 waagent[1577]: Executing ['ip', '-6', '-a', '-o', 'address']: May 10 00:45:09.525430 waagent[1577]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 10 00:45:09.525430 waagent[1577]: 2: eth0 inet6 fe80::7eed:8dff:fe2e:95df/64 scope link \ valid_lft forever preferred_lft forever May 10 00:45:09.654052 waagent[1577]: 2025-05-10T00:45:09.653921Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules May 10 00:45:09.657256 waagent[1577]: 2025-05-10T00:45:09.657126Z INFO EnvHandler ExtHandler Firewall rules: May 10 00:45:09.657256 waagent[1577]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:09.657256 waagent[1577]: pkts bytes target prot opt in out source destination May 10 00:45:09.657256 waagent[1577]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:09.657256 waagent[1577]: pkts bytes target prot opt in out source destination May 10 00:45:09.657256 waagent[1577]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:09.657256 waagent[1577]: pkts bytes target prot opt in out source destination May 10 00:45:09.657256 waagent[1577]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 10 00:45:09.657256 waagent[1577]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 10 00:45:09.658675 waagent[1577]: 2025-05-10T00:45:09.658618Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 10 00:45:09.766958 waagent[1577]: 2025-05-10T00:45:09.766812Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 10 00:45:09.984133 waagent[1516]: 2025-05-10T00:45:09.983965Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 10 00:45:09.989192 waagent[1516]: 2025-05-10T00:45:09.989105Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 10 00:45:11.078215 waagent[1614]: 2025-05-10T00:45:11.078091Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 10 00:45:11.079642 waagent[1614]: 2025-05-10T00:45:11.079530Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 10 00:45:11.081944 waagent[1614]: 2025-05-10T00:45:11.081886Z INFO ExtHandler ExtHandler Python: 3.9.16 May 10 00:45:11.082117 waagent[1614]: 2025-05-10T00:45:11.082067Z INFO ExtHandler 
ExtHandler CPU Arch: x86_64 May 10 00:45:11.097946 waagent[1614]: 2025-05-10T00:45:11.097820Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 10 00:45:11.098413 waagent[1614]: 2025-05-10T00:45:11.098344Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:45:11.098589 waagent[1614]: 2025-05-10T00:45:11.098542Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:45:11.098807 waagent[1614]: 2025-05-10T00:45:11.098758Z INFO ExtHandler ExtHandler Initializing the goal state... May 10 00:45:11.112076 waagent[1614]: 2025-05-10T00:45:11.111981Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 10 00:45:11.120723 waagent[1614]: 2025-05-10T00:45:11.120649Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 10 00:45:11.121720 waagent[1614]: 2025-05-10T00:45:11.121657Z INFO ExtHandler May 10 00:45:11.121881 waagent[1614]: 2025-05-10T00:45:11.121826Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b1537c2f-5c92-45db-81af-6608a7cc4c20 eTag: 14072872316938330239 source: Fabric] May 10 00:45:11.122595 waagent[1614]: 2025-05-10T00:45:11.122537Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 10 00:45:11.123681 waagent[1614]: 2025-05-10T00:45:11.123622Z INFO ExtHandler May 10 00:45:11.123818 waagent[1614]: 2025-05-10T00:45:11.123767Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 10 00:45:11.131118 waagent[1614]: 2025-05-10T00:45:11.131057Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 10 00:45:11.131566 waagent[1614]: 2025-05-10T00:45:11.131518Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 10 00:45:11.152421 waagent[1614]: 2025-05-10T00:45:11.152352Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 10 00:45:11.220197 waagent[1614]: 2025-05-10T00:45:11.220049Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FDA9A69EDEA39DBDA88831DB274B4B4E8823676D', 'hasPrivateKey': True} May 10 00:45:11.221140 waagent[1614]: 2025-05-10T00:45:11.221074Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F5A7CE878FEEC59629B3986A8D259261C51F7795', 'hasPrivateKey': False} May 10 00:45:11.222115 waagent[1614]: 2025-05-10T00:45:11.222052Z INFO ExtHandler Fetch goal state from WireServer completed May 10 00:45:11.222918 waagent[1614]: 2025-05-10T00:45:11.222858Z INFO ExtHandler ExtHandler Goal state initialization completed. 
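The EnvHandler rules listed earlier guard the wireserver endpoint: root-owned TCP to 168.63.129.16 is accepted and other new sessions are dropped, and the 2.13 agent additionally wants an ACCEPT for tcp/53 (the 'ACCEPT DNS' rule it reports missing below and then adds). Equivalent iptables invocations as a sketch, reconstructed only from the rule listings in this log, not from waagent's code; run as root:

import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    # Order matters: both ACCEPTs must precede the catch-all DROP.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--destination-port", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    # '-w' waits for the xtables lock, matching the '-w' seen in the log.
    subprocess.run(["iptables", "-w"] + rule, check=True)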
May 10 00:45:11.241947 waagent[1614]: 2025-05-10T00:45:11.241835Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 10 00:45:11.250508 waagent[1614]: 2025-05-10T00:45:11.250406Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 10 00:45:11.254244 waagent[1614]: 2025-05-10T00:45:11.254129Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 10 00:45:11.254461 waagent[1614]: 2025-05-10T00:45:11.254410Z INFO ExtHandler ExtHandler Checking state of the firewall May 10 00:45:11.286930 waagent[1614]: 2025-05-10T00:45:11.286804Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Current state: May 10 00:45:11.286930 waagent[1614]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.286930 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.286930 waagent[1614]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.286930 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.286930 waagent[1614]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.286930 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.286930 waagent[1614]: 103 11053 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 10 00:45:11.286930 waagent[1614]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 10 00:45:11.288051 waagent[1614]: 2025-05-10T00:45:11.287981Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 10 00:45:11.290683 waagent[1614]: 2025-05-10T00:45:11.290577Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 10 00:45:11.290927 waagent[1614]: 2025-05-10T00:45:11.290876Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 10 00:45:11.291282 waagent[1614]: 2025-05-10T00:45:11.291228Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 10 00:45:11.299394 waagent[1614]: 2025-05-10T00:45:11.299339Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now May 10 00:45:11.299867 waagent[1614]: 2025-05-10T00:45:11.299811Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 10 00:45:11.307124 waagent[1614]: 2025-05-10T00:45:11.307057Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1614 May 10 00:45:11.310044 waagent[1614]: 2025-05-10T00:45:11.309981Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 10 00:45:11.310800 waagent[1614]: 2025-05-10T00:45:11.310741Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 10 00:45:11.311618 waagent[1614]: 2025-05-10T00:45:11.311561Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 10 00:45:11.314174 waagent[1614]: 2025-05-10T00:45:11.314101Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 10 00:45:11.315415 waagent[1614]: 2025-05-10T00:45:11.315358Z INFO ExtHandler ExtHandler Starting env monitor service. May 10 00:45:11.315876 waagent[1614]: 2025-05-10T00:45:11.315821Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:45:11.316220 waagent[1614]: 2025-05-10T00:45:11.316150Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:45:11.316711 waagent[1614]: 2025-05-10T00:45:11.316654Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 10 00:45:11.316985 waagent[1614]: 2025-05-10T00:45:11.316929Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 10 00:45:11.316985 waagent[1614]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 10 00:45:11.316985 waagent[1614]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 10 00:45:11.316985 waagent[1614]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 10 00:45:11.316985 waagent[1614]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 10 00:45:11.316985 waagent[1614]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 10 00:45:11.316985 waagent[1614]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 10 00:45:11.319454 waagent[1614]: 2025-05-10T00:45:11.319366Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 10 00:45:11.320819 waagent[1614]: 2025-05-10T00:45:11.320316Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 10 00:45:11.320968 waagent[1614]: 2025-05-10T00:45:11.320912Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 10 00:45:11.321988 waagent[1614]: 2025-05-10T00:45:11.321935Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 10 00:45:11.324442 waagent[1614]: 2025-05-10T00:45:11.324238Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 10 00:45:11.325150 waagent[1614]: 2025-05-10T00:45:11.325073Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 10 00:45:11.325704 waagent[1614]: 2025-05-10T00:45:11.325650Z INFO EnvHandler ExtHandler Configure routes May 10 00:45:11.325962 waagent[1614]: 2025-05-10T00:45:11.325884Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. May 10 00:45:11.326466 waagent[1614]: 2025-05-10T00:45:11.326405Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 10 00:45:11.326800 waagent[1614]: 2025-05-10T00:45:11.326737Z INFO EnvHandler ExtHandler Gateway:None May 10 00:45:11.327412 waagent[1614]: 2025-05-10T00:45:11.327358Z INFO EnvHandler ExtHandler Routes:None May 10 00:45:11.344945 waagent[1614]: 2025-05-10T00:45:11.344838Z INFO MonitorHandler ExtHandler Network interfaces: May 10 00:45:11.344945 waagent[1614]: Executing ['ip', '-a', '-o', 'link']: May 10 00:45:11.344945 waagent[1614]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 10 00:45:11.344945 waagent[1614]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2e:95:df brd ff:ff:ff:ff:ff:ff May 10 00:45:11.344945 waagent[1614]: 3: enP19105s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2e:95:df brd ff:ff:ff:ff:ff:ff\ altname enP19105p0s2 May 10 00:45:11.344945 waagent[1614]: Executing ['ip', '-4', '-a', '-o', 'address']: May 10 00:45:11.344945 waagent[1614]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 10 00:45:11.344945 waagent[1614]: 2: eth0 inet 10.200.8.31/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever May 10 00:45:11.344945 waagent[1614]: Executing ['ip', '-6', '-a', '-o', 'address']: May 10 00:45:11.344945 waagent[1614]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 10 00:45:11.344945 waagent[1614]: 2: eth0 inet6 fe80::7eed:8dff:fe2e:95df/64 scope link \ valid_lft forever preferred_lft forever May 10 00:45:11.355682 waagent[1614]: 2025-05-10T00:45:11.355596Z INFO ExtHandler ExtHandler Downloading agent manifest May 10 00:45:11.357654 waagent[1614]: 2025-05-10T00:45:11.357592Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 10 00:45:11.388966 waagent[1614]: 2025-05-10T00:45:11.388900Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: May 10 00:45:11.388966 waagent[1614]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.388966 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.388966 waagent[1614]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.388966 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.388966 waagent[1614]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.388966 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.388966 waagent[1614]: 130 14449 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 10 00:45:11.388966 waagent[1614]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 10 00:45:11.391626 waagent[1614]: 2025-05-10T00:45:11.391567Z INFO ExtHandler ExtHandler May 10 00:45:11.402214 waagent[1614]: 2025-05-10T00:45:11.402135Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4b8217c4-0cf6-40e9-b356-0e45081d62a2 correlation cef0f68a-32c2-4e30-bcc9-d6ff4a9b0f6d created: 2025-05-10T00:44:17.775116Z] May 10 00:45:11.410204 waagent[1614]: 2025-05-10T00:45:11.408355Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 10 00:45:11.419772 waagent[1614]: 2025-05-10T00:45:11.419701Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 28 ms] May 10 00:45:11.453094 waagent[1614]: 2025-05-10T00:45:11.452983Z INFO EnvHandler ExtHandler The firewall was setup successfully: May 10 00:45:11.453094 waagent[1614]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.453094 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.453094 waagent[1614]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.453094 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.453094 waagent[1614]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 10 00:45:11.453094 waagent[1614]: pkts bytes target prot opt in out source destination May 10 00:45:11.453094 waagent[1614]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 10 00:45:11.453094 waagent[1614]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 10 00:45:11.453094 waagent[1614]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 10 00:45:12.505049 waagent[1614]: 2025-05-10T00:45:12.504949Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 10 00:45:12.508138 waagent[1614]: 2025-05-10T00:45:12.508056Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D8D4EB09-281E-4DBF-BB76-F7765F57146A;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 10 00:45:12.724703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:45:12.725003 systemd[1]: Stopped kubelet.service. May 10 00:45:12.725064 systemd[1]: kubelet.service: Consumed 1.107s CPU time. May 10 00:45:12.727091 systemd[1]: Starting kubelet.service... May 10 00:45:12.810019 systemd[1]: Started kubelet.service. 
May 10 00:45:13.394031 kubelet[1669]: E0510 00:45:13.393979 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:45:13.396933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:45:13.397091 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:45:23.474608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:45:23.474934 systemd[1]: Stopped kubelet.service. May 10 00:45:23.477001 systemd[1]: Starting kubelet.service... May 10 00:45:23.559134 systemd[1]: Started kubelet.service. May 10 00:45:24.190323 kubelet[1678]: E0510 00:45:24.190269 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:45:24.192002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:45:24.192178 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:45:34.224610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 00:45:34.224932 systemd[1]: Stopped kubelet.service. May 10 00:45:34.226955 systemd[1]: Starting kubelet.service... May 10 00:45:34.309043 systemd[1]: Started kubelet.service. May 10 00:45:34.346231 kubelet[1687]: E0510 00:45:34.346175 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:45:34.347795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:45:34.347952 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:45:34.753756 systemd[1]: Created slice system-sshd.slice. May 10 00:45:34.755773 systemd[1]: Started sshd@0-10.200.8.31:22-10.200.16.10:50610.service. May 10 00:45:35.614695 sshd[1693]: Accepted publickey for core from 10.200.16.10 port 50610 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:45:35.616384 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:45:35.621132 systemd[1]: Started session-3.scope. May 10 00:45:35.621643 systemd-logind[1399]: New session 3 of user core. May 10 00:45:36.178873 systemd[1]: Started sshd@1-10.200.8.31:22-10.200.16.10:50614.service. May 10 00:45:36.812126 sshd[1698]: Accepted publickey for core from 10.200.16.10 port 50614 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:45:36.813871 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:45:36.819175 systemd-logind[1399]: New session 4 of user core. May 10 00:45:36.819790 systemd[1]: Started session-4.scope. May 10 00:45:37.263077 sshd[1698]: pam_unix(sshd:session): session closed for user core May 10 00:45:37.266466 systemd[1]: sshd@1-10.200.8.31:22-10.200.16.10:50614.service: Deactivated successfully. 
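The "RSA SHA256:BLSLhh..." token in each "Accepted publickey" line above is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch that reproduces it from an authorized_keys line; the path is hypothetical, and feeding the core user's actual key would reproduce the fingerprint sshd logged:

import base64, hashlib, pathlib

def ssh_fingerprint(pubkey_line: str) -> str:
    blob = base64.b64decode(pubkey_line.split()[1])  # second field is the key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

line = pathlib.Path("/home/core/.ssh/authorized_keys").read_text().splitlines()[0]
print(ssh_fingerprint(line))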
May 10 00:45:37.267454 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:45:37.268224 systemd-logind[1399]: Session 4 logged out. Waiting for processes to exit. May 10 00:45:37.269137 systemd-logind[1399]: Removed session 4. May 10 00:45:37.370111 systemd[1]: Started sshd@2-10.200.8.31:22-10.200.16.10:50624.service. May 10 00:45:38.008403 sshd[1704]: Accepted publickey for core from 10.200.16.10 port 50624 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:45:38.010142 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:45:38.015225 systemd-logind[1399]: New session 5 of user core. May 10 00:45:38.015828 systemd[1]: Started session-5.scope. May 10 00:45:38.457260 sshd[1704]: pam_unix(sshd:session): session closed for user core May 10 00:45:38.460854 systemd[1]: sshd@2-10.200.8.31:22-10.200.16.10:50624.service: Deactivated successfully. May 10 00:45:38.461805 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:45:38.462604 systemd-logind[1399]: Session 5 logged out. Waiting for processes to exit. May 10 00:45:38.463512 systemd-logind[1399]: Removed session 5. May 10 00:45:38.564227 systemd[1]: Started sshd@3-10.200.8.31:22-10.200.16.10:50640.service. May 10 00:45:39.202376 sshd[1710]: Accepted publickey for core from 10.200.16.10 port 50640 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:45:39.204057 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:45:39.208876 systemd[1]: Started session-6.scope. May 10 00:45:39.209363 systemd-logind[1399]: New session 6 of user core. May 10 00:45:39.661603 sshd[1710]: pam_unix(sshd:session): session closed for user core May 10 00:45:39.664802 systemd[1]: sshd@3-10.200.8.31:22-10.200.16.10:50640.service: Deactivated successfully. May 10 00:45:39.665754 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:45:39.666467 systemd-logind[1399]: Session 6 logged out. Waiting for processes to exit. May 10 00:45:39.667203 systemd-logind[1399]: Removed session 6. May 10 00:45:39.772462 systemd[1]: Started sshd@4-10.200.8.31:22-10.200.16.10:52922.service. May 10 00:45:40.407858 sshd[1716]: Accepted publickey for core from 10.200.16.10 port 52922 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:45:40.409543 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:45:40.415225 systemd[1]: Started session-7.scope. May 10 00:45:40.416089 systemd-logind[1399]: New session 7 of user core. May 10 00:45:40.825448 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:45:40.825750 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:45:40.849938 systemd[1]: Starting docker.service... 
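The docker daemon starting below serves its API over a unix socket (logged as "API listen on /run/docker.sock"). A minimal stdlib sketch that queries the daemon's /version endpoint over that socket; real tooling would use the docker CLI or docker-py, this only shows the transport:

import json, socket

def docker_version(sock_path: str = "/run/docker.sock") -> dict:
    # HTTP/1.0 so the daemon closes the connection once the body is sent.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    data = b""
    while chunk := s.recv(4096):
        data += chunk
    s.close()
    body = data.partition(b"\r\n\r\n")[2]
    return json.loads(body)

print(docker_version().get("Version"))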
May 10 00:45:40.886405 env[1729]: time="2025-05-10T00:45:40.886352264Z" level=info msg="Starting up" May 10 00:45:40.887682 env[1729]: time="2025-05-10T00:45:40.887655871Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:45:40.887811 env[1729]: time="2025-05-10T00:45:40.887797772Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:45:40.887877 env[1729]: time="2025-05-10T00:45:40.887864872Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:45:40.887927 env[1729]: time="2025-05-10T00:45:40.887919472Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:45:40.889648 env[1729]: time="2025-05-10T00:45:40.889628582Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:45:40.889737 env[1729]: time="2025-05-10T00:45:40.889727183Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:45:40.889790 env[1729]: time="2025-05-10T00:45:40.889778783Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:45:40.889828 env[1729]: time="2025-05-10T00:45:40.889820683Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:45:41.015453 env[1729]: time="2025-05-10T00:45:41.015409890Z" level=info msg="Loading containers: start." May 10 00:45:41.125189 kernel: Initializing XFRM netlink socket May 10 00:45:41.141823 env[1729]: time="2025-05-10T00:45:41.141779961Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 00:45:41.203890 systemd-networkd[1568]: docker0: Link UP May 10 00:45:41.248361 env[1729]: time="2025-05-10T00:45:41.248316927Z" level=info msg="Loading containers: done." May 10 00:45:41.264805 env[1729]: time="2025-05-10T00:45:41.264755714Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:45:41.264999 env[1729]: time="2025-05-10T00:45:41.264971015Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 00:45:41.265106 env[1729]: time="2025-05-10T00:45:41.265083716Z" level=info msg="Daemon has completed initialization" May 10 00:45:41.306014 systemd[1]: Started docker.service. May 10 00:45:41.311776 env[1729]: time="2025-05-10T00:45:41.311718463Z" level=info msg="API listen on /run/docker.sock" May 10 00:45:42.670529 env[1415]: time="2025-05-10T00:45:42.670482664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 10 00:45:43.497361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624397800.mount: Deactivated successfully. May 10 00:45:44.474489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 00:45:44.474778 systemd[1]: Stopped kubelet.service. May 10 00:45:44.476966 systemd[1]: Starting kubelet.service... May 10 00:45:44.565063 systemd[1]: Started kubelet.service. 
May 10 00:45:45.192568 kubelet[1850]: E0510 00:45:45.192515 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:45:45.194021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:45:45.194136 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:45:45.552184 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 10 00:45:46.185860 env[1415]: time="2025-05-10T00:45:46.185798168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:46.195711 env[1415]: time="2025-05-10T00:45:46.195642706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:46.203832 env[1415]: time="2025-05-10T00:45:46.203772337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:46.210549 env[1415]: time="2025-05-10T00:45:46.210486063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:46.211436 env[1415]: time="2025-05-10T00:45:46.211395667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 10 00:45:46.213274 env[1415]: time="2025-05-10T00:45:46.213243174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 10 00:45:46.435015 update_engine[1400]: I0510 00:45:46.434929 1400 update_attempter.cc:509] Updating boot flags... 
May 10 00:45:48.016242 env[1415]: time="2025-05-10T00:45:48.016187361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:48.023732 env[1415]: time="2025-05-10T00:45:48.023687486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:48.026869 env[1415]: time="2025-05-10T00:45:48.026830197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:48.031138 env[1415]: time="2025-05-10T00:45:48.031095911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:48.031832 env[1415]: time="2025-05-10T00:45:48.031793314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 10 00:45:48.032622 env[1415]: time="2025-05-10T00:45:48.032591217Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 10 00:45:49.590763 env[1415]: time="2025-05-10T00:45:49.590690359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:49.608498 env[1415]: time="2025-05-10T00:45:49.608444115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:49.613575 env[1415]: time="2025-05-10T00:45:49.613530731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:49.619329 env[1415]: time="2025-05-10T00:45:49.619288850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:49.619967 env[1415]: time="2025-05-10T00:45:49.619935752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 10 00:45:49.620714 env[1415]: time="2025-05-10T00:45:49.620686854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 10 00:45:50.999685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567970193.mount: Deactivated successfully. 
May 10 00:45:51.631031 env[1415]: time="2025-05-10T00:45:51.630974086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:51.637198 env[1415]: time="2025-05-10T00:45:51.637137003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:51.641263 env[1415]: time="2025-05-10T00:45:51.641221314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:51.645324 env[1415]: time="2025-05-10T00:45:51.645281526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:51.646080 env[1415]: time="2025-05-10T00:45:51.646042928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 10 00:45:51.650537 env[1415]: time="2025-05-10T00:45:51.650513040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:45:52.294459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760394290.mount: Deactivated successfully. May 10 00:45:53.695151 env[1415]: time="2025-05-10T00:45:53.695092127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:53.706997 env[1415]: time="2025-05-10T00:45:53.706944956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:53.712332 env[1415]: time="2025-05-10T00:45:53.712291769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:53.719659 env[1415]: time="2025-05-10T00:45:53.719619187Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:53.720310 env[1415]: time="2025-05-10T00:45:53.720272689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 00:45:53.720891 env[1415]: time="2025-05-10T00:45:53.720862590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 00:45:54.389105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228442757.mount: Deactivated successfully. 
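Each successful pull in this stretch of the log ends with a "PullImage ... returns image reference" entry pairing the requested tag with the image ID containerd resolved it to. A small parser for those entries; the sample lines are copied from this log, and the regex tolerates both raw and journal-escaped quotes:

    import re

    SAMPLES = [
        'PullImage "registry.k8s.io/kube-apiserver:v1.31.8" returns image reference "sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc"',
        'PullImage "registry.k8s.io/coredns/coredns:v1.11.1" returns image reference "sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"',
    ]

    PULL_RE = re.compile(
        r'PullImage \\?"(?P<ref>[^"\\]+)\\?" returns image reference \\?"(?P<id>sha256:[0-9a-f]{64})\\?"'
    )

    def index_pulls(lines):
        """Map each pulled tag to the image ID containerd reported for it."""
        table = {}
        for line in lines:
            m = PULL_RE.search(line)
            if m:
                table[m.group("ref")] = m.group("id")
        return table

    if __name__ == "__main__":
        for ref, image_id in index_pulls(SAMPLES).items():
            print(f"{ref} -> {image_id}")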
May 10 00:45:54.412827 env[1415]: time="2025-05-10T00:45:54.412774026Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:54.422634 env[1415]: time="2025-05-10T00:45:54.422581948Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:54.426740 env[1415]: time="2025-05-10T00:45:54.426685858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:54.439515 env[1415]: time="2025-05-10T00:45:54.439471987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:54.441085 env[1415]: time="2025-05-10T00:45:54.441043590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 10 00:45:54.441760 env[1415]: time="2025-05-10T00:45:54.441731192Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 10 00:45:55.075321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913116255.mount: Deactivated successfully. May 10 00:45:55.224449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 10 00:45:55.224756 systemd[1]: Stopped kubelet.service. May 10 00:45:55.226704 systemd[1]: Starting kubelet.service... May 10 00:45:55.309299 systemd[1]: Started kubelet.service. May 10 00:45:55.875479 kubelet[1898]: E0510 00:45:55.875425 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:45:55.877145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:45:55.877334 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
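Note the cadence of the failures: the kubelet exits at 00:45:45.194 and again at 00:45:55.877, with the restart counter already at 5, which is consistent with a fixed restart delay of roughly ten seconds in the unit (the unit file itself is not part of this log). A quick check of the gap, with the year assumed from the containerd timestamps:

    from datetime import datetime

    # "Failed with result" timestamps of two consecutive kubelet crashes, from the log.
    FAILURES = ["May 10 00:45:45.194136", "May 10 00:45:55.877334"]

    def parse(ts):
        # Journald omits the year; 2025 is taken from the RFC 3339 containerd entries.
        return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

    delta = parse(FAILURES[1]) - parse(FAILURES[0])
    print(f"gap between consecutive kubelet failures: {delta.total_seconds():.1f}s")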
May 10 00:45:58.697708 env[1415]: time="2025-05-10T00:45:58.697643971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:58.705967 env[1415]: time="2025-05-10T00:45:58.705906785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:58.709562 env[1415]: time="2025-05-10T00:45:58.709518492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:58.713192 env[1415]: time="2025-05-10T00:45:58.713142198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:45:58.714102 env[1415]: time="2025-05-10T00:45:58.714063900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 10 00:46:02.187966 systemd[1]: Stopped kubelet.service. May 10 00:46:02.190551 systemd[1]: Starting kubelet.service... May 10 00:46:02.227951 systemd[1]: Reloading. May 10 00:46:02.333804 /usr/lib/systemd/system-generators/torcx-generator[1950]: time="2025-05-10T00:46:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:46:02.336981 /usr/lib/systemd/system-generators/torcx-generator[1950]: time="2025-05-10T00:46:02Z" level=info msg="torcx already run" May 10 00:46:02.438085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:46:02.438105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:46:02.454353 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:46:02.553401 systemd[1]: Started kubelet.service. May 10 00:46:02.555439 systemd[1]: Stopping kubelet.service... May 10 00:46:02.555811 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:46:02.556043 systemd[1]: Stopped kubelet.service. May 10 00:46:02.557839 systemd[1]: Starting kubelet.service... May 10 00:46:02.817123 systemd[1]: Started kubelet.service. May 10 00:46:02.854774 kubelet[2019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:46:02.855231 kubelet[2019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
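The deprecation warnings logged here (and repeated after the later restart) all point the same way: move flag values into the file named by --config. A hand-written approximation of the equivalent KubeletConfiguration stanzas for two of the flagged options, not generated from this host; the runtime endpoint shown is the conventional containerd socket, and the plugin directory is the path the kubelet itself reports further down in this log:

    import textwrap

    # Field names per the kubelet.config.k8s.io/v1beta1 schema as of this
    # kubelet generation; treat the values as placeholders, not host state.
    print(textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    """))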
May 10 00:46:02.855301 kubelet[2019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:46:02.855542 kubelet[2019]: I0510 00:46:02.855508 2019 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:46:03.800796 kubelet[2019]: I0510 00:46:03.800755 2019 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:46:03.800796 kubelet[2019]: I0510 00:46:03.800785 2019 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:46:03.801127 kubelet[2019]: I0510 00:46:03.801104 2019 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:46:03.829753 kubelet[2019]: E0510 00:46:03.829708 2019 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:03.829963 kubelet[2019]: I0510 00:46:03.829895 2019 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:46:03.838971 kubelet[2019]: E0510 00:46:03.838928 2019 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:46:03.839154 kubelet[2019]: I0510 00:46:03.839143 2019 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:46:03.844074 kubelet[2019]: I0510 00:46:03.844052 2019 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:46:03.844274 kubelet[2019]: I0510 00:46:03.844263 2019 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:46:03.844530 kubelet[2019]: I0510 00:46:03.844501 2019 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:46:03.844761 kubelet[2019]: I0510 00:46:03.844600 2019 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-8a4b3429d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:46:03.844902 kubelet[2019]: I0510 00:46:03.844893 2019 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:46:03.844950 kubelet[2019]: I0510 00:46:03.844945 2019 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:46:03.845071 kubelet[2019]: I0510 00:46:03.845064 2019 state_mem.go:36] "Initialized new in-memory state store" May 10 00:46:03.852618 kubelet[2019]: I0510 00:46:03.852596 2019 kubelet.go:408] "Attempting to sync node with API server" May 10 00:46:03.852771 kubelet[2019]: I0510 00:46:03.852743 2019 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:46:03.852837 kubelet[2019]: I0510 00:46:03.852788 2019 kubelet.go:314] "Adding apiserver pod source" May 10 00:46:03.852837 kubelet[2019]: I0510 00:46:03.852807 2019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:46:03.867980 kubelet[2019]: W0510 00:46:03.867561 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8a4b3429d2&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:03.867980 kubelet[2019]: E0510 00:46:03.867646 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8a4b3429d2&limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:03.868404 kubelet[2019]: I0510 00:46:03.868115 2019 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:46:03.873509 kubelet[2019]: I0510 00:46:03.873488 2019 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:46:03.873698 kubelet[2019]: W0510 00:46:03.873685 2019 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:46:03.881129 kubelet[2019]: I0510 00:46:03.881108 2019 server.go:1269] "Started kubelet" May 10 00:46:03.886772 kubelet[2019]: W0510 00:46:03.884565 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:03.886772 kubelet[2019]: E0510 00:46:03.884632 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:03.886772 kubelet[2019]: I0510 00:46:03.884727 2019 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:46:03.892129 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 10 00:46:03.892258 kubelet[2019]: I0510 00:46:03.892012 2019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:46:03.892706 kubelet[2019]: I0510 00:46:03.892648 2019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:46:03.893134 kubelet[2019]: I0510 00:46:03.893115 2019 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:46:03.898031 kubelet[2019]: I0510 00:46:03.897979 2019 server.go:460] "Adding debug handlers to kubelet server" May 10 00:46:03.898251 kubelet[2019]: E0510 00:46:03.896039 2019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-8a4b3429d2.183e03df7d4c71dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-8a4b3429d2,UID:ci-3510.3.7-n-8a4b3429d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-8a4b3429d2,},FirstTimestamp:2025-05-10 00:46:03.881083357 +0000 UTC m=+1.057402641,LastTimestamp:2025-05-10 00:46:03.881083357 +0000 UTC m=+1.057402641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-8a4b3429d2,}" May 10 00:46:03.899123 kubelet[2019]: I0510 00:46:03.899100 2019 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:46:03.899353 kubelet[2019]: E0510 00:46:03.899336 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found" May 10 00:46:03.899513 kubelet[2019]: I0510 00:46:03.899501 2019 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:46:03.900494 kubelet[2019]: I0510 00:46:03.900464 2019 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:46:03.900680 kubelet[2019]: I0510 00:46:03.900661 2019 reconciler.go:26] "Reconciler: start to sync state" May 10 00:46:03.901407 kubelet[2019]: W0510 00:46:03.901362 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:03.901576 kubelet[2019]: E0510 00:46:03.901555 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:03.901937 kubelet[2019]: I0510 00:46:03.901915 2019 factory.go:221] Registration of the systemd container factory successfully May 10 00:46:03.902038 kubelet[2019]: I0510 00:46:03.902017 2019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:46:03.903118 kubelet[2019]: E0510 00:46:03.902948 2019 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8a4b3429d2?timeout=10s\": dial tcp 10.200.8.31:6443: connect: connection refused" interval="200ms" May 10 00:46:03.904050 kubelet[2019]: E0510 00:46:03.904021 2019 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:46:03.904328 kubelet[2019]: I0510 00:46:03.904312 2019 factory.go:221] Registration of the containerd container factory successfully May 10 00:46:03.951548 kubelet[2019]: I0510 00:46:03.951512 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:46:03.953254 kubelet[2019]: I0510 00:46:03.953224 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:46:03.953413 kubelet[2019]: I0510 00:46:03.953401 2019 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:46:03.953511 kubelet[2019]: I0510 00:46:03.953499 2019 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:46:03.953632 kubelet[2019]: E0510 00:46:03.953617 2019 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:46:03.965930 kubelet[2019]: W0510 00:46:03.965875 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:03.966055 kubelet[2019]: E0510 00:46:03.965939 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:03.976203 kubelet[2019]: I0510 00:46:03.975330 2019 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:46:03.976203 kubelet[2019]: I0510 00:46:03.975356 2019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:46:03.976203 kubelet[2019]: I0510 00:46:03.975375 2019 state_mem.go:36] "Initialized new in-memory state store" May 10 00:46:03.999779 kubelet[2019]: E0510 00:46:03.999738 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found" May 10 00:46:04.015613 kubelet[2019]: I0510 00:46:04.015423 2019 policy_none.go:49] "None policy: Start" May 10 00:46:04.016606 kubelet[2019]: I0510 00:46:04.016589 2019 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:46:04.016721 kubelet[2019]: I0510 00:46:04.016629 2019 state_mem.go:35] "Initializing new in-memory state store" May 10 00:46:04.032045 systemd[1]: Created slice kubepods.slice. May 10 00:46:04.036879 systemd[1]: Created slice kubepods-burstable.slice. May 10 00:46:04.039850 systemd[1]: Created slice kubepods-besteffort.slice. 
May 10 00:46:04.045879 kubelet[2019]: I0510 00:46:04.045851 2019 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:46:04.046052 kubelet[2019]: I0510 00:46:04.046035 2019 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:46:04.046128 kubelet[2019]: I0510 00:46:04.046057 2019 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:46:04.119450 kubelet[2019]: I0510 00:46:04.114470 2019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:46:04.120027 kubelet[2019]: I0510 00:46:04.118142 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69e33ff635b634e6aec701d4de04d300-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8a4b3429d2\" (UID: \"69e33ff635b634e6aec701d4de04d300\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:04.120227 kubelet[2019]: I0510 00:46:04.120209 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69e33ff635b634e6aec701d4de04d300-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-8a4b3429d2\" (UID: \"69e33ff635b634e6aec701d4de04d300\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:04.120352 kubelet[2019]: I0510 00:46:04.120333 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69e33ff635b634e6aec701d4de04d300-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8a4b3429d2\" (UID: \"69e33ff635b634e6aec701d4de04d300\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:04.120446 kubelet[2019]: E0510 00:46:04.119277 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8a4b3429d2?timeout=10s\": dial tcp 10.200.8.31:6443: connect: connection refused" interval="400ms" May 10 00:46:04.125832 kubelet[2019]: E0510 00:46:04.125808 2019 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-8a4b3429d2\" not found" May 10 00:46:04.129179 systemd[1]: Created slice kubepods-burstable-pod69e33ff635b634e6aec701d4de04d300.slice. May 10 00:46:04.137015 systemd[1]: Created slice kubepods-burstable-pod7a996196c9b694e233d8b832ebc71efd.slice. May 10 00:46:04.141624 systemd[1]: Created slice kubepods-burstable-pod1aa3be9227c6a2e90f39f9bcf4efca8d.slice. 
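The kubepods-burstable-pod<hash>.slice units created here follow the kubelet's systemd cgroup naming: QoS parent slice, then "pod" plus the pod UID with dashes mapped to underscores (static-pod UIDs like these are dash-free config hashes, so they pass through unchanged). A sketch reproducing the three names, with UIDs copied from the entries above; the guaranteed-QoS branch is an assumption included for completeness, since only burstable pods appear in this log:

    def pod_slice(uid, qos="burstable"):
        """Build the systemd slice name the kubelet uses for a pod's cgroup."""
        escaped = uid.replace("-", "_")  # systemd-style escaping of the UID
        if qos == "guaranteed":
            # Guaranteed pods sit directly under kubepods.slice (assumed branch).
            return f"kubepods-pod{escaped}.slice"
        return f"kubepods-{qos}-pod{escaped}.slice"

    for uid in ("69e33ff635b634e6aec701d4de04d300",
                "7a996196c9b694e233d8b832ebc71efd",
                "1aa3be9227c6a2e90f39f9bcf4efca8d"):
        print(pod_slice(uid))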
May 10 00:46:04.147487 kubelet[2019]: I0510 00:46:04.147459 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.147796 kubelet[2019]: E0510 00:46:04.147771 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.221447 kubelet[2019]: I0510 00:46:04.221387 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.221741 kubelet[2019]: I0510 00:46:04.221712 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.221855 kubelet[2019]: I0510 00:46:04.221749 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.221855 kubelet[2019]: I0510 00:46:04.221810 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1aa3be9227c6a2e90f39f9bcf4efca8d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-8a4b3429d2\" (UID: \"1aa3be9227c6a2e90f39f9bcf4efca8d\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.221855 kubelet[2019]: I0510 00:46:04.221839 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.222025 kubelet[2019]: I0510 00:46:04.221866 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.350029 kubelet[2019]: I0510 00:46:04.349984 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.350712 kubelet[2019]: E0510 00:46:04.350674 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.437172 env[1415]: time="2025-05-10T00:46:04.436996955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-8a4b3429d2,Uid:69e33ff635b634e6aec701d4de04d300,Namespace:kube-system,Attempt:0,}"
May 10 00:46:04.440754 env[1415]: time="2025-05-10T00:46:04.440704668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-8a4b3429d2,Uid:7a996196c9b694e233d8b832ebc71efd,Namespace:kube-system,Attempt:0,}"
May 10 00:46:04.444702 env[1415]: time="2025-05-10T00:46:04.444661288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-8a4b3429d2,Uid:1aa3be9227c6a2e90f39f9bcf4efca8d,Namespace:kube-system,Attempt:0,}"
May 10 00:46:04.521717 kubelet[2019]: E0510 00:46:04.521651 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8a4b3429d2?timeout=10s\": dial tcp 10.200.8.31:6443: connect: connection refused" interval="800ms"
May 10 00:46:04.752758 kubelet[2019]: I0510 00:46:04.752722 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.753297 kubelet[2019]: E0510 00:46:04.753265 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:04.861743 kubelet[2019]: W0510 00:46:04.861671 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8a4b3429d2&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
May 10 00:46:04.861932 kubelet[2019]: E0510 00:46:04.861750 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8a4b3429d2&limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError"
May 10 00:46:04.927999 kubelet[2019]: W0510 00:46:04.927935 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
May 10 00:46:04.928388 kubelet[2019]: E0510 00:46:04.928006 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError"
May 10 00:46:05.103122 kubelet[2019]: W0510 00:46:05.102986 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
May 10 00:46:05.103122 kubelet[2019]: E0510 00:46:05.103043 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused"
logger="UnhandledError" May 10 00:46:05.322637 kubelet[2019]: E0510 00:46:05.322575 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8a4b3429d2?timeout=10s\": dial tcp 10.200.8.31:6443: connect: connection refused" interval="1.6s" May 10 00:46:05.362521 kubelet[2019]: W0510 00:46:05.362363 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:05.362521 kubelet[2019]: E0510 00:46:05.362447 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:05.555588 kubelet[2019]: I0510 00:46:05.555557 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:05.556080 kubelet[2019]: E0510 00:46:05.556044 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:06.019191 kubelet[2019]: E0510 00:46:06.019113 2019 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:06.510680 kubelet[2019]: E0510 00:46:06.510559 2019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-8a4b3429d2.183e03df7d4c71dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-8a4b3429d2,UID:ci-3510.3.7-n-8a4b3429d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-8a4b3429d2,},FirstTimestamp:2025-05-10 00:46:03.881083357 +0000 UTC m=+1.057402641,LastTimestamp:2025-05-10 00:46:03.881083357 +0000 UTC m=+1.057402641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-8a4b3429d2,}" May 10 00:46:06.602666 kubelet[2019]: W0510 00:46:06.602617 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:06.602848 kubelet[2019]: E0510 00:46:06.602673 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:06.811704 kubelet[2019]: W0510 00:46:06.811568 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8a4b3429d2&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:06.811704 kubelet[2019]: E0510 00:46:06.811627 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8a4b3429d2&limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:06.923577 kubelet[2019]: E0510 00:46:06.923516 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8a4b3429d2?timeout=10s\": dial tcp 10.200.8.31:6443: connect: connection refused" interval="3.2s" May 10 00:46:07.035074 kubelet[2019]: W0510 00:46:07.035026 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:07.035501 kubelet[2019]: E0510 00:46:07.035085 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:07.158641 kubelet[2019]: I0510 00:46:07.158243 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:07.158881 kubelet[2019]: E0510 00:46:07.158846 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:08.051442 kubelet[2019]: W0510 00:46:08.051401 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused May 10 00:46:08.051913 kubelet[2019]: E0510 00:46:08.051524 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.31:6443: connect: connection refused" logger="UnhandledError" May 10 00:46:08.336050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907071757.mount: Deactivated successfully. 
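Reading across the lease-controller retries, the interval doubles on every failure: 200ms, 400ms, 800ms, 1.6s, 3.2s. A sketch of that backoff schedule; whether and where the kubelet caps the doubling is not visible in this log:

    def backoff_ms(base_ms=200, retries=5):
        """Yield doubled retry intervals, matching the sequence observed above."""
        interval = base_ms
        for _ in range(retries):
            yield interval
            interval *= 2

    print([f"{ms / 1000:g}s" if ms >= 1000 else f"{ms}ms" for ms in backoff_ms()])
    # ['200ms', '400ms', '800ms', '1.6s', '3.2s']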
May 10 00:46:08.374779 env[1415]: time="2025-05-10T00:46:08.374712104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.378675 env[1415]: time="2025-05-10T00:46:08.378624510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.391049 env[1415]: time="2025-05-10T00:46:08.390999346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.394620 env[1415]: time="2025-05-10T00:46:08.394578043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.398551 env[1415]: time="2025-05-10T00:46:08.398515150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.406842 env[1415]: time="2025-05-10T00:46:08.406793775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.410724 env[1415]: time="2025-05-10T00:46:08.410688281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.415664 env[1415]: time="2025-05-10T00:46:08.415625315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.419820 env[1415]: time="2025-05-10T00:46:08.419784828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.426001 env[1415]: time="2025-05-10T00:46:08.425958796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.430770 env[1415]: time="2025-05-10T00:46:08.430723325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.434879 env[1415]: time="2025-05-10T00:46:08.434844437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:08.549828 env[1415]: time="2025-05-10T00:46:08.548819332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:08.549828 env[1415]: time="2025-05-10T00:46:08.548857933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:08.549828 env[1415]: time="2025-05-10T00:46:08.548871433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:08.549828 env[1415]: time="2025-05-10T00:46:08.549010437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d1bb0894ae006f4a4763387e7dd8f71fb5d479225c43b114e0f9725898493aa pid=2059 runtime=io.containerd.runc.v2 May 10 00:46:08.558927 env[1415]: time="2025-05-10T00:46:08.558669199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:08.558927 env[1415]: time="2025-05-10T00:46:08.558713601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:08.558927 env[1415]: time="2025-05-10T00:46:08.558728501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:08.559347 env[1415]: time="2025-05-10T00:46:08.559291916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56d329491eeb73bb3ccd4015634411e6ab663981625f3215aefb7f9b096e296b pid=2076 runtime=io.containerd.runc.v2 May 10 00:46:08.573605 systemd[1]: Started cri-containerd-1d1bb0894ae006f4a4763387e7dd8f71fb5d479225c43b114e0f9725898493aa.scope. May 10 00:46:08.593514 systemd[1]: Started cri-containerd-56d329491eeb73bb3ccd4015634411e6ab663981625f3215aefb7f9b096e296b.scope. May 10 00:46:08.605323 env[1415]: time="2025-05-10T00:46:08.603641521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:08.605323 env[1415]: time="2025-05-10T00:46:08.603723323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:08.605323 env[1415]: time="2025-05-10T00:46:08.603754224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:08.605323 env[1415]: time="2025-05-10T00:46:08.603925128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1c974cf11dbec553be5524bbb04ffda4b3db5182c60a9615e440657ed27458a pid=2122 runtime=io.containerd.runc.v2 May 10 00:46:08.624835 systemd[1]: Started cri-containerd-b1c974cf11dbec553be5524bbb04ffda4b3db5182c60a9615e440657ed27458a.scope. 
May 10 00:46:08.660553 env[1415]: time="2025-05-10T00:46:08.659634641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-8a4b3429d2,Uid:69e33ff635b634e6aec701d4de04d300,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d1bb0894ae006f4a4763387e7dd8f71fb5d479225c43b114e0f9725898493aa\"" May 10 00:46:08.670263 env[1415]: time="2025-05-10T00:46:08.670209828Z" level=info msg="CreateContainer within sandbox \"1d1bb0894ae006f4a4763387e7dd8f71fb5d479225c43b114e0f9725898493aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:46:08.680416 env[1415]: time="2025-05-10T00:46:08.680374504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-8a4b3429d2,Uid:7a996196c9b694e233d8b832ebc71efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d329491eeb73bb3ccd4015634411e6ab663981625f3215aefb7f9b096e296b\"" May 10 00:46:08.684563 env[1415]: time="2025-05-10T00:46:08.684524317Z" level=info msg="CreateContainer within sandbox \"56d329491eeb73bb3ccd4015634411e6ab663981625f3215aefb7f9b096e296b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:46:08.704805 env[1415]: time="2025-05-10T00:46:08.704759466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-8a4b3429d2,Uid:1aa3be9227c6a2e90f39f9bcf4efca8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c974cf11dbec553be5524bbb04ffda4b3db5182c60a9615e440657ed27458a\"" May 10 00:46:08.707465 env[1415]: time="2025-05-10T00:46:08.707430339Z" level=info msg="CreateContainer within sandbox \"b1c974cf11dbec553be5524bbb04ffda4b3db5182c60a9615e440657ed27458a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:46:08.766789 env[1415]: time="2025-05-10T00:46:08.766731149Z" level=info msg="CreateContainer within sandbox \"1d1bb0894ae006f4a4763387e7dd8f71fb5d479225c43b114e0f9725898493aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2faa2c96e32976d90784b496841a79ace2925cc3ee310c00453b9e8e496fa41d\"" May 10 00:46:08.767572 env[1415]: time="2025-05-10T00:46:08.767529971Z" level=info msg="StartContainer for \"2faa2c96e32976d90784b496841a79ace2925cc3ee310c00453b9e8e496fa41d\"" May 10 00:46:08.778483 env[1415]: time="2025-05-10T00:46:08.778439367Z" level=info msg="CreateContainer within sandbox \"56d329491eeb73bb3ccd4015634411e6ab663981625f3215aefb7f9b096e296b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65c4ff10552c40807cee67b3eaa867e889ec0895dd52c245758c655c0c493a63\"" May 10 00:46:08.779288 env[1415]: time="2025-05-10T00:46:08.779242789Z" level=info msg="StartContainer for \"65c4ff10552c40807cee67b3eaa867e889ec0895dd52c245758c655c0c493a63\"" May 10 00:46:08.786385 env[1415]: time="2025-05-10T00:46:08.784434530Z" level=info msg="CreateContainer within sandbox \"b1c974cf11dbec553be5524bbb04ffda4b3db5182c60a9615e440657ed27458a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9571a93c9f89c0ffb96a419f1ffa11ead95d42f8774814c584f739175f66c34d\"" May 10 00:46:08.785505 systemd[1]: Started cri-containerd-2faa2c96e32976d90784b496841a79ace2925cc3ee310c00453b9e8e496fa41d.scope. 
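The three steps visible here are the standard CRI order: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox, StartContainer runs it. The same sequence can be driven by hand with crictl; a sketch that assumes pod.json and container.json spec files, which are not part of this log:

    import subprocess

    def run(cmd):
        """Run a crictl command and return its stdout (the ID it prints)."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

    pod_id = run(["crictl", "runp", "pod.json"])                              # RunPodSandbox
    ctr_id = run(["crictl", "create", pod_id, "container.json", "pod.json"])  # CreateContainer
    run(["crictl", "start", ctr_id])                                          # StartContainer
    print(f"sandbox={pod_id} container={ctr_id}")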
May 10 00:46:08.793179 env[1415]: time="2025-05-10T00:46:08.791886332Z" level=info msg="StartContainer for \"9571a93c9f89c0ffb96a419f1ffa11ead95d42f8774814c584f739175f66c34d\""
May 10 00:46:08.837207 systemd[1]: Started cri-containerd-9571a93c9f89c0ffb96a419f1ffa11ead95d42f8774814c584f739175f66c34d.scope.
May 10 00:46:08.848446 systemd[1]: Started cri-containerd-65c4ff10552c40807cee67b3eaa867e889ec0895dd52c245758c655c0c493a63.scope.
May 10 00:46:08.885120 env[1415]: time="2025-05-10T00:46:08.885047962Z" level=info msg="StartContainer for \"2faa2c96e32976d90784b496841a79ace2925cc3ee310c00453b9e8e496fa41d\" returns successfully"
May 10 00:46:08.920984 env[1415]: time="2025-05-10T00:46:08.920931237Z" level=info msg="StartContainer for \"9571a93c9f89c0ffb96a419f1ffa11ead95d42f8774814c584f739175f66c34d\" returns successfully"
May 10 00:46:09.023955 env[1415]: time="2025-05-10T00:46:09.023900215Z" level=info msg="StartContainer for \"65c4ff10552c40807cee67b3eaa867e889ec0895dd52c245758c655c0c493a63\" returns successfully"
May 10 00:46:10.360959 kubelet[2019]: I0510 00:46:10.360918 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:11.292784 kubelet[2019]: E0510 00:46:11.292736 2019 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-8a4b3429d2\" not found" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:11.486853 kubelet[2019]: I0510 00:46:11.486815 2019 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-8a4b3429d2"
May 10 00:46:11.487388 kubelet[2019]: E0510 00:46:11.487366 2019 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-n-8a4b3429d2\": node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:11.518465 kubelet[2019]: E0510 00:46:11.518416 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:11.619051 kubelet[2019]: E0510 00:46:11.618937 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:11.719866 kubelet[2019]: E0510 00:46:11.719812 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:11.820746 kubelet[2019]: E0510 00:46:11.820684 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:11.921372 kubelet[2019]: E0510 00:46:11.921231 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:12.021543 kubelet[2019]: E0510 00:46:12.021471 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:12.122686 kubelet[2019]: E0510 00:46:12.122626 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:12.222860 kubelet[2019]: E0510 00:46:12.222724 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:12.323503 kubelet[2019]: E0510 00:46:12.323457 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8a4b3429d2\" not found"
May 10 00:46:12.881986 kubelet[2019]: I0510 00:46:12.881945 2019 apiserver.go:52] "Watching apiserver"
May 10 00:46:12.901748 kubelet[2019]: I0510 00:46:12.901712 2019 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 10 00:46:13.422690 systemd[1]: Reloading.
May 10 00:46:13.495621 /usr/lib/systemd/system-generators/torcx-generator[2320]: time="2025-05-10T00:46:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:46:13.496126 /usr/lib/systemd/system-generators/torcx-generator[2320]: time="2025-05-10T00:46:13Z" level=info msg="torcx already run"
May 10 00:46:13.595912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:46:13.595934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:46:13.612680 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:46:13.725229 systemd[1]: Stopping kubelet.service...
May 10 00:46:13.740753 systemd[1]: kubelet.service: Deactivated successfully.
May 10 00:46:13.740980 systemd[1]: Stopped kubelet.service.
May 10 00:46:13.743065 systemd[1]: Starting kubelet.service...
May 10 00:46:13.860437 systemd[1]: Started kubelet.service.
May 10 00:46:13.901350 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:46:13.901350 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 00:46:13.901350 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:46:13.901823 kubelet[2382]: I0510 00:46:13.901414 2382 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 00:46:13.908615 kubelet[2382]: I0510 00:46:13.908577 2382 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 10 00:46:13.908615 kubelet[2382]: I0510 00:46:13.908609 2382 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 00:46:13.909104 kubelet[2382]: I0510 00:46:13.909081 2382 server.go:929] "Client rotation is on, will bootstrap in background"
May 10 00:46:13.911281 kubelet[2382]: I0510 00:46:13.911255 2382 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
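After this restart the kubelet skips bootstrap and loads its rotated client certificate directly (the kubelet-client-current.pem entry above). One way to inspect that file's subject and validity window, assuming openssl is on the host's PATH:

    import subprocess

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"  # path from the log entry above

    out = subprocess.run(
        ["openssl", "x509", "-in", PEM, "-noout", "-subject", "-enddate"],
        capture_output=True, text=True,
    )
    print(out.stdout or out.stderr)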
May 10 00:46:13.913133 kubelet[2382]: I0510 00:46:13.913110 2382 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:46:13.916150 kubelet[2382]: E0510 00:46:13.916120 2382 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:46:13.916150 kubelet[2382]: I0510 00:46:13.916145 2382 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:46:13.919808 kubelet[2382]: I0510 00:46:13.919785 2382 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:46:13.919953 kubelet[2382]: I0510 00:46:13.919936 2382 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:46:13.920139 kubelet[2382]: I0510 00:46:13.920103 2382 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:46:13.920354 kubelet[2382]: I0510 00:46:13.920136 2382 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-8a4b3429d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:46:13.920498 kubelet[2382]: I0510 00:46:13.920362 2382 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:46:13.920498 kubelet[2382]: I0510 00:46:13.920378 2382 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:46:13.920498 kubelet[2382]: I0510 00:46:13.920437 2382 state_mem.go:36] "Initialized new in-memory state store" May 10 00:46:13.920629 kubelet[2382]: I0510 00:46:13.920613 2382 kubelet.go:408] "Attempting to sync node with API server" May 10 00:46:13.920629 kubelet[2382]: I0510 00:46:13.920628 2382 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:46:13.921180 kubelet[2382]: I0510 
00:46:13.921063 2382 kubelet.go:314] "Adding apiserver pod source" May 10 00:46:13.922211 kubelet[2382]: I0510 00:46:13.922193 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:46:13.926982 kubelet[2382]: I0510 00:46:13.926961 2382 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:46:13.927487 kubelet[2382]: I0510 00:46:13.927468 2382 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:46:13.927953 kubelet[2382]: I0510 00:46:13.927935 2382 server.go:1269] "Started kubelet" May 10 00:46:13.930223 kubelet[2382]: I0510 00:46:13.930201 2382 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:46:13.931362 kubelet[2382]: I0510 00:46:13.931344 2382 server.go:460] "Adding debug handlers to kubelet server" May 10 00:46:13.933731 kubelet[2382]: I0510 00:46:13.933677 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:46:13.934126 kubelet[2382]: I0510 00:46:13.934108 2382 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:46:13.941139 kubelet[2382]: I0510 00:46:13.941114 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:46:13.943325 kubelet[2382]: E0510 00:46:13.943296 2382 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:46:13.944957 kubelet[2382]: I0510 00:46:13.944937 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:46:13.952594 kubelet[2382]: I0510 00:46:13.952575 2382 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:46:13.952803 kubelet[2382]: I0510 00:46:13.952784 2382 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:46:13.954399 kubelet[2382]: I0510 00:46:13.954375 2382 factory.go:221] Registration of the containerd container factory successfully May 10 00:46:13.954399 kubelet[2382]: I0510 00:46:13.954397 2382 factory.go:221] Registration of the systemd container factory successfully May 10 00:46:13.954525 kubelet[2382]: I0510 00:46:13.954464 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:46:13.954608 kubelet[2382]: I0510 00:46:13.954386 2382 reconciler.go:26] "Reconciler: start to sync state" May 10 00:46:13.956382 kubelet[2382]: I0510 00:46:13.956353 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:46:13.958522 kubelet[2382]: I0510 00:46:13.958502 2382 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:46:13.958655 kubelet[2382]: I0510 00:46:13.958644 2382 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:46:13.958752 kubelet[2382]: I0510 00:46:13.958742 2382 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:46:13.958871 kubelet[2382]: E0510 00:46:13.958854 2382 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.000937 2382 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.000956 2382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.000988 2382 state_mem.go:36] "Initialized new in-memory state store" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.001176 2382 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.001191 2382 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.001238 2382 policy_none.go:49] "None policy: Start" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.001879 2382 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:46:14.366753 kubelet[2382]: I0510 00:46:14.001926 2382 state_mem.go:35] "Initializing new in-memory state store" May 10 00:46:14.366753 kubelet[2382]: E0510 00:46:14.059652 2382 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 00:46:14.366753 kubelet[2382]: E0510 00:46:14.259769 2382 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 00:46:14.367560 kubelet[2382]: I0510 00:46:14.367536 2382 state_mem.go:75] "Updated machine memory state" May 10 00:46:14.377092 kubelet[2382]: I0510 00:46:14.377053 2382 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:46:14.377303 kubelet[2382]: I0510 00:46:14.377287 2382 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:46:14.377384 kubelet[2382]: I0510 00:46:14.377307 2382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:46:14.378838 kubelet[2382]: I0510 00:46:14.378484 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:46:14.487821 sudo[2413]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:46:14.488215 sudo[2413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:46:14.493679 kubelet[2382]: I0510 00:46:14.493646 2382 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.513764 kubelet[2382]: I0510 00:46:14.513731 2382 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.513936 kubelet[2382]: I0510 00:46:14.513829 2382 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.674714 kubelet[2382]: W0510 00:46:14.674613 2382 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:46:14.679567 kubelet[2382]: W0510 00:46:14.679532 2382 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:46:14.679927 kubelet[2382]: W0510 00:46:14.679906 2382 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:46:14.759317 kubelet[2382]: I0510 00:46:14.759280 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69e33ff635b634e6aec701d4de04d300-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8a4b3429d2\" (UID: \"69e33ff635b634e6aec701d4de04d300\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759317 kubelet[2382]: I0510 00:46:14.759317 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759559 kubelet[2382]: I0510 00:46:14.759348 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759559 kubelet[2382]: I0510 00:46:14.759369 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759559 kubelet[2382]: I0510 00:46:14.759394 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1aa3be9227c6a2e90f39f9bcf4efca8d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-8a4b3429d2\" (UID: \"1aa3be9227c6a2e90f39f9bcf4efca8d\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759559 kubelet[2382]: I0510 00:46:14.759415 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69e33ff635b634e6aec701d4de04d300-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8a4b3429d2\" (UID: \"69e33ff635b634e6aec701d4de04d300\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759559 kubelet[2382]: I0510 00:46:14.759434 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69e33ff635b634e6aec701d4de04d300-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-8a4b3429d2\" (UID: \"69e33ff635b634e6aec701d4de04d300\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759764 kubelet[2382]: I0510 00:46:14.759454 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.759764 kubelet[2382]: I0510 00:46:14.759476 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a996196c9b694e233d8b832ebc71efd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-8a4b3429d2\" (UID: \"7a996196c9b694e233d8b832ebc71efd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2" May 10 00:46:14.922989 kubelet[2382]: I0510 00:46:14.922903 2382 apiserver.go:52] "Watching apiserver" May 10 00:46:14.954187 kubelet[2382]: I0510 00:46:14.954066 2382 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:46:15.012649 kubelet[2382]: I0510 00:46:15.012572 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8a4b3429d2" podStartSLOduration=1.012547381 podStartE2EDuration="1.012547381s" podCreationTimestamp="2025-05-10 00:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:15.012198473 +0000 UTC m=+1.145825169" watchObservedRunningTime="2025-05-10 00:46:15.012547381 +0000 UTC m=+1.146174077" May 10 00:46:15.024369 kubelet[2382]: I0510 00:46:15.024310 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8a4b3429d2" podStartSLOduration=1.024288644 podStartE2EDuration="1.024288644s" podCreationTimestamp="2025-05-10 00:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:15.023812333 +0000 UTC m=+1.157438929" watchObservedRunningTime="2025-05-10 00:46:15.024288644 +0000 UTC m=+1.157915240" May 10 00:46:15.050468 sudo[2413]: pam_unix(sudo:session): session closed for user root May 10 00:46:15.054384 kubelet[2382]: I0510 00:46:15.054327 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8a4b3429d2" podStartSLOduration=1.054307617 podStartE2EDuration="1.054307617s" podCreationTimestamp="2025-05-10 00:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:15.039778691 +0000 UTC m=+1.173405387" watchObservedRunningTime="2025-05-10 00:46:15.054307617 +0000 UTC m=+1.187934313" May 10 00:46:16.656869 sudo[1719]: pam_unix(sudo:session): session closed for user root May 10 00:46:16.758513 sshd[1716]: pam_unix(sshd:session): session closed for user core May 10 00:46:16.762030 systemd[1]: sshd@4-10.200.8.31:22-10.200.16.10:52922.service: Deactivated successfully. May 10 00:46:16.763198 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:46:16.763419 systemd[1]: session-7.scope: Consumed 4.735s CPU time. May 10 00:46:16.764087 systemd-logind[1399]: Session 7 logged out. Waiting for processes to exit. May 10 00:46:16.765210 systemd-logind[1399]: Removed session 7. 
May 10 00:46:19.774974 kubelet[2382]: I0510 00:46:19.774927 2382 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:46:19.775609 env[1415]: time="2025-05-10T00:46:19.775550095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:46:19.776094 kubelet[2382]: I0510 00:46:19.776068 2382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:46:20.624495 systemd[1]: Created slice kubepods-besteffort-pod6aafa23b_4814_4e17_8201_a93da898bc0d.slice. May 10 00:46:20.638812 systemd[1]: Created slice kubepods-burstable-podd240e79c_1cc3_4700_812e_a7704ff947ac.slice. May 10 00:46:20.749946 systemd[1]: Created slice kubepods-besteffort-poda823b265_11be_4831_8e9d_241a2f61afed.slice. May 10 00:46:20.796655 kubelet[2382]: I0510 00:46:20.796607 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hlb\" (UniqueName: \"kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-kube-api-access-n6hlb\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.796655 kubelet[2382]: I0510 00:46:20.796658 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-bpf-maps\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797153 kubelet[2382]: I0510 00:46:20.796682 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-xtables-lock\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797153 kubelet[2382]: I0510 00:46:20.796701 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-lib-modules\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797153 kubelet[2382]: I0510 00:46:20.796722 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6aafa23b-4814-4e17-8201-a93da898bc0d-xtables-lock\") pod \"kube-proxy-bg49k\" (UID: \"6aafa23b-4814-4e17-8201-a93da898bc0d\") " pod="kube-system/kube-proxy-bg49k" May 10 00:46:20.797153 kubelet[2382]: I0510 00:46:20.796740 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-etc-cni-netd\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797153 kubelet[2382]: I0510 00:46:20.796760 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-net\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797153 kubelet[2382]: I0510 00:46:20.796783 2382 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-hubble-tls\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797405 kubelet[2382]: I0510 00:46:20.796807 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-config-path\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797405 kubelet[2382]: I0510 00:46:20.796836 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-run\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797405 kubelet[2382]: I0510 00:46:20.796857 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6aafa23b-4814-4e17-8201-a93da898bc0d-kube-proxy\") pod \"kube-proxy-bg49k\" (UID: \"6aafa23b-4814-4e17-8201-a93da898bc0d\") " pod="kube-system/kube-proxy-bg49k" May 10 00:46:20.797405 kubelet[2382]: I0510 00:46:20.796878 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-hostproc\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797405 kubelet[2382]: I0510 00:46:20.796898 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cni-path\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797405 kubelet[2382]: I0510 00:46:20.796920 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6aafa23b-4814-4e17-8201-a93da898bc0d-lib-modules\") pod \"kube-proxy-bg49k\" (UID: \"6aafa23b-4814-4e17-8201-a93da898bc0d\") " pod="kube-system/kube-proxy-bg49k" May 10 00:46:20.797551 kubelet[2382]: I0510 00:46:20.796946 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-cgroup\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797551 kubelet[2382]: I0510 00:46:20.796973 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnl87\" (UniqueName: \"kubernetes.io/projected/6aafa23b-4814-4e17-8201-a93da898bc0d-kube-api-access-cnl87\") pod \"kube-proxy-bg49k\" (UID: \"6aafa23b-4814-4e17-8201-a93da898bc0d\") " pod="kube-system/kube-proxy-bg49k" May 10 00:46:20.797551 kubelet[2382]: I0510 00:46:20.796994 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/d240e79c-1cc3-4700-812e-a7704ff947ac-clustermesh-secrets\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.797551 kubelet[2382]: I0510 00:46:20.797016 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-kernel\") pod \"cilium-22jf8\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " pod="kube-system/cilium-22jf8" May 10 00:46:20.898075 kubelet[2382]: I0510 00:46:20.897928 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a823b265-11be-4831-8e9d-241a2f61afed-cilium-config-path\") pod \"cilium-operator-5d85765b45-z8m4l\" (UID: \"a823b265-11be-4831-8e9d-241a2f61afed\") " pod="kube-system/cilium-operator-5d85765b45-z8m4l" May 10 00:46:20.898612 kubelet[2382]: I0510 00:46:20.898580 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cddk2\" (UniqueName: \"kubernetes.io/projected/a823b265-11be-4831-8e9d-241a2f61afed-kube-api-access-cddk2\") pod \"cilium-operator-5d85765b45-z8m4l\" (UID: \"a823b265-11be-4831-8e9d-241a2f61afed\") " pod="kube-system/cilium-operator-5d85765b45-z8m4l" May 10 00:46:20.899539 kubelet[2382]: I0510 00:46:20.899505 2382 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 10 00:46:20.939270 env[1415]: time="2025-05-10T00:46:20.938890639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bg49k,Uid:6aafa23b-4814-4e17-8201-a93da898bc0d,Namespace:kube-system,Attempt:0,}" May 10 00:46:20.943943 env[1415]: time="2025-05-10T00:46:20.943903338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22jf8,Uid:d240e79c-1cc3-4700-812e-a7704ff947ac,Namespace:kube-system,Attempt:0,}" May 10 00:46:21.008194 env[1415]: time="2025-05-10T00:46:21.008103094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:21.008431 env[1415]: time="2025-05-10T00:46:21.008400599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:21.008574 env[1415]: time="2025-05-10T00:46:21.008546102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:21.008917 env[1415]: time="2025-05-10T00:46:21.008872108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/833bf728ba2c4f1edd56af9e25ecfcb34beeacd5c1deecc643c2af0a3e403b61 pid=2464 runtime=io.containerd.runc.v2 May 10 00:46:21.015765 env[1415]: time="2025-05-10T00:46:21.015704339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:21.015915 env[1415]: time="2025-05-10T00:46:21.015889343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:21.016101 env[1415]: time="2025-05-10T00:46:21.016068746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:21.016394 env[1415]: time="2025-05-10T00:46:21.016324151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514 pid=2482 runtime=io.containerd.runc.v2 May 10 00:46:21.035703 systemd[1]: Started cri-containerd-833bf728ba2c4f1edd56af9e25ecfcb34beeacd5c1deecc643c2af0a3e403b61.scope. May 10 00:46:21.055621 systemd[1]: Started cri-containerd-34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514.scope. May 10 00:46:21.056575 env[1415]: time="2025-05-10T00:46:21.056536619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-z8m4l,Uid:a823b265-11be-4831-8e9d-241a2f61afed,Namespace:kube-system,Attempt:0,}" May 10 00:46:21.089322 env[1415]: time="2025-05-10T00:46:21.089273845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bg49k,Uid:6aafa23b-4814-4e17-8201-a93da898bc0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"833bf728ba2c4f1edd56af9e25ecfcb34beeacd5c1deecc643c2af0a3e403b61\"" May 10 00:46:21.093903 env[1415]: time="2025-05-10T00:46:21.093850933Z" level=info msg="CreateContainer within sandbox \"833bf728ba2c4f1edd56af9e25ecfcb34beeacd5c1deecc643c2af0a3e403b61\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:46:21.100911 env[1415]: time="2025-05-10T00:46:21.100868567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22jf8,Uid:d240e79c-1cc3-4700-812e-a7704ff947ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\"" May 10 00:46:21.103900 env[1415]: time="2025-05-10T00:46:21.103847724Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:46:21.132757 env[1415]: time="2025-05-10T00:46:21.132690075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:21.132972 env[1415]: time="2025-05-10T00:46:21.132725576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:21.132972 env[1415]: time="2025-05-10T00:46:21.132738876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:21.132972 env[1415]: time="2025-05-10T00:46:21.132866078Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d pid=2550 runtime=io.containerd.runc.v2 May 10 00:46:21.146110 systemd[1]: Started cri-containerd-87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d.scope. 
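
Both pod sandboxes come up as runc v2 shims, each wrapped in a transient cri-containerd-<id>.scope unit; the "starting signal loop" lines with pid=2464 and pid=2482 are those shim processes announcing themselves. To correlate the long sandbox IDs with pods from the node itself, something like the following works, assuming crictl is pointed at the same containerd socket:

    crictl pods --name kube-proxy-bg49k    # sandbox ID, state, namespace
    crictl inspectp 833bf728ba2c4          # full sandbox detail; an ID prefix is enough
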
May 10 00:46:21.171405 env[1415]: time="2025-05-10T00:46:21.171296813Z" level=info msg="CreateContainer within sandbox \"833bf728ba2c4f1edd56af9e25ecfcb34beeacd5c1deecc643c2af0a3e403b61\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc3becf387637b8a386951e4b49da7826169d9fcce98314cae497b27aacdfdcd\"" May 10 00:46:21.172630 env[1415]: time="2025-05-10T00:46:21.172599038Z" level=info msg="StartContainer for \"dc3becf387637b8a386951e4b49da7826169d9fcce98314cae497b27aacdfdcd\"" May 10 00:46:21.196378 systemd[1]: Started cri-containerd-dc3becf387637b8a386951e4b49da7826169d9fcce98314cae497b27aacdfdcd.scope. May 10 00:46:21.203719 env[1415]: time="2025-05-10T00:46:21.203672532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-z8m4l,Uid:a823b265-11be-4831-8e9d-241a2f61afed,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d\"" May 10 00:46:21.240763 env[1415]: time="2025-05-10T00:46:21.240704940Z" level=info msg="StartContainer for \"dc3becf387637b8a386951e4b49da7826169d9fcce98314cae497b27aacdfdcd\" returns successfully" May 10 00:46:22.020786 kubelet[2382]: I0510 00:46:22.019072 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bg49k" podStartSLOduration=2.019047308 podStartE2EDuration="2.019047308s" podCreationTimestamp="2025-05-10 00:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:22.018878805 +0000 UTC m=+8.152505401" watchObservedRunningTime="2025-05-10 00:46:22.019047308 +0000 UTC m=+8.152673904" May 10 00:46:26.779810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713034050.mount: Deactivated successfully. 
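
The podStartSLOduration figures emitted by pod_startup_latency_tracker exclude image-pull time; for kube-proxy the SLO and E2E durations coincide (2.019s) because firstStartedPulling/lastFinishedPulling are zero, meaning the image was already on disk. The same data is exported as Prometheus histograms; a hedged way to read them through the API server proxy (metric name as of recent kubelet releases):

    kubectl get --raw "/api/v1/nodes/ci-3510.3.7-n-8a4b3429d2/proxy/metrics" \
      | grep kubelet_pod_start_duration_seconds
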
May 10 00:46:29.639543 env[1415]: time="2025-05-10T00:46:29.639459076Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:29.644945 env[1415]: time="2025-05-10T00:46:29.644889560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:29.649718 env[1415]: time="2025-05-10T00:46:29.649680635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:29.650329 env[1415]: time="2025-05-10T00:46:29.650296845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:46:29.658518 env[1415]: time="2025-05-10T00:46:29.658099566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:46:29.659445 env[1415]: time="2025-05-10T00:46:29.659412387Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:46:29.698999 env[1415]: time="2025-05-10T00:46:29.698954803Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\"" May 10 00:46:29.699522 env[1415]: time="2025-05-10T00:46:29.699486111Z" level=info msg="StartContainer for \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\"" May 10 00:46:29.726169 systemd[1]: Started cri-containerd-31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa.scope. May 10 00:46:29.759277 env[1415]: time="2025-05-10T00:46:29.759232742Z" level=info msg="StartContainer for \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\" returns successfully" May 10 00:46:29.767090 systemd[1]: cri-containerd-31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa.scope: Deactivated successfully. May 10 00:46:30.686763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa-rootfs.mount: Deactivated successfully. 
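
The Cilium agent image was requested by tag and digest together; when both are present the digest is authoritative and the tag is cosmetic, which is why the ImageUpdate event and the returned image reference are keyed by sha256 rather than by v1.12.5. A spot-check sketch from the node:

    # no-op if already cached; resolves strictly by digest
    crictl pull quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
    crictl images --digests | grep cilium

The roughly 8.5 seconds between the PullImage request (00:46:21) and its return (00:46:29) is what later separates this pod's SLO duration from its E2E duration.
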
May 10 00:46:33.487765 env[1415]: time="2025-05-10T00:46:33.487710101Z" level=info msg="shim disconnected" id=31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa May 10 00:46:33.488337 env[1415]: time="2025-05-10T00:46:33.487763102Z" level=warning msg="cleaning up after shim disconnected" id=31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa namespace=k8s.io May 10 00:46:33.488337 env[1415]: time="2025-05-10T00:46:33.487789102Z" level=info msg="cleaning up dead shim" May 10 00:46:33.495863 env[1415]: time="2025-05-10T00:46:33.495817416Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2801 runtime=io.containerd.runc.v2\n" May 10 00:46:34.020154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547798627.mount: Deactivated successfully. May 10 00:46:34.037729 env[1415]: time="2025-05-10T00:46:34.037668658Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:46:34.119551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97091624.mount: Deactivated successfully. May 10 00:46:34.180675 env[1415]: time="2025-05-10T00:46:34.180617430Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\"" May 10 00:46:34.217010 env[1415]: time="2025-05-10T00:46:34.216963731Z" level=info msg="StartContainer for \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\"" May 10 00:46:34.239559 systemd[1]: Started cri-containerd-9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434.scope. May 10 00:46:34.371021 env[1415]: time="2025-05-10T00:46:34.370913154Z" level=info msg="StartContainer for \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\" returns successfully" May 10 00:46:34.376483 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:46:34.376825 systemd[1]: Stopped systemd-sysctl.service. May 10 00:46:34.377597 systemd[1]: Stopping systemd-sysctl.service... May 10 00:46:34.379841 systemd[1]: Starting systemd-sysctl.service... May 10 00:46:34.380988 systemd[1]: cri-containerd-9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434.scope: Deactivated successfully. May 10 00:46:34.395699 systemd[1]: Finished systemd-sysctl.service. 
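
"shim disconnected" here is routine teardown, not a failure: mount-cgroup is an init container that runs to completion, after which its scope is deactivated and containerd reaps the shim, logging the cleanup warnings above. Every running container on this host is one such transient scope under its pod's slice; on a cgroup-v2 system like this one (CgroupVersion 2 in the nodeConfig) the hierarchy can be walked directly:

    systemd-cgls /sys/fs/cgroup/kubepods.slice   # pod slices with cri-containerd-<id>.scope leaves
    systemctl status 'cri-containerd-*.scope'    # glob over the live container scopes
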
May 10 00:46:34.432690 env[1415]: time="2025-05-10T00:46:34.432635706Z" level=info msg="shim disconnected" id=9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434 May 10 00:46:34.433063 env[1415]: time="2025-05-10T00:46:34.433036311Z" level=warning msg="cleaning up after shim disconnected" id=9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434 namespace=k8s.io May 10 00:46:34.433220 env[1415]: time="2025-05-10T00:46:34.433201613Z" level=info msg="cleaning up dead shim" May 10 00:46:34.453134 env[1415]: time="2025-05-10T00:46:34.453090288Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2866 runtime=io.containerd.runc.v2\n" May 10 00:46:35.040926 env[1415]: time="2025-05-10T00:46:35.040878481Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:46:35.072937 env[1415]: time="2025-05-10T00:46:35.072873912Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:35.099816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049543649.mount: Deactivated successfully. May 10 00:46:35.107074 env[1415]: time="2025-05-10T00:46:35.107035772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:35.120633 env[1415]: time="2025-05-10T00:46:35.120586155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:46:35.120877 env[1415]: time="2025-05-10T00:46:35.120843858Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 00:46:35.124099 env[1415]: time="2025-05-10T00:46:35.123424193Z" level=info msg="CreateContainer within sandbox \"87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:46:35.129583 env[1415]: time="2025-05-10T00:46:35.129544975Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\"" May 10 00:46:35.130189 env[1415]: time="2025-05-10T00:46:35.130111883Z" level=info msg="StartContainer for \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\"" May 10 00:46:35.155143 systemd[1]: Started cri-containerd-82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5.scope. 
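
The next init step, mount-bpf-fs, exists so that Cilium's pinned eBPF maps survive agent restarts. Its effect on the host is equivalent to the following (a sketch of the outcome, not the container's literal entrypoint):

    mount -t bpf bpf /sys/fs/bpf    # mount the BPF filesystem, if not already mounted
    mount | grep /sys/fs/bpf        # verify
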
May 10 00:46:35.170528 env[1415]: time="2025-05-10T00:46:35.170473727Z" level=info msg="CreateContainer within sandbox \"87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\"" May 10 00:46:35.174274 env[1415]: time="2025-05-10T00:46:35.174232177Z" level=info msg="StartContainer for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\"" May 10 00:46:35.196210 systemd[1]: cri-containerd-82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5.scope: Deactivated successfully. May 10 00:46:35.205632 env[1415]: time="2025-05-10T00:46:35.205576099Z" level=info msg="StartContainer for \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\" returns successfully" May 10 00:46:35.210948 systemd[1]: Started cri-containerd-06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588.scope. May 10 00:46:35.675134 env[1415]: time="2025-05-10T00:46:35.675076722Z" level=info msg="StartContainer for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" returns successfully" May 10 00:46:35.683004 env[1415]: time="2025-05-10T00:46:35.682956328Z" level=info msg="shim disconnected" id=82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5 May 10 00:46:35.683316 env[1415]: time="2025-05-10T00:46:35.683283732Z" level=warning msg="cleaning up after shim disconnected" id=82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5 namespace=k8s.io May 10 00:46:35.683444 env[1415]: time="2025-05-10T00:46:35.683428034Z" level=info msg="cleaning up dead shim" May 10 00:46:35.698874 env[1415]: time="2025-05-10T00:46:35.698836342Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2962 runtime=io.containerd.runc.v2\n" May 10 00:46:36.010638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5-rootfs.mount: Deactivated successfully. May 10 00:46:36.048359 env[1415]: time="2025-05-10T00:46:36.048306333Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:46:36.099362 env[1415]: time="2025-05-10T00:46:36.099311303Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\"" May 10 00:46:36.100376 env[1415]: time="2025-05-10T00:46:36.100341717Z" level=info msg="StartContainer for \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\"" May 10 00:46:36.136004 systemd[1]: Started cri-containerd-9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd.scope. May 10 00:46:36.242443 env[1415]: time="2025-05-10T00:46:36.242384085Z" level=info msg="StartContainer for \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\" returns successfully" May 10 00:46:36.244837 systemd[1]: cri-containerd-9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd.scope: Deactivated successfully. 
May 10 00:46:36.305995 env[1415]: time="2025-05-10T00:46:36.305852520Z" level=info msg="shim disconnected" id=9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd May 10 00:46:36.305995 env[1415]: time="2025-05-10T00:46:36.305915120Z" level=warning msg="cleaning up after shim disconnected" id=9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd namespace=k8s.io May 10 00:46:36.305995 env[1415]: time="2025-05-10T00:46:36.305927221Z" level=info msg="cleaning up dead shim" May 10 00:46:36.318764 env[1415]: time="2025-05-10T00:46:36.318712989Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:46:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3017 runtime=io.containerd.runc.v2\n" May 10 00:46:36.333558 kubelet[2382]: I0510 00:46:36.333486 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-z8m4l" podStartSLOduration=2.419945832 podStartE2EDuration="16.333459383s" podCreationTimestamp="2025-05-10 00:46:20 +0000 UTC" firstStartedPulling="2025-05-10 00:46:21.208364721 +0000 UTC m=+7.341991417" lastFinishedPulling="2025-05-10 00:46:35.121878372 +0000 UTC m=+21.255504968" observedRunningTime="2025-05-10 00:46:36.127507374 +0000 UTC m=+22.261133970" watchObservedRunningTime="2025-05-10 00:46:36.333459383 +0000 UTC m=+22.467086479" May 10 00:46:37.010647 systemd[1]: run-containerd-runc-k8s.io-9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd-runc.Sz7zco.mount: Deactivated successfully. May 10 00:46:37.010787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd-rootfs.mount: Deactivated successfully. May 10 00:46:37.055006 env[1415]: time="2025-05-10T00:46:37.054944355Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:46:37.088686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171955513.mount: Deactivated successfully. May 10 00:46:37.107258 env[1415]: time="2025-05-10T00:46:37.107198826Z" level=info msg="CreateContainer within sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\"" May 10 00:46:37.107906 env[1415]: time="2025-05-10T00:46:37.107822534Z" level=info msg="StartContainer for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\"" May 10 00:46:37.138367 systemd[1]: Started cri-containerd-631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3.scope. May 10 00:46:37.190503 env[1415]: time="2025-05-10T00:46:37.190451696Z" level=info msg="StartContainer for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" returns successfully" May 10 00:46:37.359115 kubelet[2382]: I0510 00:46:37.358252 2382 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 10 00:46:37.412134 systemd[1]: Created slice kubepods-burstable-podc1df6ddf_ba71_4a54_9b76_83bbada5ee87.slice. May 10 00:46:37.422292 systemd[1]: Created slice kubepods-burstable-pod74cad95f_458c_487e_bfe9_f2ec230fb644.slice. 
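
Taken together, the container names in this stretch trace Cilium's init chain; as a pod-spec sketch (names straight from the log, images and commands omitted):

    initContainers:                  # run sequentially, each to completion
      - name: mount-cgroup
      - name: apply-sysctl-overwrites
      - name: mount-bpf-fs
      - name: clean-cilium-state
    containers:
      - name: cilium-agent           # starts only after all init containers succeed

Once cilium-agent is up, the kubelet fast-updates the node to Ready, and the two coredns pods that had been pending on a working CNI get their burstable slices created within the second.
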
May 10 00:46:37.423283 kubelet[2382]: I0510 00:46:37.423256 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1df6ddf-ba71-4a54-9b76-83bbada5ee87-config-volume\") pod \"coredns-6f6b679f8f-jfz9h\" (UID: \"c1df6ddf-ba71-4a54-9b76-83bbada5ee87\") " pod="kube-system/coredns-6f6b679f8f-jfz9h" May 10 00:46:37.423440 kubelet[2382]: I0510 00:46:37.423410 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltn7\" (UniqueName: \"kubernetes.io/projected/c1df6ddf-ba71-4a54-9b76-83bbada5ee87-kube-api-access-rltn7\") pod \"coredns-6f6b679f8f-jfz9h\" (UID: \"c1df6ddf-ba71-4a54-9b76-83bbada5ee87\") " pod="kube-system/coredns-6f6b679f8f-jfz9h" May 10 00:46:37.423523 kubelet[2382]: I0510 00:46:37.423514 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkdm\" (UniqueName: \"kubernetes.io/projected/74cad95f-458c-487e-bfe9-f2ec230fb644-kube-api-access-djkdm\") pod \"coredns-6f6b679f8f-tkq6h\" (UID: \"74cad95f-458c-487e-bfe9-f2ec230fb644\") " pod="kube-system/coredns-6f6b679f8f-tkq6h" May 10 00:46:37.423591 kubelet[2382]: I0510 00:46:37.423582 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74cad95f-458c-487e-bfe9-f2ec230fb644-config-volume\") pod \"coredns-6f6b679f8f-tkq6h\" (UID: \"74cad95f-458c-487e-bfe9-f2ec230fb644\") " pod="kube-system/coredns-6f6b679f8f-tkq6h" May 10 00:46:37.716977 env[1415]: time="2025-05-10T00:46:37.716302050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfz9h,Uid:c1df6ddf-ba71-4a54-9b76-83bbada5ee87,Namespace:kube-system,Attempt:0,}" May 10 00:46:37.731868 env[1415]: time="2025-05-10T00:46:37.731491245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tkq6h,Uid:74cad95f-458c-487e-bfe9-f2ec230fb644,Namespace:kube-system,Attempt:0,}" May 10 00:46:38.094194 kubelet[2382]: I0510 00:46:38.094096 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-22jf8" podStartSLOduration=9.543720787 podStartE2EDuration="18.094066575s" podCreationTimestamp="2025-05-10 00:46:20 +0000 UTC" firstStartedPulling="2025-05-10 00:46:21.101971288 +0000 UTC m=+7.235597884" lastFinishedPulling="2025-05-10 00:46:29.652317076 +0000 UTC m=+15.785943672" observedRunningTime="2025-05-10 00:46:38.089960124 +0000 UTC m=+24.223586820" watchObservedRunningTime="2025-05-10 00:46:38.094066575 +0000 UTC m=+24.227693171" May 10 00:46:39.681409 systemd-networkd[1568]: cilium_host: Link UP May 10 00:46:39.685768 systemd-networkd[1568]: cilium_net: Link UP May 10 00:46:39.686185 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 10 00:46:39.686258 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 00:46:39.689876 systemd-networkd[1568]: cilium_net: Gained carrier May 10 00:46:39.690700 systemd-networkd[1568]: cilium_host: Gained carrier May 10 00:46:39.810258 systemd-networkd[1568]: cilium_vxlan: Link UP May 10 00:46:39.810267 systemd-networkd[1568]: cilium_vxlan: Gained carrier May 10 00:46:40.077288 systemd-networkd[1568]: cilium_net: Gained IPv6LL May 10 00:46:40.099197 kernel: NET: Registered PF_ALG protocol family May 10 00:46:40.677336 systemd-networkd[1568]: cilium_host: Gained IPv6LL May 10 00:46:40.915334 
systemd-networkd[1568]: lxc_health: Link UP May 10 00:46:40.937288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:46:40.937717 systemd-networkd[1568]: lxc_health: Gained carrier May 10 00:46:41.313100 systemd-networkd[1568]: lxc51a8ccee0e7f: Link UP May 10 00:46:41.327183 kernel: eth0: renamed from tmpd3e08 May 10 00:46:41.327313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc51a8ccee0e7f: link becomes ready May 10 00:46:41.329468 systemd-networkd[1568]: lxc51a8ccee0e7f: Gained carrier May 10 00:46:41.353560 systemd-networkd[1568]: lxc7a946e969906: Link UP May 10 00:46:41.361244 kernel: eth0: renamed from tmpf8323 May 10 00:46:41.373276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7a946e969906: link becomes ready May 10 00:46:41.373536 systemd-networkd[1568]: lxc7a946e969906: Gained carrier May 10 00:46:41.509324 systemd-networkd[1568]: cilium_vxlan: Gained IPv6LL May 10 00:46:42.213385 systemd-networkd[1568]: lxc_health: Gained IPv6LL May 10 00:46:43.109320 systemd-networkd[1568]: lxc51a8ccee0e7f: Gained IPv6LL May 10 00:46:43.109693 systemd-networkd[1568]: lxc7a946e969906: Gained IPv6LL May 10 00:46:45.063339 env[1415]: time="2025-05-10T00:46:45.057456443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:45.063339 env[1415]: time="2025-05-10T00:46:45.057590044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:45.063339 env[1415]: time="2025-05-10T00:46:45.057628644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:45.063339 env[1415]: time="2025-05-10T00:46:45.057841347Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f83234a6e47c0a7d84ed556ba01f0adb01ea7cb840dd42d5407d9a3b7787d954 pid=3572 runtime=io.containerd.runc.v2 May 10 00:46:45.096903 systemd[1]: run-containerd-runc-k8s.io-f83234a6e47c0a7d84ed556ba01f0adb01ea7cb840dd42d5407d9a3b7787d954-runc.LAH4qY.mount: Deactivated successfully. May 10 00:46:45.100695 env[1415]: time="2025-05-10T00:46:45.100446903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:46:45.104224 env[1415]: time="2025-05-10T00:46:45.100990209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:46:45.104224 env[1415]: time="2025-05-10T00:46:45.101077310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:46:45.104224 env[1415]: time="2025-05-10T00:46:45.101312213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3e08563c324fd324ffc01bdb8d52d2b05b8b13a91e27f92cdc9a637526de683 pid=3593 runtime=io.containerd.runc.v2 May 10 00:46:45.107004 systemd[1]: Started cri-containerd-f83234a6e47c0a7d84ed556ba01f0adb01ea7cb840dd42d5407d9a3b7787d954.scope. May 10 00:46:45.141524 systemd[1]: Started cri-containerd-d3e08563c324fd324ffc01bdb8d52d2b05b8b13a91e27f92cdc9a637526de683.scope. 
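
The interface names map onto Cilium's datapath: cilium_host and cilium_net are the two ends of a veth pair anchoring the node's internal router address, cilium_vxlan is the overlay device, lxc_health backs connectivity probes, and each lxcXXXX device is the host end of a pod's veth pair. The kernel's rename lines give the game away: tmpd3e08 and tmpf8323 are prefixes of the coredns sandbox IDs d3e08563... and f83234a6... whose shims appear moments later. An inspection sketch:

    ip -d link show cilium_vxlan    # VXLAN parameters (VNI, port)
    ip link show type veth          # cilium_host/cilium_net plus the lxc* pairs
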
May 10 00:46:45.196796 env[1415]: time="2025-05-10T00:46:45.196743036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tkq6h,Uid:74cad95f-458c-487e-bfe9-f2ec230fb644,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83234a6e47c0a7d84ed556ba01f0adb01ea7cb840dd42d5407d9a3b7787d954\"" May 10 00:46:45.200613 env[1415]: time="2025-05-10T00:46:45.200575477Z" level=info msg="CreateContainer within sandbox \"f83234a6e47c0a7d84ed556ba01f0adb01ea7cb840dd42d5407d9a3b7787d954\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:46:45.245284 env[1415]: time="2025-05-10T00:46:45.245226156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfz9h,Uid:c1df6ddf-ba71-4a54-9b76-83bbada5ee87,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e08563c324fd324ffc01bdb8d52d2b05b8b13a91e27f92cdc9a637526de683\"" May 10 00:46:45.250087 env[1415]: time="2025-05-10T00:46:45.250046107Z" level=info msg="CreateContainer within sandbox \"d3e08563c324fd324ffc01bdb8d52d2b05b8b13a91e27f92cdc9a637526de683\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:46:45.292929 env[1415]: time="2025-05-10T00:46:45.292887367Z" level=info msg="CreateContainer within sandbox \"f83234a6e47c0a7d84ed556ba01f0adb01ea7cb840dd42d5407d9a3b7787d954\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41b8760fe890f7e47f536e67c5053de9d2eaf0e881692007235ec2e5fe9f3dbb\"" May 10 00:46:45.293830 env[1415]: time="2025-05-10T00:46:45.293788076Z" level=info msg="StartContainer for \"41b8760fe890f7e47f536e67c5053de9d2eaf0e881692007235ec2e5fe9f3dbb\"" May 10 00:46:45.306935 env[1415]: time="2025-05-10T00:46:45.306883217Z" level=info msg="CreateContainer within sandbox \"d3e08563c324fd324ffc01bdb8d52d2b05b8b13a91e27f92cdc9a637526de683\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97504a6be957eb130d5e01565790224af94081cf458273c8560f42e3a5f0da7a\"" May 10 00:46:45.309683 env[1415]: time="2025-05-10T00:46:45.309645646Z" level=info msg="StartContainer for \"97504a6be957eb130d5e01565790224af94081cf458273c8560f42e3a5f0da7a\"" May 10 00:46:45.312578 systemd[1]: Started cri-containerd-41b8760fe890f7e47f536e67c5053de9d2eaf0e881692007235ec2e5fe9f3dbb.scope. May 10 00:46:45.348196 systemd[1]: Started cri-containerd-97504a6be957eb130d5e01565790224af94081cf458273c8560f42e3a5f0da7a.scope. 
May 10 00:46:45.366575 env[1415]: time="2025-05-10T00:46:45.366514456Z" level=info msg="StartContainer for \"41b8760fe890f7e47f536e67c5053de9d2eaf0e881692007235ec2e5fe9f3dbb\" returns successfully" May 10 00:46:45.396976 env[1415]: time="2025-05-10T00:46:45.396902582Z" level=info msg="StartContainer for \"97504a6be957eb130d5e01565790224af94081cf458273c8560f42e3a5f0da7a\" returns successfully" May 10 00:46:46.116986 kubelet[2382]: I0510 00:46:46.116913 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jfz9h" podStartSLOduration=26.116886474 podStartE2EDuration="26.116886474s" podCreationTimestamp="2025-05-10 00:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:46.098806885 +0000 UTC m=+32.232433481" watchObservedRunningTime="2025-05-10 00:46:46.116886474 +0000 UTC m=+32.250513170" May 10 00:46:46.117891 kubelet[2382]: I0510 00:46:46.117802 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tkq6h" podStartSLOduration=26.117786184 podStartE2EDuration="26.117786184s" podCreationTimestamp="2025-05-10 00:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:46:46.11548546 +0000 UTC m=+32.249112156" watchObservedRunningTime="2025-05-10 00:46:46.117786184 +0000 UTC m=+32.251412780" May 10 00:49:21.291485 systemd[1]: Started sshd@5-10.200.8.31:22-10.200.16.10:40390.service. May 10 00:49:21.928860 sshd[3750]: Accepted publickey for core from 10.200.16.10 port 40390 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:21.930363 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:21.935263 systemd-logind[1399]: New session 8 of user core. May 10 00:49:21.935870 systemd[1]: Started session-8.scope. May 10 00:49:22.449744 sshd[3750]: pam_unix(sshd:session): session closed for user core May 10 00:49:22.452592 systemd[1]: sshd@5-10.200.8.31:22-10.200.16.10:40390.service: Deactivated successfully. May 10 00:49:22.453540 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:49:22.454294 systemd-logind[1399]: Session 8 logged out. Waiting for processes to exit. May 10 00:49:22.455078 systemd-logind[1399]: Removed session 8. May 10 00:49:27.558063 systemd[1]: Started sshd@6-10.200.8.31:22-10.200.16.10:40398.service. May 10 00:49:28.195006 sshd[3765]: Accepted publickey for core from 10.200.16.10 port 40398 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:28.196680 sshd[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:28.202198 systemd[1]: Started session-9.scope. May 10 00:49:28.202717 systemd-logind[1399]: New session 9 of user core. May 10 00:49:28.710121 sshd[3765]: pam_unix(sshd:session): session closed for user core May 10 00:49:28.713460 systemd[1]: sshd@6-10.200.8.31:22-10.200.16.10:40398.service: Deactivated successfully. May 10 00:49:28.714587 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:49:28.715297 systemd-logind[1399]: Session 9 logged out. Waiting for processes to exit. May 10 00:49:28.716132 systemd-logind[1399]: Removed session 9. May 10 00:49:33.817769 systemd[1]: Started sshd@7-10.200.8.31:22-10.200.16.10:37504.service. 
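
From here the capture settles into a steady rhythm of SSH housekeeping: one sshd@N-<local>:22-<peer>:<port>.service per inbound connection, a PAM session open for user core, a logind session-N.scope, then orderly teardown once the client disconnects. The same state is queryable live (assuming systemd-logind defaults):

    loginctl list-sessions            # active sessions and their users
    systemctl status session-8.scope  # per-session accounting, e.g. the 'Consumed 4.735s CPU time' seen earlier for session 7
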
May 10 00:49:34.456174 sshd[3778]: Accepted publickey for core from 10.200.16.10 port 37504 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:34.457810 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:34.463203 systemd-logind[1399]: New session 10 of user core. May 10 00:49:34.463770 systemd[1]: Started session-10.scope. May 10 00:49:34.961670 sshd[3778]: pam_unix(sshd:session): session closed for user core May 10 00:49:34.964572 systemd[1]: sshd@7-10.200.8.31:22-10.200.16.10:37504.service: Deactivated successfully. May 10 00:49:34.965518 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:49:34.966262 systemd-logind[1399]: Session 10 logged out. Waiting for processes to exit. May 10 00:49:34.967027 systemd-logind[1399]: Removed session 10. May 10 00:49:40.069532 systemd[1]: Started sshd@8-10.200.8.31:22-10.200.16.10:47914.service. May 10 00:49:40.704128 sshd[3790]: Accepted publickey for core from 10.200.16.10 port 47914 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:40.705559 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:40.710671 systemd[1]: Started session-11.scope. May 10 00:49:40.711080 systemd-logind[1399]: New session 11 of user core. May 10 00:49:41.225367 sshd[3790]: pam_unix(sshd:session): session closed for user core May 10 00:49:41.228828 systemd[1]: sshd@8-10.200.8.31:22-10.200.16.10:47914.service: Deactivated successfully. May 10 00:49:41.229854 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:49:41.230718 systemd-logind[1399]: Session 11 logged out. Waiting for processes to exit. May 10 00:49:41.231607 systemd-logind[1399]: Removed session 11. May 10 00:49:46.333050 systemd[1]: Started sshd@9-10.200.8.31:22-10.200.16.10:47930.service. May 10 00:49:46.970023 sshd[3803]: Accepted publickey for core from 10.200.16.10 port 47930 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:46.971665 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:46.975670 systemd-logind[1399]: New session 12 of user core. May 10 00:49:46.977517 systemd[1]: Started session-12.scope. May 10 00:49:47.489000 sshd[3803]: pam_unix(sshd:session): session closed for user core May 10 00:49:47.491985 systemd[1]: sshd@9-10.200.8.31:22-10.200.16.10:47930.service: Deactivated successfully. May 10 00:49:47.492960 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:49:47.493699 systemd-logind[1399]: Session 12 logged out. Waiting for processes to exit. May 10 00:49:47.494571 systemd-logind[1399]: Removed session 12. May 10 00:49:52.596521 systemd[1]: Started sshd@10-10.200.8.31:22-10.200.16.10:54734.service. May 10 00:49:53.232328 sshd[3817]: Accepted publickey for core from 10.200.16.10 port 54734 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:53.233952 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:53.239805 systemd-logind[1399]: New session 13 of user core. May 10 00:49:53.240344 systemd[1]: Started session-13.scope. May 10 00:49:53.740659 sshd[3817]: pam_unix(sshd:session): session closed for user core May 10 00:49:53.743390 systemd[1]: sshd@10-10.200.8.31:22-10.200.16.10:54734.service: Deactivated successfully. May 10 00:49:53.744358 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:49:53.745084 systemd-logind[1399]: Session 13 logged out. 
Waiting for processes to exit. May 10 00:49:53.745934 systemd-logind[1399]: Removed session 13. May 10 00:49:58.849196 systemd[1]: Started sshd@11-10.200.8.31:22-10.200.16.10:54740.service. May 10 00:49:59.484722 sshd[3831]: Accepted publickey for core from 10.200.16.10 port 54740 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:49:59.486484 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:49:59.492848 systemd[1]: Started session-14.scope. May 10 00:49:59.494073 systemd-logind[1399]: New session 14 of user core. May 10 00:49:59.993452 sshd[3831]: pam_unix(sshd:session): session closed for user core May 10 00:49:59.996855 systemd[1]: sshd@11-10.200.8.31:22-10.200.16.10:54740.service: Deactivated successfully. May 10 00:49:59.997737 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:49:59.998448 systemd-logind[1399]: Session 14 logged out. Waiting for processes to exit. May 10 00:49:59.999319 systemd-logind[1399]: Removed session 14. May 10 00:50:00.100529 systemd[1]: Started sshd@12-10.200.8.31:22-10.200.16.10:33738.service. May 10 00:50:00.737352 sshd[3846]: Accepted publickey for core from 10.200.16.10 port 33738 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:00.738941 sshd[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:00.743991 systemd-logind[1399]: New session 15 of user core. May 10 00:50:00.744536 systemd[1]: Started session-15.scope. May 10 00:50:01.295428 sshd[3846]: pam_unix(sshd:session): session closed for user core May 10 00:50:01.298734 systemd[1]: sshd@12-10.200.8.31:22-10.200.16.10:33738.service: Deactivated successfully. May 10 00:50:01.300309 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:50:01.300315 systemd-logind[1399]: Session 15 logged out. Waiting for processes to exit. May 10 00:50:01.301710 systemd-logind[1399]: Removed session 15. May 10 00:50:01.404334 systemd[1]: Started sshd@13-10.200.8.31:22-10.200.16.10:33746.service. May 10 00:50:02.045343 sshd[3855]: Accepted publickey for core from 10.200.16.10 port 33746 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:02.047104 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:02.052900 systemd[1]: Started session-16.scope. May 10 00:50:02.053228 systemd-logind[1399]: New session 16 of user core. May 10 00:50:02.552225 sshd[3855]: pam_unix(sshd:session): session closed for user core May 10 00:50:02.555057 systemd[1]: sshd@13-10.200.8.31:22-10.200.16.10:33746.service: Deactivated successfully. May 10 00:50:02.556038 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:50:02.556884 systemd-logind[1399]: Session 16 logged out. Waiting for processes to exit. May 10 00:50:02.557824 systemd-logind[1399]: Removed session 16. May 10 00:50:07.664351 systemd[1]: Started sshd@14-10.200.8.31:22-10.200.16.10:33752.service. May 10 00:50:08.302394 sshd[3867]: Accepted publickey for core from 10.200.16.10 port 33752 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:08.303943 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:08.309092 systemd[1]: Started session-17.scope. May 10 00:50:08.309746 systemd-logind[1399]: New session 17 of user core. 
May 10 00:50:08.815924 sshd[3867]: pam_unix(sshd:session): session closed for user core May 10 00:50:08.819225 systemd[1]: sshd@14-10.200.8.31:22-10.200.16.10:33752.service: Deactivated successfully. May 10 00:50:08.820320 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:50:08.821131 systemd-logind[1399]: Session 17 logged out. Waiting for processes to exit. May 10 00:50:08.822145 systemd-logind[1399]: Removed session 17. May 10 00:50:13.924435 systemd[1]: Started sshd@15-10.200.8.31:22-10.200.16.10:35160.service. May 10 00:50:14.562695 sshd[3882]: Accepted publickey for core from 10.200.16.10 port 35160 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:14.564412 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:14.569980 systemd[1]: Started session-18.scope. May 10 00:50:14.570450 systemd-logind[1399]: New session 18 of user core. May 10 00:50:15.070229 sshd[3882]: pam_unix(sshd:session): session closed for user core May 10 00:50:15.073126 systemd[1]: sshd@15-10.200.8.31:22-10.200.16.10:35160.service: Deactivated successfully. May 10 00:50:15.074069 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:50:15.074741 systemd-logind[1399]: Session 18 logged out. Waiting for processes to exit. May 10 00:50:15.075628 systemd-logind[1399]: Removed session 18. May 10 00:50:15.176856 systemd[1]: Started sshd@16-10.200.8.31:22-10.200.16.10:35172.service. May 10 00:50:15.812098 sshd[3896]: Accepted publickey for core from 10.200.16.10 port 35172 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:15.813766 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:15.819103 systemd[1]: Started session-19.scope. May 10 00:50:15.819451 systemd-logind[1399]: New session 19 of user core. May 10 00:50:16.390196 sshd[3896]: pam_unix(sshd:session): session closed for user core May 10 00:50:16.393527 systemd[1]: sshd@16-10.200.8.31:22-10.200.16.10:35172.service: Deactivated successfully. May 10 00:50:16.394687 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:50:16.395639 systemd-logind[1399]: Session 19 logged out. Waiting for processes to exit. May 10 00:50:16.396682 systemd-logind[1399]: Removed session 19. May 10 00:50:16.497076 systemd[1]: Started sshd@17-10.200.8.31:22-10.200.16.10:35184.service. May 10 00:50:17.134425 sshd[3906]: Accepted publickey for core from 10.200.16.10 port 35184 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:17.136335 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:17.141299 systemd[1]: Started session-20.scope. May 10 00:50:17.141916 systemd-logind[1399]: New session 20 of user core. May 10 00:50:19.128309 sshd[3906]: pam_unix(sshd:session): session closed for user core May 10 00:50:19.132125 systemd[1]: sshd@17-10.200.8.31:22-10.200.16.10:35184.service: Deactivated successfully. May 10 00:50:19.133221 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:50:19.134132 systemd-logind[1399]: Session 20 logged out. Waiting for processes to exit. May 10 00:50:19.135034 systemd-logind[1399]: Removed session 20. May 10 00:50:19.234572 systemd[1]: Started sshd@18-10.200.8.31:22-10.200.16.10:44096.service. 
May 10 00:50:19.871258 sshd[3923]: Accepted publickey for core from 10.200.16.10 port 44096 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:19.872645 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:19.877919 systemd[1]: Started session-21.scope. May 10 00:50:19.878769 systemd-logind[1399]: New session 21 of user core. May 10 00:50:20.481074 sshd[3923]: pam_unix(sshd:session): session closed for user core May 10 00:50:20.484329 systemd[1]: sshd@18-10.200.8.31:22-10.200.16.10:44096.service: Deactivated successfully. May 10 00:50:20.485213 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:50:20.485883 systemd-logind[1399]: Session 21 logged out. Waiting for processes to exit. May 10 00:50:20.486699 systemd-logind[1399]: Removed session 21. May 10 00:50:20.588121 systemd[1]: Started sshd@19-10.200.8.31:22-10.200.16.10:44108.service. May 10 00:50:21.226637 sshd[3932]: Accepted publickey for core from 10.200.16.10 port 44108 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:21.228092 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:21.233022 systemd[1]: Started session-22.scope. May 10 00:50:21.233530 systemd-logind[1399]: New session 22 of user core. May 10 00:50:21.750789 sshd[3932]: pam_unix(sshd:session): session closed for user core May 10 00:50:21.754140 systemd[1]: sshd@19-10.200.8.31:22-10.200.16.10:44108.service: Deactivated successfully. May 10 00:50:21.755071 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:50:21.755828 systemd-logind[1399]: Session 22 logged out. Waiting for processes to exit. May 10 00:50:21.756712 systemd-logind[1399]: Removed session 22. May 10 00:50:26.857999 systemd[1]: Started sshd@20-10.200.8.31:22-10.200.16.10:44122.service. May 10 00:50:27.495693 sshd[3949]: Accepted publickey for core from 10.200.16.10 port 44122 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:27.497375 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:27.502228 systemd-logind[1399]: New session 23 of user core. May 10 00:50:27.502726 systemd[1]: Started session-23.scope. May 10 00:50:28.001603 sshd[3949]: pam_unix(sshd:session): session closed for user core May 10 00:50:28.004401 systemd[1]: sshd@20-10.200.8.31:22-10.200.16.10:44122.service: Deactivated successfully. May 10 00:50:28.005325 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:50:28.006040 systemd-logind[1399]: Session 23 logged out. Waiting for processes to exit. May 10 00:50:28.006868 systemd-logind[1399]: Removed session 23. May 10 00:50:33.110123 systemd[1]: Started sshd@21-10.200.8.31:22-10.200.16.10:56924.service. May 10 00:50:33.746744 sshd[3961]: Accepted publickey for core from 10.200.16.10 port 56924 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:33.748146 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:33.752877 systemd-logind[1399]: New session 24 of user core. May 10 00:50:33.753419 systemd[1]: Started session-24.scope. May 10 00:50:34.251823 sshd[3961]: pam_unix(sshd:session): session closed for user core May 10 00:50:34.255348 systemd[1]: sshd@21-10.200.8.31:22-10.200.16.10:56924.service: Deactivated successfully. May 10 00:50:34.256474 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:50:34.257352 systemd-logind[1399]: Session 24 logged out. 
Waiting for processes to exit. May 10 00:50:34.258338 systemd-logind[1399]: Removed session 24. May 10 00:50:39.361237 systemd[1]: Started sshd@22-10.200.8.31:22-10.200.16.10:39924.service. May 10 00:50:39.998573 sshd[3973]: Accepted publickey for core from 10.200.16.10 port 39924 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:39.999971 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:40.004774 systemd[1]: Started session-25.scope. May 10 00:50:40.005397 systemd-logind[1399]: New session 25 of user core. May 10 00:50:40.520959 sshd[3973]: pam_unix(sshd:session): session closed for user core May 10 00:50:40.524265 systemd[1]: sshd@22-10.200.8.31:22-10.200.16.10:39924.service: Deactivated successfully. May 10 00:50:40.525410 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:50:40.526421 systemd-logind[1399]: Session 25 logged out. Waiting for processes to exit. May 10 00:50:40.528041 systemd-logind[1399]: Removed session 25. May 10 00:50:40.627235 systemd[1]: Started sshd@23-10.200.8.31:22-10.200.16.10:39938.service. May 10 00:50:41.265294 sshd[3985]: Accepted publickey for core from 10.200.16.10 port 39938 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:41.266775 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:41.271724 systemd[1]: Started session-26.scope. May 10 00:50:41.272372 systemd-logind[1399]: New session 26 of user core. May 10 00:50:42.933696 env[1415]: time="2025-05-10T00:50:42.933594003Z" level=info msg="StopContainer for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" with timeout 30 (s)" May 10 00:50:42.934202 env[1415]: time="2025-05-10T00:50:42.934097607Z" level=info msg="Stop container \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" with signal terminated" May 10 00:50:42.949686 systemd[1]: cri-containerd-06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588.scope: Deactivated successfully. May 10 00:50:42.958540 env[1415]: time="2025-05-10T00:50:42.958480470Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:50:42.965561 env[1415]: time="2025-05-10T00:50:42.965522017Z" level=info msg="StopContainer for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" with timeout 2 (s)" May 10 00:50:42.965859 env[1415]: time="2025-05-10T00:50:42.965829020Z" level=info msg="Stop container \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" with signal terminated" May 10 00:50:42.973078 systemd-networkd[1568]: lxc_health: Link DOWN May 10 00:50:42.973123 systemd-networkd[1568]: lxc_health: Lost carrier May 10 00:50:42.979352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588-rootfs.mount: Deactivated successfully. May 10 00:50:43.000543 systemd[1]: cri-containerd-631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3.scope: Deactivated successfully. May 10 00:50:43.000882 systemd[1]: cri-containerd-631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3.scope: Consumed 7.412s CPU time. 
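Note: the "StopContainer ... with timeout 30 (s)" and "with signal terminated" entries above describe a graceful stop: the runtime delivers SIGTERM, waits up to the timeout, then SIGKILLs, after which systemd deactivates the cri-containerd-<id>.scope and reports its accumulated CPU time (7.412s here). A sketch of the same call, assuming the cri-api client wiring from the earlier example:

    package sketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // stopWithGrace mirrors the stop request logged above: SIGTERM,
    // a 30-second grace period, then SIGKILL if the process lingers.
    func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: id, // e.g. "06f8848eb96a8d8433824edff5c87f12..." from the log
            Timeout:     30, // seconds before the runtime escalates to SIGKILL
        })
        return err
    }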
May 10 00:50:43.024591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3-rootfs.mount: Deactivated successfully. May 10 00:50:43.050231 env[1415]: time="2025-05-10T00:50:43.050133284Z" level=info msg="shim disconnected" id=06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588 May 10 00:50:43.050231 env[1415]: time="2025-05-10T00:50:43.050208984Z" level=warning msg="cleaning up after shim disconnected" id=06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588 namespace=k8s.io May 10 00:50:43.050231 env[1415]: time="2025-05-10T00:50:43.050221884Z" level=info msg="cleaning up dead shim" May 10 00:50:43.050676 env[1415]: time="2025-05-10T00:50:43.050133184Z" level=info msg="shim disconnected" id=631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3 May 10 00:50:43.050761 env[1415]: time="2025-05-10T00:50:43.050680187Z" level=warning msg="cleaning up after shim disconnected" id=631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3 namespace=k8s.io May 10 00:50:43.050761 env[1415]: time="2025-05-10T00:50:43.050692788Z" level=info msg="cleaning up dead shim" May 10 00:50:43.066065 env[1415]: time="2025-05-10T00:50:43.066005590Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:50:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 10 00:50:43.067274 env[1415]: time="2025-05-10T00:50:43.067232398Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4056 runtime=io.containerd.runc.v2\n" May 10 00:50:43.073012 env[1415]: time="2025-05-10T00:50:43.072964736Z" level=info msg="StopContainer for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" returns successfully" May 10 00:50:43.073731 env[1415]: time="2025-05-10T00:50:43.073695041Z" level=info msg="StopPodSandbox for \"87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d\"" May 10 00:50:43.073853 env[1415]: time="2025-05-10T00:50:43.073766142Z" level=info msg="Container to stop \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:50:43.077519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d-shm.mount: Deactivated successfully. 
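Note: containerd writes logfmt-style entries (time=..., level=..., msg=...), and the "cleanup warnings" line above even nests further entries inside a quoted msg using backslash escapes. A small, self-contained sketch for extracting level and msg from such a line (a convenience for reading this log, not part of any tool mentioned in it):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Captures level=<word> and the quoted msg="..." field; the msg
    // pattern permits backslash-escaped characters, which is how the
    // nested warning text above is encoded.
    var entryRe = regexp.MustCompile(`level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

    func main() {
        // Abridged from the cleanup-warnings entry above.
        line := `time="2025-05-10T00:50:43Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:43Z\" level=info msg=\"starting signal loop\""`
        if m := entryRe.FindStringSubmatch(line); m != nil {
            fmt.Println("level:", m[1])
            fmt.Println("msg:", m[2])
        }
    }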
May 10 00:50:43.078925 env[1415]: time="2025-05-10T00:50:43.078858576Z" level=info msg="StopContainer for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" returns successfully" May 10 00:50:43.079555 env[1415]: time="2025-05-10T00:50:43.079517280Z" level=info msg="StopPodSandbox for \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\"" May 10 00:50:43.079656 env[1415]: time="2025-05-10T00:50:43.079588081Z" level=info msg="Container to stop \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:50:43.079656 env[1415]: time="2025-05-10T00:50:43.079609281Z" level=info msg="Container to stop \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:50:43.079656 env[1415]: time="2025-05-10T00:50:43.079625081Z" level=info msg="Container to stop \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:50:43.079656 env[1415]: time="2025-05-10T00:50:43.079641681Z" level=info msg="Container to stop \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:50:43.079851 env[1415]: time="2025-05-10T00:50:43.079656781Z" level=info msg="Container to stop \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:50:43.082323 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514-shm.mount: Deactivated successfully. May 10 00:50:43.099873 systemd[1]: cri-containerd-87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d.scope: Deactivated successfully. May 10 00:50:43.106526 systemd[1]: cri-containerd-34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514.scope: Deactivated successfully. 
May 10 00:50:43.157295 env[1415]: time="2025-05-10T00:50:43.157232100Z" level=info msg="shim disconnected" id=87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d May 10 00:50:43.157613 env[1415]: time="2025-05-10T00:50:43.157580502Z" level=warning msg="cleaning up after shim disconnected" id=87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d namespace=k8s.io May 10 00:50:43.157717 env[1415]: time="2025-05-10T00:50:43.157702703Z" level=info msg="cleaning up dead shim" May 10 00:50:43.158153 env[1415]: time="2025-05-10T00:50:43.157284900Z" level=info msg="shim disconnected" id=34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514 May 10 00:50:43.158267 env[1415]: time="2025-05-10T00:50:43.158168806Z" level=warning msg="cleaning up after shim disconnected" id=34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514 namespace=k8s.io May 10 00:50:43.158267 env[1415]: time="2025-05-10T00:50:43.158182506Z" level=info msg="cleaning up dead shim" May 10 00:50:43.169052 env[1415]: time="2025-05-10T00:50:43.169017178Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4119 runtime=io.containerd.runc.v2\n" May 10 00:50:43.169203 env[1415]: time="2025-05-10T00:50:43.169019078Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4120 runtime=io.containerd.runc.v2\n" May 10 00:50:43.169433 env[1415]: time="2025-05-10T00:50:43.169398281Z" level=info msg="TearDown network for sandbox \"87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d\" successfully" May 10 00:50:43.169511 env[1415]: time="2025-05-10T00:50:43.169434681Z" level=info msg="StopPodSandbox for \"87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d\" returns successfully" May 10 00:50:43.169949 env[1415]: time="2025-05-10T00:50:43.169797184Z" level=info msg="TearDown network for sandbox \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" successfully" May 10 00:50:43.169949 env[1415]: time="2025-05-10T00:50:43.169844884Z" level=info msg="StopPodSandbox for \"34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514\" returns successfully" May 10 00:50:43.286482 kubelet[2382]: I0510 00:50:43.286432 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6hlb\" (UniqueName: \"kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-kube-api-access-n6hlb\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287068 kubelet[2382]: I0510 00:50:43.286492 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-xtables-lock\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287068 kubelet[2382]: I0510 00:50:43.286526 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cddk2\" (UniqueName: \"kubernetes.io/projected/a823b265-11be-4831-8e9d-241a2f61afed-kube-api-access-cddk2\") pod \"a823b265-11be-4831-8e9d-241a2f61afed\" (UID: \"a823b265-11be-4831-8e9d-241a2f61afed\") " May 10 00:50:43.287068 kubelet[2382]: I0510 00:50:43.286550 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-bpf-maps\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287068 kubelet[2382]: I0510 00:50:43.286572 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-etc-cni-netd\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287068 kubelet[2382]: I0510 00:50:43.286593 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-run\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287068 kubelet[2382]: I0510 00:50:43.286617 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-hostproc\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287531 kubelet[2382]: I0510 00:50:43.286643 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-cgroup\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287531 kubelet[2382]: I0510 00:50:43.286728 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-hubble-tls\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287531 kubelet[2382]: I0510 00:50:43.286771 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d240e79c-1cc3-4700-812e-a7704ff947ac-clustermesh-secrets\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287531 kubelet[2382]: I0510 00:50:43.286803 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a823b265-11be-4831-8e9d-241a2f61afed-cilium-config-path\") pod \"a823b265-11be-4831-8e9d-241a2f61afed\" (UID: \"a823b265-11be-4831-8e9d-241a2f61afed\") " May 10 00:50:43.287531 kubelet[2382]: I0510 00:50:43.286832 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-net\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287531 kubelet[2382]: I0510 00:50:43.286870 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-config-path\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287860 kubelet[2382]: I0510 00:50:43.286899 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-kernel\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287860 kubelet[2382]: I0510 00:50:43.286922 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-lib-modules\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287860 kubelet[2382]: I0510 00:50:43.286945 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cni-path\") pod \"d240e79c-1cc3-4700-812e-a7704ff947ac\" (UID: \"d240e79c-1cc3-4700-812e-a7704ff947ac\") " May 10 00:50:43.287860 kubelet[2382]: I0510 00:50:43.287058 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.288616 kubelet[2382]: I0510 00:50:43.288133 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.288827 kubelet[2382]: I0510 00:50:43.288591 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.291646 kubelet[2382]: I0510 00:50:43.291618 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.291818 kubelet[2382]: I0510 00:50:43.291799 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.291945 kubelet[2382]: I0510 00:50:43.291928 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.292061 kubelet[2382]: I0510 00:50:43.292045 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.292464 kubelet[2382]: I0510 00:50:43.292436 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-kube-api-access-n6hlb" (OuterVolumeSpecName: "kube-api-access-n6hlb") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "kube-api-access-n6hlb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:50:43.295947 kubelet[2382]: I0510 00:50:43.295918 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a823b265-11be-4831-8e9d-241a2f61afed-kube-api-access-cddk2" (OuterVolumeSpecName: "kube-api-access-cddk2") pod "a823b265-11be-4831-8e9d-241a2f61afed" (UID: "a823b265-11be-4831-8e9d-241a2f61afed"). InnerVolumeSpecName "kube-api-access-cddk2". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:50:43.296050 kubelet[2382]: I0510 00:50:43.295964 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.296268 kubelet[2382]: I0510 00:50:43.296138 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.296388 kubelet[2382]: I0510 00:50:43.296371 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:43.296542 kubelet[2382]: I0510 00:50:43.296519 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:50:43.297174 kubelet[2382]: I0510 00:50:43.297140 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:50:43.297648 kubelet[2382]: I0510 00:50:43.297618 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a823b265-11be-4831-8e9d-241a2f61afed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a823b265-11be-4831-8e9d-241a2f61afed" (UID: "a823b265-11be-4831-8e9d-241a2f61afed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:50:43.299323 kubelet[2382]: I0510 00:50:43.299284 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d240e79c-1cc3-4700-812e-a7704ff947ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d240e79c-1cc3-4700-812e-a7704ff947ac" (UID: "d240e79c-1cc3-4700-812e-a7704ff947ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:50:43.387701 kubelet[2382]: I0510 00:50:43.387636 2382 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-bpf-maps\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.387701 kubelet[2382]: I0510 00:50:43.387680 2382 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-etc-cni-netd\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.387701 kubelet[2382]: I0510 00:50:43.387698 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-run\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.387701 kubelet[2382]: I0510 00:50:43.387710 2382 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-hostproc\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387727 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-cgroup\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387740 2382 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-hubble-tls\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387752 2382 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d240e79c-1cc3-4700-812e-a7704ff947ac-clustermesh-secrets\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387765 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a823b265-11be-4831-8e9d-241a2f61afed-cilium-config-path\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387777 2382 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-net\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 
00:50:43.387791 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d240e79c-1cc3-4700-812e-a7704ff947ac-cilium-config-path\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387805 2382 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388050 kubelet[2382]: I0510 00:50:43.387820 2382 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-lib-modules\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388342 kubelet[2382]: I0510 00:50:43.387832 2382 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-cni-path\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388342 kubelet[2382]: I0510 00:50:43.387846 2382 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n6hlb\" (UniqueName: \"kubernetes.io/projected/d240e79c-1cc3-4700-812e-a7704ff947ac-kube-api-access-n6hlb\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388342 kubelet[2382]: I0510 00:50:43.387858 2382 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d240e79c-1cc3-4700-812e-a7704ff947ac-xtables-lock\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.388342 kubelet[2382]: I0510 00:50:43.387872 2382 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cddk2\" (UniqueName: \"kubernetes.io/projected/a823b265-11be-4831-8e9d-241a2f61afed-kube-api-access-cddk2\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\"" May 10 00:50:43.567177 kubelet[2382]: I0510 00:50:43.565124 2382 scope.go:117] "RemoveContainer" containerID="06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588" May 10 00:50:43.571000 systemd[1]: Removed slice kubepods-besteffort-poda823b265_11be_4831_8e9d_241a2f61afed.slice. May 10 00:50:43.573062 env[1415]: time="2025-05-10T00:50:43.572749677Z" level=info msg="RemoveContainer for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\"" May 10 00:50:43.580699 systemd[1]: Removed slice kubepods-burstable-podd240e79c_1cc3_4700_812e_a7704ff947ac.slice. May 10 00:50:43.580822 systemd[1]: kubepods-burstable-podd240e79c_1cc3_4700_812e_a7704ff947ac.slice: Consumed 7.524s CPU time. 
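Note: kubelet is running with the systemd cgroup driver here, so each pod owns a slice named from its QoS class and UID; dashes in the UID become underscores because "-" is systemd's slice-hierarchy separator. A toy reconstruction of the two slice names removed above (the naming convention is inferred from the log itself):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds kubelet's systemd slice name for a pod:
    // kubepods-<qos>-pod<uid>.slice, with "-" in the UID mapped to "_".
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // Matches "kubepods-burstable-podd240e79c_1cc3_...slice" above.
        fmt.Println(podSliceName("burstable", "d240e79c-1cc3-4700-812e-a7704ff947ac"))
        // Matches the besteffort slice removed for the operator pod.
        fmt.Println(podSliceName("besteffort", "a823b265-11be-4831-8e9d-241a2f61afed"))
    }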
May 10 00:50:43.584569 env[1415]: time="2025-05-10T00:50:43.584532556Z" level=info msg="RemoveContainer for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" returns successfully" May 10 00:50:43.584774 kubelet[2382]: I0510 00:50:43.584740 2382 scope.go:117] "RemoveContainer" containerID="06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588" May 10 00:50:43.585064 env[1415]: time="2025-05-10T00:50:43.584981659Z" level=error msg="ContainerStatus for \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\": not found" May 10 00:50:43.585275 kubelet[2382]: E0510 00:50:43.585241 2382 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\": not found" containerID="06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588" May 10 00:50:43.585372 kubelet[2382]: I0510 00:50:43.585285 2382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588"} err="failed to get container status \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\": rpc error: code = NotFound desc = an error occurred when try to find container \"06f8848eb96a8d8433824edff5c87f1224ec960a34f482f6632f2d0157b52588\": not found" May 10 00:50:43.585422 kubelet[2382]: I0510 00:50:43.585380 2382 scope.go:117] "RemoveContainer" containerID="631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3" May 10 00:50:43.586294 env[1415]: time="2025-05-10T00:50:43.586263367Z" level=info msg="RemoveContainer for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\"" May 10 00:50:43.613099 env[1415]: time="2025-05-10T00:50:43.613056146Z" level=info msg="RemoveContainer for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" returns successfully" May 10 00:50:43.613482 kubelet[2382]: I0510 00:50:43.613457 2382 scope.go:117] "RemoveContainer" containerID="9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd" May 10 00:50:43.614566 env[1415]: time="2025-05-10T00:50:43.614534756Z" level=info msg="RemoveContainer for \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\"" May 10 00:50:43.626561 env[1415]: time="2025-05-10T00:50:43.626517936Z" level=info msg="RemoveContainer for \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\" returns successfully" May 10 00:50:43.626751 kubelet[2382]: I0510 00:50:43.626731 2382 scope.go:117] "RemoveContainer" containerID="82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5" May 10 00:50:43.627771 env[1415]: time="2025-05-10T00:50:43.627738244Z" level=info msg="RemoveContainer for \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\"" May 10 00:50:43.643678 env[1415]: time="2025-05-10T00:50:43.643626750Z" level=info msg="RemoveContainer for \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\" returns successfully" May 10 00:50:43.643934 kubelet[2382]: I0510 00:50:43.643911 2382 scope.go:117] "RemoveContainer" containerID="9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434" May 10 00:50:43.645112 env[1415]: time="2025-05-10T00:50:43.645078760Z" level=info msg="RemoveContainer for 
\"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\"" May 10 00:50:43.658181 env[1415]: time="2025-05-10T00:50:43.656526037Z" level=info msg="RemoveContainer for \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\" returns successfully" May 10 00:50:43.658309 kubelet[2382]: I0510 00:50:43.657265 2382 scope.go:117] "RemoveContainer" containerID="31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa" May 10 00:50:43.658882 env[1415]: time="2025-05-10T00:50:43.658846752Z" level=info msg="RemoveContainer for \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\"" May 10 00:50:43.671959 env[1415]: time="2025-05-10T00:50:43.671922540Z" level=info msg="RemoveContainer for \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\" returns successfully" May 10 00:50:43.672109 kubelet[2382]: I0510 00:50:43.672087 2382 scope.go:117] "RemoveContainer" containerID="631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3" May 10 00:50:43.672390 env[1415]: time="2025-05-10T00:50:43.672328342Z" level=error msg="ContainerStatus for \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\": not found" May 10 00:50:43.672531 kubelet[2382]: E0510 00:50:43.672503 2382 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\": not found" containerID="631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3" May 10 00:50:43.672629 kubelet[2382]: I0510 00:50:43.672539 2382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3"} err="failed to get container status \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"631871113890985bd57d40fb05e8c993303f2e37d5518f25175cc6d6812241c3\": not found" May 10 00:50:43.672629 kubelet[2382]: I0510 00:50:43.672565 2382 scope.go:117] "RemoveContainer" containerID="9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd" May 10 00:50:43.672830 env[1415]: time="2025-05-10T00:50:43.672767745Z" level=error msg="ContainerStatus for \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\": not found" May 10 00:50:43.672948 kubelet[2382]: E0510 00:50:43.672925 2382 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\": not found" containerID="9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd" May 10 00:50:43.673032 kubelet[2382]: I0510 00:50:43.672954 2382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd"} err="failed to get container status \"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9e83b9d201092c2be27dc9c4b6905f71b647997f3e89c1a3db80ea107f08fccd\": not found" May 10 00:50:43.673032 kubelet[2382]: I0510 00:50:43.672977 2382 scope.go:117] "RemoveContainer" containerID="82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5" May 10 00:50:43.673215 env[1415]: time="2025-05-10T00:50:43.673152148Z" level=error msg="ContainerStatus for \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\": not found" May 10 00:50:43.673374 kubelet[2382]: E0510 00:50:43.673307 2382 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\": not found" containerID="82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5" May 10 00:50:43.673374 kubelet[2382]: I0510 00:50:43.673335 2382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5"} err="failed to get container status \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"82e29558948ef61d8bc36fb999e6c7938740e5765cd26a12e82f96d744f6f0a5\": not found" May 10 00:50:43.673374 kubelet[2382]: I0510 00:50:43.673356 2382 scope.go:117] "RemoveContainer" containerID="9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434" May 10 00:50:43.673599 env[1415]: time="2025-05-10T00:50:43.673542250Z" level=error msg="ContainerStatus for \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\": not found" May 10 00:50:43.673711 kubelet[2382]: E0510 00:50:43.673689 2382 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\": not found" containerID="9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434" May 10 00:50:43.673780 kubelet[2382]: I0510 00:50:43.673715 2382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434"} err="failed to get container status \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f897c343e4a78b46841f73c4678f317e5cad9c65493f5bc490f518a16092434\": not found" May 10 00:50:43.673780 kubelet[2382]: I0510 00:50:43.673738 2382 scope.go:117] "RemoveContainer" containerID="31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa" May 10 00:50:43.673954 env[1415]: time="2025-05-10T00:50:43.673909553Z" level=error msg="ContainerStatus for \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\": not found" May 10 00:50:43.674061 kubelet[2382]: E0510 00:50:43.674037 2382 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\": not found" containerID="31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa" May 10 00:50:43.674134 kubelet[2382]: I0510 00:50:43.674064 2382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa"} err="failed to get container status \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\": rpc error: code = NotFound desc = an error occurred when try to find container \"31e0bee5efc68b0b73f0b7026042fef968ec00769c7db8f2f316272b9e04ebfa\": not found" May 10 00:50:43.924068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f860b8a799414aa87c27c1f674279f89cce0db080196c461b93ae8f236624d-rootfs.mount: Deactivated successfully. May 10 00:50:43.924196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34f5f14ff6d27cfb3653775beecd56435419fcff82c83dcd0d31240fdc786514-rootfs.mount: Deactivated successfully. May 10 00:50:43.924270 systemd[1]: var-lib-kubelet-pods-a823b265\x2d11be\x2d4831\x2d8e9d\x2d241a2f61afed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcddk2.mount: Deactivated successfully. May 10 00:50:43.924352 systemd[1]: var-lib-kubelet-pods-d240e79c\x2d1cc3\x2d4700\x2d812e\x2da7704ff947ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:50:43.924428 systemd[1]: var-lib-kubelet-pods-d240e79c\x2d1cc3\x2d4700\x2d812e\x2da7704ff947ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn6hlb.mount: Deactivated successfully. May 10 00:50:43.924509 systemd[1]: var-lib-kubelet-pods-d240e79c\x2d1cc3\x2d4700\x2d812e\x2da7704ff947ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:50:43.962369 kubelet[2382]: I0510 00:50:43.962326 2382 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a823b265-11be-4831-8e9d-241a2f61afed" path="/var/lib/kubelet/pods/a823b265-11be-4831-8e9d-241a2f61afed/volumes" May 10 00:50:43.962922 kubelet[2382]: I0510 00:50:43.962895 2382 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" path="/var/lib/kubelet/pods/d240e79c-1cc3-4700-812e-a7704ff947ac/volumes" May 10 00:50:44.438610 kubelet[2382]: E0510 00:50:44.438563 2382 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:50:44.955449 sshd[3985]: pam_unix(sshd:session): session closed for user core May 10 00:50:44.959331 systemd[1]: sshd@23-10.200.8.31:22-10.200.16.10:39938.service: Deactivated successfully. May 10 00:50:44.960260 systemd[1]: session-26.scope: Deactivated successfully. May 10 00:50:44.960934 systemd-logind[1399]: Session 26 logged out. Waiting for processes to exit. May 10 00:50:44.961987 systemd-logind[1399]: Removed session 26. May 10 00:50:45.068046 systemd[1]: Started sshd@24-10.200.8.31:22-10.200.16.10:39946.service. May 10 00:50:45.704310 sshd[4152]: Accepted publickey for core from 10.200.16.10 port 39946 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g May 10 00:50:45.705724 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:50:45.710481 systemd-logind[1399]: New session 27 of user core. 
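Note: the var-lib-kubelet-pods-...\x2d...-volumes-....mount unit names above come from systemd's path escaping (what systemd-escape --path produces): "/" maps to "-", and any other byte outside [A-Za-z0-9:_.] — including "-" itself and the "~" in kubernetes.io~projected — is written as \xXX. A simplified sketch of that rule (it ignores corner cases such as a leading dot):

    package main

    import "fmt"

    // systemdEscapePath approximates systemd-escape --path for the
    // mount-unit names seen above.
    func systemdEscapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:] // systemd drops the leading slash for path units
        }
        out := ""
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out += "-"
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out += string(c)
            default:
                out += fmt.Sprintf(`\x%02x`, c) // e.g. "-" -> \x2d, "~" -> \x7e
            }
        }
        return out
    }

    func main() {
        // Reproduces the kube-api-access-cddk2 mount unit name from the log.
        fmt.Println(systemdEscapePath(
            "/var/lib/kubelet/pods/a823b265-11be-4831-8e9d-241a2f61afed/volumes/kubernetes.io~projected/kube-api-access-cddk2") + ".mount")
    }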
May 10 00:50:45.711067 systemd[1]: Started session-27.scope. May 10 00:50:46.673948 kubelet[2382]: E0510 00:50:46.673902 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" containerName="apply-sysctl-overwrites" May 10 00:50:46.674520 kubelet[2382]: E0510 00:50:46.674501 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" containerName="mount-cgroup" May 10 00:50:46.674640 kubelet[2382]: E0510 00:50:46.674626 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" containerName="mount-bpf-fs" May 10 00:50:46.674731 kubelet[2382]: E0510 00:50:46.674719 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a823b265-11be-4831-8e9d-241a2f61afed" containerName="cilium-operator" May 10 00:50:46.674838 kubelet[2382]: E0510 00:50:46.674825 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" containerName="clean-cilium-state" May 10 00:50:46.674920 kubelet[2382]: E0510 00:50:46.674909 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" containerName="cilium-agent" May 10 00:50:46.675031 kubelet[2382]: I0510 00:50:46.675016 2382 memory_manager.go:354] "RemoveStaleState removing state" podUID="d240e79c-1cc3-4700-812e-a7704ff947ac" containerName="cilium-agent" May 10 00:50:46.675117 kubelet[2382]: I0510 00:50:46.675105 2382 memory_manager.go:354] "RemoveStaleState removing state" podUID="a823b265-11be-4831-8e9d-241a2f61afed" containerName="cilium-operator" May 10 00:50:46.681828 systemd[1]: Created slice kubepods-burstable-pod0d04ff52_7666_4e05_a3dc_b8b6a88692ed.slice. May 10 00:50:46.750155 sshd[4152]: pam_unix(sshd:session): session closed for user core May 10 00:50:46.752955 systemd[1]: sshd@24-10.200.8.31:22-10.200.16.10:39946.service: Deactivated successfully. May 10 00:50:46.753900 systemd[1]: session-27.scope: Deactivated successfully. May 10 00:50:46.754899 systemd-logind[1399]: Session 27 logged out. Waiting for processes to exit. May 10 00:50:46.755697 systemd-logind[1399]: Removed session 27. 
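Note: the "ContainerStatus from runtime service failed ... code = NotFound" errors during the cleanup a few entries back are expected: once RemoveContainer has succeeded, a follow-up status probe for the same ID can only return NotFound, and kubelet logs it and moves on rather than treating it as a failure. A sketch of that classification, assuming the standard grpc-go status helpers:

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyRemoved reports whether a CRI error is the gRPC NotFound
    // that follows a successful delete; cleanup paths treat it as done.
    func alreadyRemoved(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound,
            `an error occurred when try to find container "631871...": not found`)
        fmt.Println(alreadyRemoved(err))             // true
        fmt.Println(alreadyRemoved(errors.New("x"))) // false (codes.Unknown)
    }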
May 10 00:50:46.807109 kubelet[2382]: I0510 00:50:46.807059 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-run\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807344 kubelet[2382]: I0510 00:50:46.807246 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-bpf-maps\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807344 kubelet[2382]: I0510 00:50:46.807286 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hostproc\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807482 kubelet[2382]: I0510 00:50:46.807353 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-config-path\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807482 kubelet[2382]: I0510 00:50:46.807420 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hubble-tls\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807611 kubelet[2382]: I0510 00:50:46.807491 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-etc-cni-netd\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807611 kubelet[2382]: I0510 00:50:46.807522 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-xtables-lock\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807611 kubelet[2382]: I0510 00:50:46.807589 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-ipsec-secrets\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807784 kubelet[2382]: I0510 00:50:46.807651 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-kernel\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807784 kubelet[2382]: I0510 00:50:46.807685 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-net\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807784 kubelet[2382]: I0510 00:50:46.807758 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-lib-modules\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807954 kubelet[2382]: I0510 00:50:46.807809 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cni-path\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807954 kubelet[2382]: I0510 00:50:46.807838 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-clustermesh-secrets\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.807954 kubelet[2382]: I0510 00:50:46.807899 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dssbx\" (UniqueName: \"kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-kube-api-access-dssbx\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.808093 kubelet[2382]: I0510 00:50:46.807928 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-cgroup\") pod \"cilium-wcl7r\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") " pod="kube-system/cilium-wcl7r"
May 10 00:50:46.859804 systemd[1]: Started sshd@25-10.200.8.31:22-10.200.16.10:39956.service.
May 10 00:50:46.988166 env[1415]: time="2025-05-10T00:50:46.988112474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wcl7r,Uid:0d04ff52-7666-4e05-a3dc-b8b6a88692ed,Namespace:kube-system,Attempt:0,}"
May 10 00:50:47.038339 env[1415]: time="2025-05-10T00:50:47.038246305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:50:47.038339 env[1415]: time="2025-05-10T00:50:47.038299605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:50:47.038339 env[1415]: time="2025-05-10T00:50:47.038313806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:50:47.043602 env[1415]: time="2025-05-10T00:50:47.038555407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2 pid=4175 runtime=io.containerd.runc.v2
May 10 00:50:47.055283 systemd[1]: Started cri-containerd-14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2.scope.
May 10 00:50:47.099307 env[1415]: time="2025-05-10T00:50:47.099259008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wcl7r,Uid:0d04ff52-7666-4e05-a3dc-b8b6a88692ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\""
May 10 00:50:47.101885 env[1415]: time="2025-05-10T00:50:47.101836825Z" level=info msg="CreateContainer within sandbox \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:50:47.148060 env[1415]: time="2025-05-10T00:50:47.147998529Z" level=info msg="CreateContainer within sandbox \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\""
May 10 00:50:47.149371 env[1415]: time="2025-05-10T00:50:47.148580833Z" level=info msg="StartContainer for \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\""
May 10 00:50:47.164986 systemd[1]: Started cri-containerd-650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614.scope.
May 10 00:50:47.179462 systemd[1]: cri-containerd-650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614.scope: Deactivated successfully.
May 10 00:50:47.249745 env[1415]: time="2025-05-10T00:50:47.249611700Z" level=info msg="shim disconnected" id=650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614
May 10 00:50:47.249745 env[1415]: time="2025-05-10T00:50:47.249668500Z" level=warning msg="cleaning up after shim disconnected" id=650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614 namespace=k8s.io
May 10 00:50:47.249745 env[1415]: time="2025-05-10T00:50:47.249680900Z" level=info msg="cleaning up dead shim"
May 10 00:50:47.258173 env[1415]: time="2025-05-10T00:50:47.258115256Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4230 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:50:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 10 00:50:47.258539 env[1415]: time="2025-05-10T00:50:47.258425758Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed"
May 10 00:50:47.259308 env[1415]: time="2025-05-10T00:50:47.259262963Z" level=error msg="Failed to pipe stderr of container \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\"" error="reading from a closed fifo"
May 10 00:50:47.259409 env[1415]: time="2025-05-10T00:50:47.259328764Z" level=error msg="Failed to pipe stdout of container \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\"" error="reading from a closed fifo"
May 10 00:50:47.266818 env[1415]: time="2025-05-10T00:50:47.266761513Z" level=error msg="StartContainer for \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 10 00:50:47.267094 kubelet[2382]: E0510 00:50:47.267055 2382 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614"
May 10 00:50:47.267282 kubelet[2382]: E0510 00:50:47.267259 2382 kuberuntime_manager.go:1272] "Unhandled Error" err=<
May 10 00:50:47.267282 kubelet[2382]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 10 00:50:47.267282 kubelet[2382]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 10 00:50:47.267282 kubelet[2382]: rm /hostbin/cilium-mount
May 10 00:50:47.267460 kubelet[2382]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dssbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wcl7r_kube-system(0d04ff52-7666-4e05-a3dc-b8b6a88692ed): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 10 00:50:47.267460 kubelet[2382]: > logger="UnhandledError"
May 10 00:50:47.268778 kubelet[2382]: E0510 00:50:47.268729 2382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wcl7r" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed"
May 10 00:50:47.495593 sshd[4162]: Accepted publickey for core from 10.200.16.10 port 39956 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g
May 10 00:50:47.497242 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:50:47.502380 systemd[1]: Started session-28.scope.
May 10 00:50:47.502818 systemd-logind[1399]: New session 28 of user core.
May 10 00:50:47.594674 env[1415]: time="2025-05-10T00:50:47.594626276Z" level=info msg="CreateContainer within sandbox \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
May 10 00:50:47.644705 env[1415]: time="2025-05-10T00:50:47.644639906Z" level=info msg="CreateContainer within sandbox \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\""
May 10 00:50:47.645394 env[1415]: time="2025-05-10T00:50:47.645347911Z" level=info msg="StartContainer for \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\""
May 10 00:50:47.663072 systemd[1]: Started cri-containerd-7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc.scope.
May 10 00:50:47.674336 systemd[1]: cri-containerd-7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc.scope: Deactivated successfully.
May 10 00:50:47.699046 env[1415]: time="2025-05-10T00:50:47.698983164Z" level=info msg="shim disconnected" id=7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc
May 10 00:50:47.699046 env[1415]: time="2025-05-10T00:50:47.699047765Z" level=warning msg="cleaning up after shim disconnected" id=7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc namespace=k8s.io
May 10 00:50:47.699381 env[1415]: time="2025-05-10T00:50:47.699059365Z" level=info msg="cleaning up dead shim"
May 10 00:50:47.707438 env[1415]: time="2025-05-10T00:50:47.707389620Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4267 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:50:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 10 00:50:47.707711 env[1415]: time="2025-05-10T00:50:47.707655522Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed"
May 10 00:50:47.707946 env[1415]: time="2025-05-10T00:50:47.707900023Z" level=error msg="Failed to pipe stderr of container \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\"" error="reading from a closed fifo"
May 10 00:50:47.710294 env[1415]: time="2025-05-10T00:50:47.710248139Z" level=error msg="Failed to pipe stdout of container \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\"" error="reading from a closed fifo"
May 10 00:50:47.715372 env[1415]: time="2025-05-10T00:50:47.715323372Z" level=error msg="StartContainer for \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 10 00:50:47.715604 kubelet[2382]: E0510 00:50:47.715572 2382 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc"
May 10 00:50:47.715957 kubelet[2382]: E0510 00:50:47.715706 2382 kuberuntime_manager.go:1272] "Unhandled Error" err=<
May 10 00:50:47.715957 kubelet[2382]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 10 00:50:47.715957 kubelet[2382]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 10 00:50:47.715957 kubelet[2382]: rm /hostbin/cilium-mount
May 10 00:50:47.715957 kubelet[2382]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dssbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wcl7r_kube-system(0d04ff52-7666-4e05-a3dc-b8b6a88692ed): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 10 00:50:47.715957 kubelet[2382]: > logger="UnhandledError"
May 10 00:50:47.718068 kubelet[2382]: E0510 00:50:47.717359 2382 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wcl7r" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed"
May 10 00:50:48.017134 sshd[4162]: pam_unix(sshd:session): session closed for user core
May 10 00:50:48.020453 systemd[1]: sshd@25-10.200.8.31:22-10.200.16.10:39956.service: Deactivated successfully.
May 10 00:50:48.021916 systemd[1]: session-28.scope: Deactivated successfully.
May 10 00:50:48.021971 systemd-logind[1399]: Session 28 logged out. Waiting for processes to exit.
May 10 00:50:48.023482 systemd-logind[1399]: Removed session 28.
May 10 00:50:48.123763 systemd[1]: Started sshd@26-10.200.8.31:22-10.200.16.10:39968.service.
May 10 00:50:48.594119 kubelet[2382]: I0510 00:50:48.594085 2382 scope.go:117] "RemoveContainer" containerID="650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614"
May 10 00:50:48.595373 env[1415]: time="2025-05-10T00:50:48.595306566Z" level=info msg="StopPodSandbox for \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\""
May 10 00:50:48.596491 env[1415]: time="2025-05-10T00:50:48.596457673Z" level=info msg="RemoveContainer for \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\""
May 10 00:50:48.598284 env[1415]: time="2025-05-10T00:50:48.596074071Z" level=info msg="Container to stop \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:50:48.598447 env[1415]: time="2025-05-10T00:50:48.598423286Z" level=info msg="Container to stop \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:50:48.603881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2-shm.mount: Deactivated successfully.
May 10 00:50:48.614500 systemd[1]: cri-containerd-14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2.scope: Deactivated successfully.
May 10 00:50:48.616619 env[1415]: time="2025-05-10T00:50:48.616578606Z" level=info msg="RemoveContainer for \"650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614\" returns successfully"
May 10 00:50:48.641757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2-rootfs.mount: Deactivated successfully.
May 10 00:50:48.660878 env[1415]: time="2025-05-10T00:50:48.660822297Z" level=info msg="shim disconnected" id=14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2
May 10 00:50:48.661125 env[1415]: time="2025-05-10T00:50:48.660878297Z" level=warning msg="cleaning up after shim disconnected" id=14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2 namespace=k8s.io
May 10 00:50:48.661125 env[1415]: time="2025-05-10T00:50:48.660892997Z" level=info msg="cleaning up dead shim"
May 10 00:50:48.668645 env[1415]: time="2025-05-10T00:50:48.668610648Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4308 runtime=io.containerd.runc.v2\n"
May 10 00:50:48.668951 env[1415]: time="2025-05-10T00:50:48.668917650Z" level=info msg="TearDown network for sandbox \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\" successfully"
May 10 00:50:48.669030 env[1415]: time="2025-05-10T00:50:48.668950050Z" level=info msg="StopPodSandbox for \"14bbb00345af7b0062fc2de5c1c6982f160b9243d141b9e05671100af49844c2\" returns successfully"
May 10 00:50:48.759412 sshd[4287]: Accepted publickey for core from 10.200.16.10 port 39968 ssh2: RSA SHA256:BLSLhhUraDEt88EfUErhlSBtLTKQ7R9lQ68MHwbBo5g
May 10 00:50:48.761114 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:50:48.766768 systemd[1]: Started session-29.scope.
May 10 00:50:48.767278 systemd-logind[1399]: New session 29 of user core.
May 10 00:50:48.822147 kubelet[2382]: I0510 00:50:48.822091 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-lib-modules\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822179 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-clustermesh-secrets\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822212 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hostproc\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822236 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-etc-cni-netd\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822258 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-bpf-maps\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822282 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cni-path\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822309 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-xtables-lock\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822337 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-net\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822371 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dssbx\" (UniqueName: \"kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-kube-api-access-dssbx\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822399 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-ipsec-secrets\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822428 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-kernel\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822455 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-cgroup\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822480 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-run\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822511 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hubble-tls\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.822736 kubelet[2382]: I0510 00:50:48.822544 2382 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-config-path\") pod \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\" (UID: \"0d04ff52-7666-4e05-a3dc-b8b6a88692ed\") "
May 10 00:50:48.823642 kubelet[2382]: I0510 00:50:48.823600 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.823782 kubelet[2382]: I0510 00:50:48.823763 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.824595 kubelet[2382]: I0510 00:50:48.824570 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.824760 kubelet[2382]: I0510 00:50:48.824739 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.824873 kubelet[2382]: I0510 00:50:48.824857 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.824975 kubelet[2382]: I0510 00:50:48.824962 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.825080 kubelet[2382]: I0510 00:50:48.825065 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.825273 kubelet[2382]: I0510 00:50:48.825256 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.825643 kubelet[2382]: I0510 00:50:48.825617 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:50:48.825739 kubelet[2382]: I0510 00:50:48.825659 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.825739 kubelet[2382]: I0510 00:50:48.825682 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:50:48.831481 systemd[1]: var-lib-kubelet-pods-0d04ff52\x2d7666\x2d4e05\x2da3dc\x2db8b6a88692ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 10 00:50:48.834780 systemd[1]: var-lib-kubelet-pods-0d04ff52\x2d7666\x2d4e05\x2da3dc\x2db8b6a88692ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddssbx.mount: Deactivated successfully.
May 10 00:50:48.837557 kubelet[2382]: I0510 00:50:48.837531 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:50:48.837737 kubelet[2382]: I0510 00:50:48.837710 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:50:48.837914 kubelet[2382]: I0510 00:50:48.837884 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-kube-api-access-dssbx" (OuterVolumeSpecName: "kube-api-access-dssbx") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "kube-api-access-dssbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:50:48.839693 kubelet[2382]: I0510 00:50:48.839653 2382 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0d04ff52-7666-4e05-a3dc-b8b6a88692ed" (UID: "0d04ff52-7666-4e05-a3dc-b8b6a88692ed"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.922990 2382 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-lib-modules\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923046 2382 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-clustermesh-secrets\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923068 2382 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hostproc\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923087 2382 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-etc-cni-netd\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923103 2382 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-bpf-maps\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923117 2382 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cni-path\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923132 2382 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-xtables-lock\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923150 2382 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-net\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923209 2382 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dssbx\" (UniqueName: \"kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-kube-api-access-dssbx\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923225 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923242 2382 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923258 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-cgroup\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923273 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-run\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923288 2382 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-cilium-config-path\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.925019 kubelet[2382]: I0510 00:50:48.923303 2382 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d04ff52-7666-4e05-a3dc-b8b6a88692ed-hubble-tls\") on node \"ci-3510.3.7-n-8a4b3429d2\" DevicePath \"\""
May 10 00:50:48.923574 systemd[1]: var-lib-kubelet-pods-0d04ff52\x2d7666\x2d4e05\x2da3dc\x2db8b6a88692ed-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 10 00:50:48.923694 systemd[1]: var-lib-kubelet-pods-0d04ff52\x2d7666\x2d4e05\x2da3dc\x2db8b6a88692ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:50:49.439867 kubelet[2382]: E0510 00:50:49.439808 2382 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:50:49.597712 kubelet[2382]: I0510 00:50:49.597678 2382 scope.go:117] "RemoveContainer" containerID="7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc"
May 10 00:50:49.598858 env[1415]: time="2025-05-10T00:50:49.598805354Z" level=info msg="RemoveContainer for \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\""
May 10 00:50:49.603274 systemd[1]: Removed slice kubepods-burstable-pod0d04ff52_7666_4e05_a3dc_b8b6a88692ed.slice.
May 10 00:50:49.614997 env[1415]: time="2025-05-10T00:50:49.614945060Z" level=info msg="RemoveContainer for \"7efb75d180567ca902ccfd3f982f1573ffd376750bec9c252fd0f0c3980b01bc\" returns successfully"
May 10 00:50:49.656115 kubelet[2382]: E0510 00:50:49.656080 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed" containerName="mount-cgroup"
May 10 00:50:49.656332 kubelet[2382]: E0510 00:50:49.656315 2382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed" containerName="mount-cgroup"
May 10 00:50:49.656474 kubelet[2382]: I0510 00:50:49.656460 2382 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed" containerName="mount-cgroup"
May 10 00:50:49.656562 kubelet[2382]: I0510 00:50:49.656551 2382 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed" containerName="mount-cgroup"
May 10 00:50:49.659457 kubelet[2382]: W0510 00:50:49.659429 2382 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-n-8a4b3429d2" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-8a4b3429d2' and this object
May 10 00:50:49.659621 kubelet[2382]: E0510 00:50:49.659597 2382 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.7-n-8a4b3429d2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-8a4b3429d2' and this object" logger="UnhandledError"
May 10 00:50:49.663596 systemd[1]: Created slice kubepods-burstable-podb7b184ae_fe63_4443_a64d_1b69f9816453.slice.
May 10 00:50:49.830202 kubelet[2382]: I0510 00:50:49.830146 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-host-proc-sys-net\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830202 kubelet[2382]: I0510 00:50:49.830206 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-etc-cni-netd\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830229 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7b184ae-fe63-4443-a64d-1b69f9816453-clustermesh-secrets\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830250 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zl9\" (UniqueName: \"kubernetes.io/projected/b7b184ae-fe63-4443-a64d-1b69f9816453-kube-api-access-l5zl9\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830269 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-cni-path\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830288 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7b184ae-fe63-4443-a64d-1b69f9816453-cilium-ipsec-secrets\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830307 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-cilium-cgroup\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830329 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-xtables-lock\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830350 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-cilium-run\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830383 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7b184ae-fe63-4443-a64d-1b69f9816453-cilium-config-path\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830404 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-lib-modules\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830426 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-host-proc-sys-kernel\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830446 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7b184ae-fe63-4443-a64d-1b69f9816453-hubble-tls\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830467 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-bpf-maps\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.830736 kubelet[2382]: I0510 00:50:49.830489 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7b184ae-fe63-4443-a64d-1b69f9816453-hostproc\") pod \"cilium-nwwvk\" (UID: \"b7b184ae-fe63-4443-a64d-1b69f9816453\") " pod="kube-system/cilium-nwwvk"
May 10 00:50:49.962572 kubelet[2382]: I0510 00:50:49.962535 2382 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d04ff52-7666-4e05-a3dc-b8b6a88692ed" path="/var/lib/kubelet/pods/0d04ff52-7666-4e05-a3dc-b8b6a88692ed/volumes"
May 10 00:50:50.356223 kubelet[2382]: W0510 00:50:50.356154 2382 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d04ff52_7666_4e05_a3dc_b8b6a88692ed.slice/cri-containerd-650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614.scope WatchSource:0}: container "650992cdc253df98592cae56bb95508aa97a82c21654a3aff4f97d7145ed1614" in namespace "k8s.io": not found
May 10 00:50:50.726671 kubelet[2382]: I0510 00:50:50.726615 2382 setters.go:600] "Node became not ready" node="ci-3510.3.7-n-8a4b3429d2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:50:50Z","lastTransitionTime":"2025-05-10T00:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 00:50:51.169993 env[1415]: time="2025-05-10T00:50:51.169849428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nwwvk,Uid:b7b184ae-fe63-4443-a64d-1b69f9816453,Namespace:kube-system,Attempt:0,}"
May 10 00:50:51.219307 env[1415]: time="2025-05-10T00:50:51.219223449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:50:51.219532 env[1415]: time="2025-05-10T00:50:51.219315750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:50:51.219532 env[1415]: time="2025-05-10T00:50:51.219345950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:50:51.220224 env[1415]: time="2025-05-10T00:50:51.220122855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558 pid=4342 runtime=io.containerd.runc.v2
May 10 00:50:51.248182 systemd[1]: run-containerd-runc-k8s.io-ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558-runc.3f5VJs.mount: Deactivated successfully.
May 10 00:50:51.251322 systemd[1]: Started cri-containerd-ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558.scope.
May 10 00:50:51.274269 env[1415]: time="2025-05-10T00:50:51.274226208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nwwvk,Uid:b7b184ae-fe63-4443-a64d-1b69f9816453,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\""
May 10 00:50:51.287641 env[1415]: time="2025-05-10T00:50:51.287403094Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:50:51.333338 env[1415]: time="2025-05-10T00:50:51.333289393Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf\""
May 10 00:50:51.335085 env[1415]: time="2025-05-10T00:50:51.333997997Z" level=info msg="StartContainer for \"fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf\""
May 10 00:50:51.351127 systemd[1]: Started cri-containerd-fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf.scope.
May 10 00:50:51.385781 env[1415]: time="2025-05-10T00:50:51.385734035Z" level=info msg="StartContainer for \"fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf\" returns successfully"
May 10 00:50:51.388751 systemd[1]: cri-containerd-fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf.scope: Deactivated successfully.
May 10 00:50:51.437374 env[1415]: time="2025-05-10T00:50:51.436546666Z" level=info msg="shim disconnected" id=fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf
May 10 00:50:51.437374 env[1415]: time="2025-05-10T00:50:51.436602566Z" level=warning msg="cleaning up after shim disconnected" id=fb99c6b6af5537cffba5a972535258205933582091e79b6c6d7e61dbbdb36acf namespace=k8s.io
May 10 00:50:51.437374 env[1415]: time="2025-05-10T00:50:51.436614166Z" level=info msg="cleaning up dead shim"
May 10 00:50:51.446277 env[1415]: time="2025-05-10T00:50:51.446229229Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4428 runtime=io.containerd.runc.v2\n"
May 10 00:50:51.608199 env[1415]: time="2025-05-10T00:50:51.607518280Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:50:51.647664 env[1415]: time="2025-05-10T00:50:51.647612941Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836\""
May 10 00:50:51.649426 env[1415]: time="2025-05-10T00:50:51.648233745Z" level=info msg="StartContainer for \"ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836\""
May 10 00:50:51.665269 systemd[1]: Started cri-containerd-ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836.scope.
May 10 00:50:51.700953 env[1415]: time="2025-05-10T00:50:51.700855188Z" level=info msg="StartContainer for \"ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836\" returns successfully"
May 10 00:50:51.702268 systemd[1]: cri-containerd-ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836.scope: Deactivated successfully.
May 10 00:50:51.747247 env[1415]: time="2025-05-10T00:50:51.747198690Z" level=info msg="shim disconnected" id=ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836
May 10 00:50:51.747247 env[1415]: time="2025-05-10T00:50:51.747248691Z" level=warning msg="cleaning up after shim disconnected" id=ed27f8635a170e142921531afeddde54d3366b252bfb175ef92d46a8eb1e1836 namespace=k8s.io
May 10 00:50:51.747247 env[1415]: time="2025-05-10T00:50:51.747261091Z" level=info msg="cleaning up dead shim"
May 10 00:50:51.754736 env[1415]: time="2025-05-10T00:50:51.754698339Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4490 runtime=io.containerd.runc.v2\n"
May 10 00:50:52.613336 env[1415]: time="2025-05-10T00:50:52.613279522Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:50:52.650066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746669247.mount: Deactivated successfully.
May 10 00:50:52.677828 env[1415]: time="2025-05-10T00:50:52.677787342Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b\""
May 10 00:50:52.679440 env[1415]: time="2025-05-10T00:50:52.679402952Z" level=info msg="StartContainer for \"b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b\""
May 10 00:50:52.699971 systemd[1]: Started cri-containerd-b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b.scope.
May 10 00:50:52.732422 systemd[1]: cri-containerd-b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b.scope: Deactivated successfully.
May 10 00:50:52.740989 env[1415]: time="2025-05-10T00:50:52.740947452Z" level=info msg="StartContainer for \"b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b\" returns successfully"
May 10 00:50:52.805005 env[1415]: time="2025-05-10T00:50:52.804948568Z" level=info msg="shim disconnected" id=b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b
May 10 00:50:52.805005 env[1415]: time="2025-05-10T00:50:52.805006668Z" level=warning msg="cleaning up after shim disconnected" id=b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b namespace=k8s.io
May 10 00:50:52.805343 env[1415]: time="2025-05-10T00:50:52.805018868Z" level=info msg="cleaning up dead shim"
May 10 00:50:52.815697 env[1415]: time="2025-05-10T00:50:52.815653237Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4548 runtime=io.containerd.runc.v2\n"
May 10 00:50:53.211090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0dabdbd6932dbd434380dd14bf6ccb4ce474762c25362a36853631fd3639b9b-rootfs.mount: Deactivated successfully.
May 10 00:50:53.616144 env[1415]: time="2025-05-10T00:50:53.616088326Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:50:53.675412 env[1415]: time="2025-05-10T00:50:53.675366010Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae\""
May 10 00:50:53.676103 env[1415]: time="2025-05-10T00:50:53.676065715Z" level=info msg="StartContainer for \"0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae\""
May 10 00:50:53.700452 systemd[1]: Started cri-containerd-0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae.scope.
May 10 00:50:53.726126 systemd[1]: cri-containerd-0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae.scope: Deactivated successfully.
May 10 00:50:53.730418 env[1415]: time="2025-05-10T00:50:53.730375667Z" level=info msg="StartContainer for \"0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae\" returns successfully"
May 10 00:50:53.788535 env[1415]: time="2025-05-10T00:50:53.788466443Z" level=info msg="shim disconnected" id=0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae
May 10 00:50:53.788535 env[1415]: time="2025-05-10T00:50:53.788534244Z" level=warning msg="cleaning up after shim disconnected" id=0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae namespace=k8s.io
May 10 00:50:53.788978 env[1415]: time="2025-05-10T00:50:53.788548544Z" level=info msg="cleaning up dead shim"
May 10 00:50:53.800682 env[1415]: time="2025-05-10T00:50:53.800632822Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:50:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4608 runtime=io.containerd.runc.v2\n"
May 10 00:50:54.210692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0225f5ad459fac307a8431aee1c2cc46c58ea50e2f566f3429e28ac00c1a10ae-rootfs.mount: Deactivated successfully.
May 10 00:50:54.441074 kubelet[2382]: E0510 00:50:54.440803 2382 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:50:54.623537 env[1415]: time="2025-05-10T00:50:54.623482641Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:50:54.669478 env[1415]: time="2025-05-10T00:50:54.669427938Z" level=info msg="CreateContainer within sandbox \"ea6e3702b8282647a8d9aaae227b30d9f5490690a6a32f0354b3d3d88ae95558\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28\""
May 10 00:50:54.670143 env[1415]: time="2025-05-10T00:50:54.670104942Z" level=info msg="StartContainer for \"bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28\""
May 10 00:50:54.693357 systemd[1]: Started cri-containerd-bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28.scope.
May 10 00:50:54.728438 env[1415]: time="2025-05-10T00:50:54.728380219Z" level=info msg="StartContainer for \"bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28\" returns successfully"
May 10 00:50:55.114202 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:50:55.648203 kubelet[2382]: I0510 00:50:55.648117 2382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nwwvk" podStartSLOduration=6.648097448 podStartE2EDuration="6.648097448s" podCreationTimestamp="2025-05-10 00:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:50:55.647694145 +0000 UTC m=+281.781320741" watchObservedRunningTime="2025-05-10 00:50:55.648097448 +0000 UTC m=+281.781724044"
May 10 00:50:57.345315 systemd[1]: run-containerd-runc-k8s.io-bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28-runc.dq9yUp.mount: Deactivated successfully.
May 10 00:50:57.405312 kubelet[2382]: E0510 00:50:57.405266 2382 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58232->127.0.0.1:45139: write tcp 127.0.0.1:58232->127.0.0.1:45139: write: broken pipe
May 10 00:50:57.864931 systemd-networkd[1568]: lxc_health: Link UP
May 10 00:50:57.907437 systemd-networkd[1568]: lxc_health: Gained carrier
May 10 00:50:57.908225 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:50:59.534613 systemd[1]: run-containerd-runc-k8s.io-bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28-runc.0aSHHl.mount: Deactivated successfully.
May 10 00:50:59.557495 systemd-networkd[1568]: lxc_health: Gained IPv6LL
May 10 00:51:01.728255 systemd[1]: run-containerd-runc-k8s.io-bae1c02d979417c62c8851ddd039443bbba9ce52dda1e0d7690f580816291d28-runc.scJcSH.mount: Deactivated successfully.
May 10 00:51:04.035375 sshd[4287]: pam_unix(sshd:session): session closed for user core
May 10 00:51:04.038406 systemd[1]: sshd@26-10.200.8.31:22-10.200.16.10:39968.service: Deactivated successfully.
May 10 00:51:04.039322 systemd[1]: session-29.scope: Deactivated successfully.
May 10 00:51:04.040006 systemd-logind[1399]: Session 29 logged out. Waiting for processes to exit.
May 10 00:51:04.040915 systemd-logind[1399]: Removed session 29.