Mar 17 18:48:18.033832 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:48:18.033865 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:48:18.033880 kernel: BIOS-provided physical RAM map:
Mar 17 18:48:18.033890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 18:48:18.033900 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 17 18:48:18.033910 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Mar 17 18:48:18.033925 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Mar 17 18:48:18.033936 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 17 18:48:18.033947 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 17 18:48:18.033958 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 17 18:48:18.033968 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 17 18:48:18.033979 kernel: printk: bootconsole [earlyser0] enabled
Mar 17 18:48:18.033989 kernel: NX (Execute Disable) protection: active
Mar 17 18:48:18.034000 kernel: efi: EFI v2.70 by Microsoft
Mar 17 18:48:18.034017 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018
Mar 17 18:48:18.034029 kernel: random: crng init done
Mar 17 18:48:18.034041 kernel: SMBIOS 3.1.0 present.
Mar 17 18:48:18.034052 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Mar 17 18:48:18.034064 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 17 18:48:18.034076 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Mar 17 18:48:18.034087 kernel: Hyper-V Host Build:20348-10.0-1-0.1799
Mar 17 18:48:18.034099 kernel: Hyper-V: Nested features: 0x1e0101
Mar 17 18:48:18.034113 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 17 18:48:18.034124 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 17 18:48:18.034135 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 17 18:48:18.034147 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Mar 17 18:48:18.034159 kernel: tsc: Detected 2593.906 MHz processor
Mar 17 18:48:18.034171 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:48:18.034183 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:48:18.034195 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Mar 17 18:48:18.034207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:48:18.034218 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Mar 17 18:48:18.034232 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Mar 17 18:48:18.034244 kernel: Using GB pages for direct mapping
Mar 17 18:48:18.034256 kernel: Secure boot disabled
Mar 17 18:48:18.034268 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:48:18.034279 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 17 18:48:18.034291 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.034303 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.034315 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Mar 17 18:48:18.034333 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 17 18:48:18.034346 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042032 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042088 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042102 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042115 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042133 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042145 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:18.042158 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 17 18:48:18.042170 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Mar 17 18:48:18.042182 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 17 18:48:18.042195 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 17 18:48:18.042207 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 17 18:48:18.042219 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 17 18:48:18.042234 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 17 18:48:18.042247 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Mar 17 18:48:18.042259 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 17 18:48:18.042272 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Mar 17 18:48:18.042284 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:48:18.042296 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 18:48:18.042308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 17 18:48:18.042321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Mar 17 18:48:18.042333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Mar 17 18:48:18.042348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 17 18:48:18.042369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 17 18:48:18.042382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 17 18:48:18.042394 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 17 18:48:18.042406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 17 18:48:18.042418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 17 18:48:18.042431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 17 18:48:18.042443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 17 18:48:18.042455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 17 18:48:18.042470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Mar 17 18:48:18.042482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Mar 17 18:48:18.042494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Mar 17 18:48:18.042506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Mar 17 18:48:18.042519 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Mar 17 18:48:18.042531 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Mar 17 18:48:18.042544 kernel: Zone ranges:
Mar 17 18:48:18.042557 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:48:18.042569 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 18:48:18.042583 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 17 18:48:18.042596 kernel: Movable zone start for each node
Mar 17 18:48:18.042608 kernel: Early memory node ranges
Mar 17 18:48:18.042620 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 18:48:18.042632 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Mar 17 18:48:18.042645 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 17 18:48:18.042657 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 17 18:48:18.042669 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 17 18:48:18.042682 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:48:18.042697 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 18:48:18.042709 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Mar 17 18:48:18.042721 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 17 18:48:18.042733 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 17 18:48:18.042745 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:48:18.042757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:48:18.042770 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:48:18.042782 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 17 18:48:18.042794 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:48:18.042809 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 17 18:48:18.042821 kernel: Booting paravirtualized kernel on Hyper-V
Mar 17 18:48:18.042833 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:48:18.042846 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:48:18.042858 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 18:48:18.042870 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 18:48:18.042882 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:48:18.042894 kernel: Hyper-V: PV spinlocks enabled
Mar 17 18:48:18.042907 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:48:18.042921 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Mar 17 18:48:18.042934 kernel: Policy zone: Normal
Mar 17 18:48:18.042948 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:48:18.042961 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:48:18.042973 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 17 18:48:18.042985 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:48:18.042998 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:48:18.043011 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Mar 17 18:48:18.043026 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:48:18.043038 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:48:18.043059 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:48:18.043075 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:48:18.043089 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:48:18.043102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:48:18.043114 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:48:18.043127 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:48:18.043140 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:48:18.043153 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:48:18.043166 kernel: Using NULL legacy PIC
Mar 17 18:48:18.043181 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 17 18:48:18.043194 kernel: Console: colour dummy device 80x25
Mar 17 18:48:18.043207 kernel: printk: console [tty1] enabled
Mar 17 18:48:18.043220 kernel: printk: console [ttyS0] enabled
Mar 17 18:48:18.043233 kernel: printk: bootconsole [earlyser0] disabled
Mar 17 18:48:18.043248 kernel: ACPI: Core revision 20210730
Mar 17 18:48:18.043261 kernel: Failed to register legacy timer interrupt
Mar 17 18:48:18.043274 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:48:18.043286 kernel: Hyper-V: Using IPI hypercalls
Mar 17 18:48:18.043299 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Mar 17 18:48:18.043313 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 17 18:48:18.043326 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 17 18:48:18.043339 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:48:18.043351 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:48:18.043370 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:48:18.043386 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:48:18.043399 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 17 18:48:18.043412 kernel: RETBleed: Vulnerable
Mar 17 18:48:18.043425 kernel: Speculative Store Bypass: Vulnerable
Mar 17 18:48:18.043438 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:48:18.043450 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:48:18.043463 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:48:18.043476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:48:18.043489 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:48:18.043502 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 17 18:48:18.043517 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 17 18:48:18.043529 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 17 18:48:18.043542 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:48:18.043555 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 17 18:48:18.043568 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 17 18:48:18.043580 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 17 18:48:18.043593 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Mar 17 18:48:18.043606 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:48:18.043619 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:48:18.043631 kernel: LSM: Security Framework initializing
Mar 17 18:48:18.043644 kernel: SELinux: Initializing.
Mar 17 18:48:18.043656 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:48:18.043672 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:48:18.043685 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 17 18:48:18.043698 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 17 18:48:18.043711 kernel: signal: max sigframe size: 3632
Mar 17 18:48:18.043724 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:48:18.043737 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:48:18.043750 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:48:18.043762 kernel: x86: Booting SMP configuration:
Mar 17 18:48:18.043776 kernel: .... node #0, CPUs: #1
Mar 17 18:48:18.043789 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 17 18:48:18.043805 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 18:48:18.043818 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:48:18.043831 kernel: smpboot: Max logical packages: 1
Mar 17 18:48:18.043844 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Mar 17 18:48:18.043857 kernel: devtmpfs: initialized
Mar 17 18:48:18.043870 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:48:18.043883 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 17 18:48:18.043896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:48:18.043911 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:48:18.043924 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:48:18.043937 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:48:18.043949 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:48:18.043962 kernel: audit: type=2000 audit(1742237297.023:1): state=initialized audit_enabled=0 res=1
Mar 17 18:48:18.043975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:48:18.043988 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:48:18.044001 kernel: cpuidle: using governor menu
Mar 17 18:48:18.044013 kernel: ACPI: bus type PCI registered
Mar 17 18:48:18.044029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:48:18.044042 kernel: dca service started, version 1.12.1
Mar 17 18:48:18.044054 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:48:18.044067 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:48:18.044080 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:48:18.044093 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:48:18.044106 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:48:18.044119 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:48:18.044132 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:48:18.044147 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:48:18.044160 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:48:18.044172 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:48:18.044185 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:48:18.044198 kernel: ACPI: Interpreter enabled
Mar 17 18:48:18.044211 kernel: ACPI: PM: (supports S0 S5)
Mar 17 18:48:18.044224 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:48:18.044237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:48:18.044250 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 17 18:48:18.044265 kernel: iommu: Default domain type: Translated
Mar 17 18:48:18.044276 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:48:18.044285 kernel: vgaarb: loaded
Mar 17 18:48:18.044295 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:48:18.044306 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:48:18.044318 kernel: PTP clock support registered
Mar 17 18:48:18.044329 kernel: Registered efivars operations
Mar 17 18:48:18.044342 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:48:18.044354 kernel: PCI: System does not support PCI
Mar 17 18:48:18.044378 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 17 18:48:18.044388 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:48:18.044400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:48:18.044412 kernel: pnp: PnP ACPI init
Mar 17 18:48:18.044424 kernel: pnp: PnP ACPI: found 3 devices
Mar 17 18:48:18.044435 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:48:18.044446 kernel: NET: Registered PF_INET protocol family
Mar 17 18:48:18.044458 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:48:18.044470 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 17 18:48:18.044485 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:48:18.044498 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:48:18.044511 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Mar 17 18:48:18.044525 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 17 18:48:18.044538 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 18:48:18.044552 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 18:48:18.044566 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:48:18.044579 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:48:18.044593 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:48:18.044609 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 18:48:18.044623 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Mar 17 18:48:18.044637 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:48:18.044651 kernel: Initialise system trusted keyrings
Mar 17 18:48:18.044665 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 17 18:48:18.044678 kernel: Key type asymmetric registered
Mar 17 18:48:18.044689 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:48:18.044701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:48:18.044712 kernel: io scheduler mq-deadline registered
Mar 17 18:48:18.044728 kernel: io scheduler kyber registered
Mar 17 18:48:18.044742 kernel: io scheduler bfq registered
Mar 17 18:48:18.044756 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:48:18.044770 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:48:18.044784 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:48:18.044798 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 17 18:48:18.044812 kernel: i8042: PNP: No PS/2 controller found.
Mar 17 18:48:18.045008 kernel: rtc_cmos 00:02: registered as rtc0
Mar 17 18:48:18.045129 kernel: rtc_cmos 00:02: setting system clock to 2025-03-17T18:48:17 UTC (1742237297)
Mar 17 18:48:18.045238 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 17 18:48:18.045255 kernel: intel_pstate: CPU model not supported
Mar 17 18:48:18.045270 kernel: efifb: probing for efifb
Mar 17 18:48:18.045284 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 18:48:18.045298 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 18:48:18.045312 kernel: efifb: scrolling: redraw
Mar 17 18:48:18.045326 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:48:18.045340 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 18:48:18.045357 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:48:18.045409 kernel: pstore: Registered efi as persistent store backend
Mar 17 18:48:18.045424 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:48:18.045437 kernel: Segment Routing with IPv6
Mar 17 18:48:18.045451 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:48:18.045465 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:48:18.045479 kernel: Key type dns_resolver registered
Mar 17 18:48:18.045493 kernel: IPI shorthand broadcast: enabled
Mar 17 18:48:18.045507 kernel: sched_clock: Marking stable (891449200, 21210300)->(1115160800, -202501300)
Mar 17 18:48:18.045524 kernel: registered taskstats version 1
Mar 17 18:48:18.045538 kernel: Loading compiled-in X.509 certificates
Mar 17 18:48:18.045552 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:48:18.045565 kernel: Key type .fscrypt registered
Mar 17 18:48:18.045579 kernel: Key type fscrypt-provisioning registered
Mar 17 18:48:18.045592 kernel: pstore: Using crash dump compression: deflate
Mar 17 18:48:18.045607 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:48:18.045621 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:48:18.045638 kernel: ima: No architecture policies found
Mar 17 18:48:18.045652 kernel: clk: Disabling unused clocks
Mar 17 18:48:18.045666 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:48:18.045681 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:48:18.045695 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:48:18.045709 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:48:18.045723 kernel: Run /init as init process
Mar 17 18:48:18.045736 kernel: with arguments:
Mar 17 18:48:18.045750 kernel: /init
Mar 17 18:48:18.045766 kernel: with environment:
Mar 17 18:48:18.045780 kernel: HOME=/
Mar 17 18:48:18.045793 kernel: TERM=linux
Mar 17 18:48:18.045806 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:48:18.045824 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:48:18.045841 systemd[1]: Detected virtualization microsoft.
Mar 17 18:48:18.045856 systemd[1]: Detected architecture x86-64.
Mar 17 18:48:18.045870 systemd[1]: Running in initrd.
Mar 17 18:48:18.045887 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:48:18.045901 systemd[1]: Hostname set to .
Mar 17 18:48:18.045916 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:48:18.045931 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:48:18.045945 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:48:18.045959 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:48:18.045974 systemd[1]: Reached target paths.target.
Mar 17 18:48:18.045987 systemd[1]: Reached target slices.target.
Mar 17 18:48:18.046004 systemd[1]: Reached target swap.target.
Mar 17 18:48:18.046019 systemd[1]: Reached target timers.target.
Mar 17 18:48:18.046035 systemd[1]: Listening on iscsid.socket.
Mar 17 18:48:18.046049 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:48:18.046064 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:48:18.046078 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:48:18.046093 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:48:18.046108 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:48:18.046125 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:48:18.046140 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:48:18.046155 systemd[1]: Reached target sockets.target.
Mar 17 18:48:18.046169 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:48:18.046184 systemd[1]: Finished network-cleanup.service.
Mar 17 18:48:18.046198 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:48:18.046213 systemd[1]: Starting systemd-journald.service...
Mar 17 18:48:18.046227 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:48:18.046242 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:48:18.046259 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:48:18.046280 systemd-journald[183]: Journal started
Mar 17 18:48:18.046352 systemd-journald[183]: Runtime Journal (/run/log/journal/65f5c7a395a5449a94f88e0b1ea557da) is 8.0M, max 159.0M, 151.0M free.
Mar 17 18:48:18.053134 systemd[1]: Started systemd-journald.service.
Mar 17 18:48:18.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.053660 systemd-modules-load[184]: Inserted module 'overlay'
Mar 17 18:48:18.068381 kernel: audit: type=1130 audit(1742237298.053:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.068889 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:48:18.073740 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:48:18.078272 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:48:18.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.094375 kernel: audit: type=1130 audit(1742237298.073:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.095599 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:48:18.100455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:48:18.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.120295 kernel: audit: type=1130 audit(1742237298.077:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.131809 systemd-resolved[185]: Positive Trust Anchors:
Mar 17 18:48:18.135689 kernel: audit: type=1130 audit(1742237298.093:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.131821 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:48:18.131863 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:48:18.141891 systemd-resolved[185]: Defaulting to hostname 'linux'.
Mar 17 18:48:18.154309 systemd[1]: Started systemd-resolved.service.
Mar 17 18:48:18.157043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:48:18.159398 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:48:18.196047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:48:18.196074 kernel: audit: type=1130 audit(1742237298.156:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.196088 kernel: audit: type=1130 audit(1742237298.158:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.197301 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:48:18.204216 kernel: Bridge firewalling registered
Mar 17 18:48:18.200179 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:48:18.202142 systemd-modules-load[184]: Inserted module 'br_netfilter'
Mar 17 18:48:18.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.216852 dracut-cmdline[200]: dracut-dracut-053
Mar 17 18:48:18.224952 kernel: audit: type=1130 audit(1742237298.199:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.224986 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:48:18.264394 kernel: SCSI subsystem initialized
Mar 17 18:48:18.290082 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:48:18.290173 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:48:18.295611 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:48:18.299889 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:48:18.300087 systemd-modules-load[184]: Inserted module 'dm_multipath' Mar 17 18:48:18.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:18.303294 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:48:18.323955 kernel: audit: type=1130 audit(1742237298.305:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:18.306675 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:48:18.327163 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:48:18.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:18.343384 kernel: audit: type=1130 audit(1742237298.331:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:18.343424 kernel: iscsi: registered transport (tcp) Mar 17 18:48:18.374270 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:48:18.374355 kernel: QLogic iSCSI HBA Driver Mar 17 18:48:18.404531 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:48:18.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:18.409629 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:48:18.461390 kernel: raid6: avx512x4 gen() 17948 MB/s Mar 17 18:48:18.480381 kernel: raid6: avx512x4 xor() 8036 MB/s Mar 17 18:48:18.500377 kernel: raid6: avx512x2 gen() 18128 MB/s Mar 17 18:48:18.521378 kernel: raid6: avx512x2 xor() 27110 MB/s Mar 17 18:48:18.541373 kernel: raid6: avx512x1 gen() 18175 MB/s Mar 17 18:48:18.561378 kernel: raid6: avx512x1 xor() 24473 MB/s Mar 17 18:48:18.581381 kernel: raid6: avx2x4 gen() 18083 MB/s Mar 17 18:48:18.601379 kernel: raid6: avx2x4 xor() 7293 MB/s Mar 17 18:48:18.621377 kernel: raid6: avx2x2 gen() 17948 MB/s Mar 17 18:48:18.641379 kernel: raid6: avx2x2 xor() 20211 MB/s Mar 17 18:48:18.661374 kernel: raid6: avx2x1 gen() 13412 MB/s Mar 17 18:48:18.681374 kernel: raid6: avx2x1 xor() 17657 MB/s Mar 17 18:48:18.702382 kernel: raid6: sse2x4 gen() 10666 MB/s Mar 17 18:48:18.722380 kernel: raid6: sse2x4 xor() 6652 MB/s Mar 17 18:48:18.742382 kernel: raid6: sse2x2 gen() 11800 MB/s Mar 17 18:48:18.762378 kernel: raid6: sse2x2 xor() 7014 MB/s Mar 17 18:48:18.782376 kernel: raid6: sse2x1 gen() 10576 MB/s Mar 17 18:48:18.805491 kernel: raid6: sse2x1 xor() 5427 MB/s Mar 17 18:48:18.805516 kernel: raid6: using algorithm avx512x1 gen() 18175 MB/s Mar 17 18:48:18.805528 kernel: raid6: .... xor() 24473 MB/s, rmw enabled Mar 17 18:48:18.808956 kernel: raid6: using avx512x2 recovery algorithm Mar 17 18:48:18.829388 kernel: xor: automatically using best checksumming function avx Mar 17 18:48:18.925397 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:48:18.933804 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:48:18.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:18.937000 audit: BPF prog-id=7 op=LOAD Mar 17 18:48:18.937000 audit: BPF prog-id=8 op=LOAD Mar 17 18:48:18.938354 systemd[1]: Starting systemd-udevd.service... Mar 17 18:48:18.954000 systemd-udevd[383]: Using default interface naming scheme 'v252'. Mar 17 18:48:18.961042 systemd[1]: Started systemd-udevd.service. Mar 17 18:48:18.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:18.966915 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:48:18.986122 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Mar 17 18:48:19.019326 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:48:19.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:19.022865 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:48:19.061105 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:48:19.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:19.108384 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:48:19.114385 kernel: hv_vmbus: Vmbus version:5.2 Mar 17 18:48:19.149386 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 18:48:19.178379 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 17 18:48:19.189544 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 18:48:19.189616 kernel: AES CTR mode by8 optimization enabled Mar 17 18:48:19.193234 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 18:48:19.199570 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 18:48:19.199623 kernel: scsi host0: storvsc_host_t Mar 17 18:48:19.202391 kernel: scsi host1: storvsc_host_t Mar 17 18:48:19.208420 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 18:48:19.208489 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 18:48:19.211469 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 18:48:19.232388 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 18:48:19.241385 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 17 18:48:19.247445 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 18:48:19.268265 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Mar 17 18:48:19.281146 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:48:19.281179 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 18:48:19.295706 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Mar 17 18:48:19.295899 kernel: sd 1:0:0:0: [sda] Write Protect is off Mar 17 18:48:19.296064 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Mar 17 18:48:19.296246 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 18:48:19.296436 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 18:48:19.296603 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:19.296623 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Mar 17 18:48:19.370031 kernel: hv_netvsc 7c1e5288-5828-7c1e-5288-58287c1e5288 eth0: VF slot 1 added Mar 17 18:48:19.383875 kernel: hv_vmbus: registering driver hv_pci Mar 17 18:48:19.383938 kernel: hv_pci 41c33467-26e8-4ce6-a5cc-66f64f0522d0: PCI VMBus probing: Using version 0x10004 Mar 17 18:48:19.457055 kernel: 
hv_pci 41c33467-26e8-4ce6-a5cc-66f64f0522d0: PCI host bridge to bus 26e8:00 Mar 17 18:48:19.457243 kernel: pci_bus 26e8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 17 18:48:19.457430 kernel: pci_bus 26e8:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 18:48:19.457576 kernel: pci 26e8:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 17 18:48:19.457745 kernel: pci 26e8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 18:48:19.457899 kernel: pci 26e8:00:02.0: enabling Extended Tags Mar 17 18:48:19.458057 kernel: pci 26e8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 26e8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 17 18:48:19.458209 kernel: pci_bus 26e8:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 18:48:19.458350 kernel: pci 26e8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 17 18:48:19.551394 kernel: mlx5_core 26e8:00:02.0: firmware version: 14.30.5000 Mar 17 18:48:19.803798 kernel: mlx5_core 26e8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Mar 17 18:48:19.803986 kernel: mlx5_core 26e8:00:02.0: Supported tc offload range - chains: 1, prios: 1 Mar 17 18:48:19.804173 kernel: mlx5_core 26e8:00:02.0: mlx5e_tc_post_act_init:40:(pid 281): firmware level support is missing Mar 17 18:48:19.804336 kernel: hv_netvsc 7c1e5288-5828-7c1e-5288-58287c1e5288 eth0: VF registering: eth1 Mar 17 18:48:19.804511 kernel: mlx5_core 26e8:00:02.0 eth1: joined to eth0 Mar 17 18:48:19.768856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:48:19.812382 kernel: mlx5_core 26e8:00:02.0 enP9960s1: renamed from eth1 Mar 17 18:48:19.822381 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (441) Mar 17 18:48:19.839980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:48:20.127629 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Mar 17 18:48:20.229088 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:48:20.235076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:48:20.242715 systemd[1]: Starting disk-uuid.service... Mar 17 18:48:20.254413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:20.269394 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:21.278394 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:21.279151 disk-uuid[561]: The operation has completed successfully. Mar 17 18:48:21.365603 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:48:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.365708 systemd[1]: Finished disk-uuid.service. Mar 17 18:48:21.371869 systemd[1]: Starting verity-setup.service... Mar 17 18:48:21.402387 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 18:48:21.855913 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:48:21.862593 systemd[1]: Finished verity-setup.service. Mar 17 18:48:21.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.869597 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:48:21.947395 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:48:21.947388 systemd[1]: Mounted sysusr-usr.mount. 
Mar 17 18:48:21.952854 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:48:21.959189 systemd[1]: Starting ignition-setup.service... Mar 17 18:48:21.967607 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:48:21.990972 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:48:21.991056 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:21.991075 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:22.044183 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:48:22.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.051000 audit: BPF prog-id=9 op=LOAD Mar 17 18:48:22.052792 systemd[1]: Starting systemd-networkd.service... Mar 17 18:48:22.080022 systemd-networkd[799]: lo: Link UP Mar 17 18:48:22.080032 systemd-networkd[799]: lo: Gained carrier Mar 17 18:48:22.084464 systemd-networkd[799]: Enumeration completed Mar 17 18:48:22.084890 systemd[1]: Started systemd-networkd.service. Mar 17 18:48:22.085230 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:48:22.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.096624 systemd[1]: Reached target network.target. Mar 17 18:48:22.102881 systemd[1]: Starting iscsiuio.service... Mar 17 18:48:22.110032 systemd[1]: Started iscsiuio.service. Mar 17 18:48:22.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:22.115852 systemd[1]: Starting iscsid.service... Mar 17 18:48:22.120691 iscsid[807]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:48:22.120691 iscsid[807]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:48:22.120691 iscsid[807]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:48:22.120691 iscsid[807]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:48:22.120691 iscsid[807]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:48:22.120691 iscsid[807]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:48:22.120691 iscsid[807]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:48:22.171783 kernel: mlx5_core 26e8:00:02.0 enP9960s1: Link up Mar 17 18:48:22.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.121543 systemd[1]: Started iscsid.service. Mar 17 18:48:22.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.132971 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:48:22.172589 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:48:22.177890 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:48:22.178267 systemd[1]: Reached target remote-fs-pre.target.
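[Editor's aside, not part of the boot log: the iscsid warning above describes the file it expects. A minimal sketch of creating such a file, assuming a made-up placeholder IQN and writing to a temporary directory instead of /etc/iscsi so no root access is needed:]

```shell
# Illustrative only: the IQN below is a placeholder, and the path is a
# temporary directory standing in for /etc/iscsi.
dir="$(mktemp -d)"
# The file must contain a single line of the form
# InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]
printf 'InitiatorName=iqn.2004-10.com.example:host1\n' > "$dir/initiatorname.iscsi"
cat "$dir/initiatorname.iscsi"
```

[On a real host the file would live at /etc/iscsi/initiatorname.iscsi, and iscsid would need to be restarted to pick it up.]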
Mar 17 18:48:22.183256 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:48:22.186036 systemd[1]: Reached target remote-fs.target. Mar 17 18:48:22.191251 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:48:22.212171 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:48:22.235260 kernel: kauditd_printk_skb: 16 callbacks suppressed Mar 17 18:48:22.235432 kernel: audit: type=1130 audit(1742237302.214:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.235456 kernel: hv_netvsc 7c1e5288-5828-7c1e-5288-58287c1e5288 eth0: Data path switched to VF: enP9960s1 Mar 17 18:48:22.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.244378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:48:22.249555 systemd-networkd[799]: enP9960s1: Link UP Mar 17 18:48:22.249794 systemd-networkd[799]: eth0: Link UP Mar 17 18:48:22.249944 systemd-networkd[799]: eth0: Gained carrier Mar 17 18:48:22.260636 systemd-networkd[799]: enP9960s1: Gained carrier Mar 17 18:48:22.279460 systemd-networkd[799]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 17 18:48:22.412019 systemd[1]: Finished ignition-setup.service. Mar 17 18:48:22.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.434415 kernel: audit: type=1130 audit(1742237302.415:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:22.435067 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:48:23.619632 systemd-networkd[799]: eth0: Gained IPv6LL Mar 17 18:48:29.057388 ignition[826]: Ignition 2.14.0 Mar 17 18:48:29.057410 ignition[826]: Stage: fetch-offline Mar 17 18:48:29.057518 ignition[826]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:29.057573 ignition[826]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:29.249591 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:29.249798 ignition[826]: parsed url from cmdline: "" Mar 17 18:48:29.249802 ignition[826]: no config URL provided Mar 17 18:48:29.249808 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:48:29.284552 kernel: audit: type=1130 audit(1742237309.265:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.259424 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:48:29.249817 ignition[826]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:48:29.285018 systemd[1]: Starting ignition-fetch.service... 
Mar 17 18:48:29.249823 ignition[826]: failed to fetch config: resource requires networking Mar 17 18:48:29.250165 ignition[826]: Ignition finished successfully Mar 17 18:48:29.293458 ignition[832]: Ignition 2.14.0 Mar 17 18:48:29.293467 ignition[832]: Stage: fetch Mar 17 18:48:29.293585 ignition[832]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:29.293612 ignition[832]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:29.300147 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:29.300306 ignition[832]: parsed url from cmdline: "" Mar 17 18:48:29.300310 ignition[832]: no config URL provided Mar 17 18:48:29.300316 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:48:29.300326 ignition[832]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:48:29.300384 ignition[832]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 18:48:29.409414 ignition[832]: GET result: OK Mar 17 18:48:29.409571 ignition[832]: config has been read from IMDS userdata Mar 17 18:48:29.409618 ignition[832]: parsing config with SHA512: d81cd843f98e18ea91a73981710e3f0c133c66504c5733b3d0c0c6b28a0be1e837c0377850765b546ef45cb3e9380ee59cb3200eec7242380166d9330c230592 Mar 17 18:48:29.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.414150 unknown[832]: fetched base config from "system" Mar 17 18:48:29.446343 kernel: audit: type=1130 audit(1742237309.420:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:29.414881 ignition[832]: fetch: fetch complete Mar 17 18:48:29.414168 unknown[832]: fetched base config from "system" Mar 17 18:48:29.414887 ignition[832]: fetch: fetch passed Mar 17 18:48:29.414176 unknown[832]: fetched user config from "azure" Mar 17 18:48:29.414945 ignition[832]: Ignition finished successfully Mar 17 18:48:29.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.416721 systemd[1]: Finished ignition-fetch.service. Mar 17 18:48:29.451839 ignition[838]: Ignition 2.14.0 Mar 17 18:48:29.421715 systemd[1]: Starting ignition-kargs.service... Mar 17 18:48:29.451845 ignition[838]: Stage: kargs Mar 17 18:48:29.459407 systemd[1]: Finished ignition-kargs.service. Mar 17 18:48:29.451965 ignition[838]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:29.463729 systemd[1]: Starting ignition-disks.service... 
Mar 17 18:48:29.451987 ignition[838]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:29.454973 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:29.457841 ignition[838]: kargs: kargs passed Mar 17 18:48:29.457900 ignition[838]: Ignition finished successfully Mar 17 18:48:29.472509 ignition[844]: Ignition 2.14.0 Mar 17 18:48:29.472518 ignition[844]: Stage: disks Mar 17 18:48:29.472656 ignition[844]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:29.472682 ignition[844]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:29.479593 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:29.487901 ignition[844]: disks: disks passed Mar 17 18:48:29.487956 ignition[844]: Ignition finished successfully Mar 17 18:48:29.539482 kernel: audit: type=1130 audit(1742237309.462:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.542690 systemd[1]: Finished ignition-disks.service. Mar 17 18:48:29.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.574112 kernel: audit: type=1130 audit(1742237309.548:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.548770 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:48:29.574070 systemd[1]: Reached target local-fs-pre.target. 
Mar 17 18:48:29.577863 systemd[1]: Reached target local-fs.target. Mar 17 18:48:29.585472 systemd[1]: Reached target sysinit.target. Mar 17 18:48:29.592120 systemd[1]: Reached target basic.target. Mar 17 18:48:29.596689 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:48:29.624391 systemd-fsck[853]: ROOT: clean, 623/7326000 files, 481078/7359488 blocks Mar 17 18:48:29.630170 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:48:29.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.637953 systemd[1]: Mounting sysroot.mount... Mar 17 18:48:29.668401 kernel: audit: type=1130 audit(1742237309.636:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:29.668442 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:48:29.669681 systemd[1]: Mounted sysroot.mount. Mar 17 18:48:29.675992 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:48:29.687343 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:48:29.692634 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:48:29.702894 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:48:29.702952 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:48:29.707288 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:48:29.828435 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:48:29.836488 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 18:48:29.850402 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (864) Mar 17 18:48:29.865066 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:48:29.865140 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:29.865161 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:29.868574 initrd-setup-root[869]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:48:29.878312 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:48:29.903491 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:48:29.942123 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:48:29.963606 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:48:30.993181 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:48:30.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:31.022412 kernel: audit: type=1130 audit(1742237310.996:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:30.997723 systemd[1]: Starting ignition-mount.service... Mar 17 18:48:31.003634 systemd[1]: Starting sysroot-boot.service... Mar 17 18:48:31.032111 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:48:31.032264 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Mar 17 18:48:31.049983 ignition[931]: INFO : Ignition 2.14.0 Mar 17 18:48:31.049983 ignition[931]: INFO : Stage: mount Mar 17 18:48:31.057046 ignition[931]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:31.057046 ignition[931]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:31.066334 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:31.066334 ignition[931]: INFO : mount: mount passed Mar 17 18:48:31.066334 ignition[931]: INFO : Ignition finished successfully Mar 17 18:48:31.127992 kernel: audit: type=1130 audit(1742237311.076:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:31.128026 kernel: audit: type=1130 audit(1742237311.101:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:31.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:31.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:31.062661 systemd[1]: Finished ignition-mount.service. Mar 17 18:48:31.097566 systemd[1]: Finished sysroot-boot.service. 
Mar 17 18:48:32.805207 coreos-metadata[863]: Mar 17 18:48:32.805 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 18:48:32.847248 coreos-metadata[863]: Mar 17 18:48:32.847 INFO Fetch successful Mar 17 18:48:32.882610 coreos-metadata[863]: Mar 17 18:48:32.882 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 18:48:32.903240 coreos-metadata[863]: Mar 17 18:48:32.903 INFO Fetch successful Mar 17 18:48:32.911998 coreos-metadata[863]: Mar 17 18:48:32.911 INFO wrote hostname ci-3510.3.7-a-961279aa07 to /sysroot/etc/hostname Mar 17 18:48:32.913940 systemd[1]: Finished flatcar-metadata-hostname.service. Mar 17 18:48:32.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:32.925825 systemd[1]: Starting ignition-files.service... Mar 17 18:48:32.943837 kernel: audit: type=1130 audit(1742237312.924:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:32.950452 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:48:32.967390 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (942) Mar 17 18:48:32.983163 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:48:32.983240 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:32.983251 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:32.999702 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 17 18:48:33.013889 ignition[961]: INFO : Ignition 2.14.0
Mar 17 18:48:33.013889 ignition[961]: INFO : Stage: files
Mar 17 18:48:33.019115 ignition[961]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:33.019115 ignition[961]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:33.036734 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:33.042165 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:48:33.048223 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:48:33.048223 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:48:33.142914 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:48:33.149226 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:48:33.149226 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:48:33.149226 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:48:33.149226 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:48:33.143616 unknown[961]: wrote ssh authorized keys file for user: core
Mar 17 18:48:33.238499 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:48:33.384142 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:48:33.391654 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:48:33.401936 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:48:33.947595 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:48:34.126740 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Mar 17 18:48:34.136001 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656134071"
Mar 17 18:48:34.269826 ignition[961]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656134071": device or resource busy
Mar 17 18:48:34.269826 ignition[961]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2656134071", trying btrfs: device or resource busy
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656134071"
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656134071"
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2656134071"
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2656134071"
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem105057532"
Mar 17 18:48:34.269826 ignition[961]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem105057532": device or resource busy
Mar 17 18:48:34.269826 ignition[961]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem105057532", trying btrfs: device or resource busy
Mar 17 18:48:34.269826 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem105057532"
Mar 17 18:48:34.184058 systemd[1]: mnt-oem2656134071.mount: Deactivated successfully.
Mar 17 18:48:34.407302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem105057532"
Mar 17 18:48:34.407302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem105057532"
Mar 17 18:48:34.407302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem105057532"
Mar 17 18:48:34.407302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:48:34.407302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:48:34.407302 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:48:34.252296 systemd[1]: mnt-oem105057532.mount: Deactivated successfully.
Mar 17 18:48:34.575402 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Mar 17 18:48:35.000663 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:48:35.000663 ignition[961]: INFO : files: op(14): [started] processing unit "waagent.service"
Mar 17 18:48:35.000663 ignition[961]: INFO : files: op(14): [finished] processing unit "waagent.service"
Mar 17 18:48:35.000663 ignition[961]: INFO : files: op(15): [started] processing unit "nvidia.service"
Mar 17 18:48:35.000663 ignition[961]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Mar 17 18:48:35.000663 ignition[961]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Mar 17 18:48:35.136167 kernel: audit: type=1130 audit(1742237315.025:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.136197 kernel: audit: type=1130 audit(1742237315.084:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.138703 kernel: audit: type=1131 audit(1742237315.084:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.017751 systemd[1]: Finished ignition-files.service.
Mar 17 18:48:35.311599 kernel: audit: type=1130 audit(1742237315.165:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:48:35.311730 ignition[961]: INFO : files: files passed
Mar 17 18:48:35.311730 ignition[961]: INFO : Ignition finished successfully
Mar 17 18:48:35.050477 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:48:35.067016 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:48:35.414071 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:48:35.068306 systemd[1]: Starting ignition-quench.service...
Mar 17 18:48:35.073869 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:48:35.074004 systemd[1]: Finished ignition-quench.service.
Mar 17 18:48:35.084642 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:48:35.166534 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:48:35.316407 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:48:35.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.443613 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:48:35.487733 kernel: audit: type=1130 audit(1742237315.446:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.487773 kernel: audit: type=1131 audit(1742237315.446:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.443730 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:48:35.446758 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:48:35.507563 systemd[1]: Reached target initrd.target.
Mar 17 18:48:35.513187 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:48:35.519013 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:48:35.530274 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:48:35.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.538087 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:48:35.560184 kernel: audit: type=1130 audit(1742237315.537:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.570902 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:48:35.578156 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:48:35.582180 systemd[1]: Stopped target timers.target.
Mar 17 18:48:35.588445 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:48:35.616415 kernel: audit: type=1131 audit(1742237315.594:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.588621 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:48:35.616418 systemd[1]: Stopped target initrd.target.
Mar 17 18:48:35.619898 systemd[1]: Stopped target basic.target.
Mar 17 18:48:35.626615 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:48:35.643897 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:48:35.649846 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:48:35.660228 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:48:35.666913 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:48:35.676659 systemd[1]: Stopped target sysinit.target.
Mar 17 18:48:35.683754 systemd[1]: Stopped target local-fs.target.
Mar 17 18:48:35.715056 kernel: audit: type=1131 audit(1742237315.683:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.685441 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:48:35.686334 systemd[1]: Stopped target swap.target.
Mar 17 18:48:35.687163 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:48:35.687287 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:48:35.715118 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:48:35.719338 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:48:35.719500 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:48:35.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.745078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:48:35.772146 kernel: audit: type=1131 audit(1742237315.744:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.745269 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:48:35.767288 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:48:35.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.769317 systemd[1]: Stopped ignition-files.service.
Mar 17 18:48:35.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.781156 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 18:48:35.796969 iscsid[807]: iscsid shutting down.
Mar 17 18:48:35.781285 systemd[1]: Stopped flatcar-metadata-hostname.service.
Mar 17 18:48:35.787646 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:48:35.795444 systemd[1]: Stopping iscsid.service...
Mar 17 18:48:35.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.830995 ignition[999]: INFO : Ignition 2.14.0
Mar 17 18:48:35.830995 ignition[999]: INFO : Stage: umount
Mar 17 18:48:35.830995 ignition[999]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:35.830995 ignition[999]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:35.830995 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:35.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.811203 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:48:35.870026 ignition[999]: INFO : umount: umount passed
Mar 17 18:48:35.870026 ignition[999]: INFO : Ignition finished successfully
Mar 17 18:48:35.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.814104 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:48:35.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.814341 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:48:35.817717 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:48:35.817909 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:48:35.823534 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:48:35.823667 systemd[1]: Stopped iscsid.service.
Mar 17 18:48:35.827461 systemd[1]: Stopping iscsiuio.service...
Mar 17 18:48:35.833510 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:48:35.833633 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:48:35.846793 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:48:35.846922 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:48:35.858903 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:48:35.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:35.859009 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:48:35.872504 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:48:35.873743 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:48:35.873815 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:48:35.877358 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:48:35.877437 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:48:35.880691 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:48:35.880750 systemd[1]: Stopped ignition-fetch.service.
Mar 17 18:48:35.886115 systemd[1]: Stopped target network.target.
Mar 17 18:48:35.889405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:48:35.889487 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:48:35.892552 systemd[1]: Stopped target paths.target.
Mar 17 18:48:35.899091 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:48:35.912035 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:48:35.921558 systemd[1]: Stopped target slices.target.
Mar 17 18:48:35.925545 systemd[1]: Stopped target sockets.target.
Mar 17 18:48:35.927249 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:48:35.927299 systemd[1]: Closed iscsid.socket.
Mar 17 18:48:35.935824 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:48:35.935878 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:48:35.946309 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:48:35.946405 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:48:35.959404 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:48:35.980462 systemd-networkd[799]: eth0: DHCPv6 lease lost
Mar 17 18:48:36.046057 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:48:36.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.051219 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:48:36.051350 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:48:36.066090 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:48:36.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.066213 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:48:36.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.082139 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:48:36.134000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:48:36.134000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:48:36.082264 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:48:36.135563 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:48:36.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.135614 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:48:36.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.233622 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:48:36.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.233720 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:48:36.243399 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:48:36.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.246475 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:48:36.246566 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:48:36.252666 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:48:36.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.252742 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:48:36.262730 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:48:36.262799 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:48:36.291230 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:48:36.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.303964 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:48:36.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.304560 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:48:36.304699 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:48:36.311577 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:48:36.311642 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:48:36.316522 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:48:36.316567 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:48:36.331827 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:48:36.331897 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:48:36.339617 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:48:36.339677 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:48:36.344789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:48:36.344858 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:48:36.356325 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:48:36.405753 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:48:36.438822 kernel: hv_netvsc 7c1e5288-5828-7c1e-5288-58287c1e5288 eth0: Data path switched from VF: enP9960s1
Mar 17 18:48:36.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.405850 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 18:48:36.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.438919 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:48:36.438996 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:48:36.447874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:48:36.447949 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:48:36.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.465286 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 18:48:36.470778 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:48:36.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:36.470883 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:48:36.473175 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:48:36.473259 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:48:36.487914 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:48:36.494328 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:48:36.512829 systemd[1]: Switching root.
Mar 17 18:48:36.539094 systemd-journald[183]: Journal stopped
Mar 17 18:48:55.364615 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:48:55.364645 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:48:55.364661 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:48:55.364670 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:48:55.364677 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:48:55.364686 kernel: SELinux: policy capability open_perms=1
Mar 17 18:48:55.364699 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:48:55.364708 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:48:55.364715 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:48:55.364727 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:48:55.364735 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:48:55.364742 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:48:55.364752 systemd[1]: Successfully loaded SELinux policy in 210.397ms.
Mar 17 18:48:55.364764 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.226ms.
Mar 17 18:48:55.364778 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:48:55.364789 systemd[1]: Detected virtualization microsoft.
Mar 17 18:48:55.364799 systemd[1]: Detected architecture x86-64.
Mar 17 18:48:55.364808 systemd[1]: Detected first boot.
Mar 17 18:48:55.364819 systemd[1]: Hostname set to .
Mar 17 18:48:55.364832 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:48:55.364841 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:48:55.364851 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:48:55.364863 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:48:55.364873 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:48:55.364887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:48:55.364899 kernel: kauditd_printk_skb: 51 callbacks suppressed
Mar 17 18:48:55.364908 kernel: audit: type=1334 audit(1742237334.743:92): prog-id=12 op=LOAD
Mar 17 18:48:55.364920 kernel: audit: type=1334 audit(1742237334.743:93): prog-id=3 op=UNLOAD
Mar 17 18:48:55.364928 kernel: audit: type=1334 audit(1742237334.750:94): prog-id=13 op=LOAD
Mar 17 18:48:55.364937 kernel: audit: type=1334 audit(1742237334.758:95): prog-id=14 op=LOAD
Mar 17 18:48:55.364946 kernel: audit: type=1334 audit(1742237334.758:96): prog-id=4 op=UNLOAD
Mar 17 18:48:55.364956 kernel: audit: type=1334 audit(1742237334.758:97): prog-id=5 op=UNLOAD
Mar 17 18:48:55.364965 kernel: audit: type=1334 audit(1742237334.764:98): prog-id=15 op=LOAD
Mar 17 18:48:55.364975 kernel: audit: type=1334 audit(1742237334.764:99): prog-id=12 op=UNLOAD
Mar 17 18:48:55.364987 kernel: audit: type=1334 audit(1742237334.796:100): prog-id=16 op=LOAD
Mar 17 18:48:55.364996 kernel: audit: type=1334 audit(1742237334.802:101): prog-id=17 op=LOAD
Mar 17 18:48:55.365004 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:48:55.365013 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:48:55.365026 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:48:55.365035 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:48:55.365051 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:48:55.365064 systemd[1]: Created slice system-getty.slice.
Mar 17 18:48:55.365073 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:48:55.365085 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:48:55.365096 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:48:55.365105 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:48:55.365115 systemd[1]: Created slice user.slice.
Mar 17 18:48:55.365126 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:48:55.365136 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:48:55.365146 systemd[1]: Set up automount boot.automount.
Mar 17 18:48:55.365160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:48:55.365170 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:48:55.365180 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:48:55.365191 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:48:55.365201 systemd[1]: Reached target integritysetup.target.
Mar 17 18:48:55.365211 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:48:55.365222 systemd[1]: Reached target remote-fs.target.
Mar 17 18:48:55.365234 systemd[1]: Reached target slices.target.
Mar 17 18:48:55.365244 systemd[1]: Reached target swap.target.
Mar 17 18:48:55.365256 systemd[1]: Reached target torcx.target.
Mar 17 18:48:55.365265 systemd[1]: Reached target veritysetup.target.
Mar 17 18:48:55.365277 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:48:55.365289 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:48:55.365298 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:48:55.365311 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:48:55.365326 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:48:55.365336 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:48:55.365347 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:48:55.365358 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:48:55.365375 systemd[1]: Mounting media.mount...
Mar 17 18:48:55.365389 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:48:55.365401 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:48:55.365411 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:48:55.365423 systemd[1]: Mounting tmp.mount...
Mar 17 18:48:55.365433 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:48:55.365442 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:55.365455 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:48:55.365465 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:48:55.365475 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:55.365487 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:48:55.365499 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:55.365509 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:48:55.365522 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:55.365533 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:48:55.365543 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:48:55.365555 systemd[1]: Stopped systemd-fsck-root.service.
Mar 17 18:48:55.365564 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:48:55.365574 kernel: loop: module loaded
Mar 17 18:48:55.365586 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:48:55.365597 systemd[1]: Stopped systemd-journald.service.
Mar 17 18:48:55.365607 kernel: fuse: init (API version 7.34)
Mar 17 18:48:55.365616 systemd[1]: Starting systemd-journald.service...
Mar 17 18:48:55.365626 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:48:55.365635 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:48:55.365648 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:48:55.365658 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:48:55.365673 systemd-journald[1141]: Journal started
Mar 17 18:48:55.365721 systemd-journald[1141]: Runtime Journal (/run/log/journal/964d178959e54b27ad8397b539c537da) is 8.0M, max 159.0M, 151.0M free.
Mar 17 18:48:37.327000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 18:48:37.592000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:48:37.597000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:48:37.597000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:48:37.597000 audit: BPF prog-id=10 op=LOAD
Mar 17 18:48:37.597000 audit: BPF prog-id=10 op=UNLOAD
Mar 17 18:48:37.597000 audit: BPF prog-id=11 op=LOAD
Mar 17 18:48:37.597000 audit: BPF prog-id=11 op=UNLOAD
Mar 17 18:48:39.735000 audit[1032]: AVC avc: denied { associate } for pid=1032 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:48:39.735000 audit[1032]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018e7cc a1=c00018aa80 a2=c00019ccc0 a3=32 items=0 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:48:39.735000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:48:39.742000 audit[1032]: AVC avc: denied { associate } for pid=1032 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:48:39.742000 audit[1032]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018e8a5 a2=1ed a3=0 items=2 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:48:39.742000 audit: CWD cwd="/"
Mar 17 18:48:39.742000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:39.742000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:39.742000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:48:54.743000 audit: BPF prog-id=12 op=LOAD
Mar 17 18:48:54.743000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:48:54.750000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:48:54.758000 audit: BPF prog-id=14 op=LOAD
Mar 17 18:48:54.758000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:48:54.758000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:48:54.764000 audit: BPF prog-id=15 op=LOAD
Mar 17 18:48:54.764000 audit: BPF prog-id=12 op=UNLOAD
Mar 17 18:48:54.796000 audit: BPF prog-id=16 op=LOAD
Mar 17 18:48:54.802000 audit: BPF prog-id=17 op=LOAD
Mar 17 18:48:54.802000 audit: BPF prog-id=13 op=UNLOAD
Mar 17 18:48:54.802000 audit: BPF prog-id=14 op=UNLOAD
Mar 17 18:48:54.808000 audit: BPF prog-id=18 op=LOAD
Mar 17 18:48:54.808000 audit: BPF prog-id=15 op=UNLOAD
Mar 17 18:48:54.815000 audit: BPF prog-id=19 op=LOAD
Mar 17 18:48:54.815000 audit: BPF prog-id=20 op=LOAD
Mar 17 18:48:54.815000 audit: BPF prog-id=16 op=UNLOAD
Mar 17 18:48:54.815000 audit: BPF prog-id=17 op=UNLOAD
Mar 17 18:48:54.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:54.833000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:48:54.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:54.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.306000 audit: BPF prog-id=21 op=LOAD
Mar 17 18:48:55.306000 audit: BPF prog-id=22 op=LOAD
Mar 17 18:48:55.306000 audit: BPF prog-id=23 op=LOAD
Mar 17 18:48:55.306000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:48:55.306000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:48:55.356000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:48:55.356000 audit[1141]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe6903b890 a2=4000 a3=7ffe6903b92c items=0 ppid=1 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:48:55.356000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:48:54.742570 systemd[1]: Queued start job for default target multi-user.target.
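Editor's note: the audit PROCTITLE records above encode the generating process's command line as a hex string, with NUL bytes separating the arguments. A minimal sketch of how such a record decodes (plain Python, not part of the log; the torcx-generator record itself is truncated, so only the first two arguments of its prefix are shown here):

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
    return bytes.fromhex(hex_str).decode("utf-8", errors="replace").split("\x00")

# Prefix of the torcx-generator PROCTITLE record from the log above.
args = decode_proctitle(
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
    "2F746F7263782D67656E657261746F7200"
    "2F72756E2F73797374656D642F67656E657261746F72"
)
print(args)
# ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator']
```

The trailing `...6C61` in the full record is a truncation artifact of the audit message, so the last argument cannot be fully recovered from the log.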
Mar 17 18:48:39.638803 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:48:54.742584 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Mar 17 18:48:39.670896 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:48:54.816468 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 18:48:39.670927 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:48:39.670979 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Mar 17 18:48:39.670993 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="skipped missing lower profile" missing profile=oem
Mar 17 18:48:39.671050 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Mar 17 18:48:39.671067 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Mar 17 18:48:39.671318 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Mar 17 18:48:39.671355 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:48:39.671399 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:48:39.706148 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Mar 17 18:48:39.706212 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Mar 17 18:48:39.706266 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
Mar 17 18:48:39.706296 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Mar 17 18:48:39.706319 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
Mar 17 18:48:39.706335 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Mar 17 18:48:52.844936 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:48:52.845219 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:48:52.845342 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:48:52.845591 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:48:52.845659 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Mar 17 18:48:52.845728 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2025-03-17T18:48:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Mar 17 18:48:55.373897 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 18:48:55.379748 systemd[1]: Stopped verity-setup.service.
Mar 17 18:48:55.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.398570 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:48:55.404430 systemd[1]: Started systemd-journald.service.
Mar 17 18:48:55.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.407917 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:48:55.411332 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:48:55.414625 systemd[1]: Mounted media.mount.
Mar 17 18:48:55.418068 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:48:55.422030 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:48:55.426011 systemd[1]: Mounted tmp.mount.
Mar 17 18:48:55.429833 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:48:55.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.434548 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:48:55.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.439633 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:48:55.439991 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:48:55.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.444671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:55.445027 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:55.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.449359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:48:55.449776 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:48:55.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.453928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:55.454230 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:55.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.458657 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:48:55.458825 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:48:55.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.463201 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:55.463370 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:55.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.467112 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:48:55.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.471425 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:48:55.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.475447 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:48:55.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.479258 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:48:55.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.483291 systemd[1]: Reached target network-pre.target.
Mar 17 18:48:55.488516 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:48:55.493834 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:48:55.499531 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:48:55.505821 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:48:55.510609 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:48:55.514487 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:48:55.517016 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:48:55.520281 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:48:55.521821 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:48:55.527795 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:48:55.534087 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:48:55.535474 systemd-journald[1141]: Time spent on flushing to /var/log/journal/964d178959e54b27ad8397b539c537da is 36.636ms for 1168 entries.
Mar 17 18:48:55.535474 systemd-journald[1141]: System Journal (/var/log/journal/964d178959e54b27ad8397b539c537da) is 8.0M, max 2.6G, 2.6G free.
Mar 17 18:48:55.616930 systemd-journald[1141]: Received client request to flush runtime journal.
Mar 17 18:48:55.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:55.617939 udevadm[1155]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 18:48:55.550633 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:48:55.554782 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:48:55.558817 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:48:55.562808 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:48:55.589852 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:48:55.618138 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:48:55.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:56.089555 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:48:56.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:56.095467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:48:56.680898 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:48:56.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:56.963138 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:48:56.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:56.967000 audit: BPF prog-id=24 op=LOAD
Mar 17 18:48:56.967000 audit: BPF prog-id=25 op=LOAD
Mar 17 18:48:56.967000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:48:56.967000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:48:56.968663 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:48:56.987342 systemd-udevd[1160]: Using default interface naming scheme 'v252'.
Mar 17 18:48:57.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:57.203482 systemd[1]: Started systemd-udevd.service.
Mar 17 18:48:57.212000 audit: BPF prog-id=26 op=LOAD
Mar 17 18:48:57.214060 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:48:57.247708 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Mar 17 18:48:57.326402 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:48:57.328000 audit[1169]: AVC avc: denied { confidentiality } for pid=1169 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:48:57.342388 kernel: hv_vmbus: registering driver hv_balloon
Mar 17 18:48:57.352399 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 17 18:48:57.328000 audit[1169]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556ac23e0730 a1=f884 a2=7fb34dcfdbc5 a3=5 items=12 ppid=1160 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:48:57.328000 audit: CWD cwd="/"
Mar 17 18:48:57.328000 audit: PATH item=0 name=(null) inode=1239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=1 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=2 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=3 name=(null) inode=15443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=4 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=5 name=(null) inode=15444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=6 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=7 name=(null) inode=15445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=8 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=9 name=(null) inode=15446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=10 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PATH item=11 name=(null) inode=15447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:48:57.328000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 18:48:57.385394 kernel: hv_vmbus: registering driver hyperv_fb
Mar 17 18:48:57.411060 kernel: hv_utils: Registering HyperV Utility Driver
Mar 17 18:48:57.411168 kernel: hv_vmbus: registering driver hv_utils
Mar 17 18:48:57.405000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:48:57.406000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:48:57.406000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:48:57.407549 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:48:57.431240 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 17 18:48:57.431358 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 17 18:48:57.440778 kernel: Console: switching to colour dummy device 80x25
Mar 17 18:48:57.442417 kernel: hv_utils: Shutdown IC version 3.2
Mar 17 18:48:57.442503 kernel: hv_utils: Heartbeat IC version 3.0
Mar 17 18:48:57.442528 kernel: hv_utils: TimeSync IC version 4.0
Mar 17 18:48:58.487305 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 18:48:58.483841 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:48:58.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:58.630094 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Mar 17 18:48:58.727222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:48:58.978720 systemd-networkd[1174]: lo: Link UP
Mar 17 18:48:58.978732 systemd-networkd[1174]: lo: Gained carrier
Mar 17 18:48:58.979392 systemd-networkd[1174]: Enumeration completed
Mar 17 18:48:58.979538 systemd[1]: Started systemd-networkd.service.
Mar 17 18:48:58.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:58.984917 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:48:59.013949 systemd-networkd[1174]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:48:59.071102 kernel: mlx5_core 26e8:00:02.0 enP9960s1: Link up
Mar 17 18:48:59.100105 kernel: hv_netvsc 7c1e5288-5828-7c1e-5288-58287c1e5288 eth0: Data path switched to VF: enP9960s1
Mar 17 18:48:59.100476 systemd-networkd[1174]: enP9960s1: Link UP
Mar 17 18:48:59.100622 systemd-networkd[1174]: eth0: Link UP
Mar 17 18:48:59.100627 systemd-networkd[1174]: eth0: Gained carrier
Mar 17 18:48:59.106496 systemd-networkd[1174]: enP9960s1: Gained carrier
Mar 17 18:48:59.130255 systemd-networkd[1174]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 17 18:48:59.155469 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:48:59.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:59.160696 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:48:59.444977 lvm[1239]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:48:59.470263 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:48:59.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:59.474705 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:48:59.479331 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:48:59.484199 lvm[1240]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:48:59.504242 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:48:59.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:59.508105 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:48:59.515766 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:48:59.515813 systemd[1]: Reached target local-fs.target.
Mar 17 18:48:59.519607 systemd[1]: Reached target machines.target.
Mar 17 18:48:59.524030 systemd[1]: Starting ldconfig.service...
Mar 17 18:48:59.527308 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:59.527425 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:59.528841 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:48:59.533531 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:48:59.540018 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:48:59.545378 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:48:59.867681 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1242 (bootctl)
Mar 17 18:48:59.869389 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:49:00.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.083846 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:49:00.127460 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:49:00.160350 systemd-networkd[1174]: eth0: Gained IPv6LL
Mar 17 18:49:00.166114 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:49:00.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.179578 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:49:00.179807 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:49:00.237108 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 18:49:00.243705 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:49:00.244402 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:49:00.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.293106 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:49:00.308096 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 18:49:00.312406 (sd-sysext)[1254]: Using extensions 'kubernetes'.
Mar 17 18:49:00.313655 (sd-sysext)[1254]: Merged extensions into '/usr'.
Mar 17 18:49:00.330063 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.331776 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:49:00.336013 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.337820 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:49:00.342565 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:49:00.347634 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:49:00.349118 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.349299 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:49:00.349459 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.352725 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:49:00.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.354482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:49:00.354692 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:49:00.355245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:49:00.355399 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:49:00.355843 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:49:00.355961 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:49:00.357589 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:49:00.368625 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:49:00.370003 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:49:00.370113 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.371654 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:49:00.384536 systemd[1]: Reloading.
Mar 17 18:49:00.400554 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:49:00.405350 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:49:00.410498 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:49:00.476702 /usr/lib/systemd/system-generators/torcx-generator[1280]: time="2025-03-17T18:49:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:49:00.483174 /usr/lib/systemd/system-generators/torcx-generator[1280]: time="2025-03-17T18:49:00Z" level=info msg="torcx already run"
Mar 17 18:49:00.579769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:49:00.579793 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:49:00.596297 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:49:00.659000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:49:00.659000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:49:00.659000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:49:00.659000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:49:00.659000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:49:00.659000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:49:00.660000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:49:00.660000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:49:00.660000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:49:00.660000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:49:00.661000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:49:00.661000 audit: BPF prog-id=27 op=UNLOAD
Mar 17 18:49:00.661000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:49:00.661000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:49:00.661000 audit: BPF prog-id=28 op=UNLOAD
Mar 17 18:49:00.661000 audit: BPF prog-id=29 op=UNLOAD
Mar 17 18:49:00.663000 audit: BPF prog-id=38 op=LOAD
Mar 17 18:49:00.663000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:49:00.676887 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.677197 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.678836 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:49:00.683531 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:49:00.688299 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:49:00.691500 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.691740 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:49:00.691925 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.693053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:49:00.693352 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:49:00.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.697549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:49:00.697711 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:49:00.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.702020 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:49:00.702204 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:49:00.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.708207 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.708506 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.710080 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:49:00.713585 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:49:00.718637 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:49:00.720625 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.720800 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:49:00.720986 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.722526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:49:00.722725 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:49:00.728034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:49:00.728233 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:49:00.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.731634 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:49:00.734473 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.734715 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.736866 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:49:00.740047 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:49:00.744110 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:00.744173 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:49:00.744259 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:49:00.744669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:49:00.745252 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:49:00.746843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:49:00.747061 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:49:00.747269 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:49:00.747302 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:49:00.748718 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:49:00.748853 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:49:01.095377 systemd-fsck[1249]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:49:01.095377 systemd-fsck[1249]: /dev/sda1: 789 files, 119299/258078 clusters
Mar 17 18:49:01.098609 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:49:01.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.108098 kernel: kauditd_printk_skb: 125 callbacks suppressed
Mar 17 18:49:01.108196 kernel: audit: type=1130 audit(1742237341.102:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.127825 systemd[1]: Mounting boot.mount...
Mar 17 18:49:01.138645 systemd[1]: Mounted boot.mount.
Mar 17 18:49:01.154444 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:49:01.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.175099 kernel: audit: type=1130 audit(1742237341.157:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.326912 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:49:01.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.332336 systemd[1]: Starting audit-rules.service...
Mar 17 18:49:01.349473 kernel: audit: type=1130 audit(1742237341.329:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.352188 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:49:01.357592 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:49:01.369916 kernel: audit: type=1334 audit(1742237341.361:213): prog-id=39 op=LOAD
Mar 17 18:49:01.361000 audit: BPF prog-id=39 op=LOAD
Mar 17 18:49:01.368123 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:49:01.371000 audit: BPF prog-id=40 op=LOAD
Mar 17 18:49:01.374111 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:49:01.382345 kernel: audit: type=1334 audit(1742237341.371:214): prog-id=40 op=LOAD
Mar 17 18:49:01.385348 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:49:01.400000 audit[1363]: SYSTEM_BOOT pid=1363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.402872 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:49:01.428252 kernel: audit: type=1127 audit(1742237341.400:215): pid=1363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.428371 kernel: audit: type=1130 audit(1742237341.426:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.464271 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:49:01.468396 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:49:01.488275 kernel: audit: type=1130 audit(1742237341.467:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.524659 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:49:01.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.528213 systemd[1]: Reached target time-set.target.
Mar 17 18:49:01.549103 kernel: audit: type=1130 audit(1742237341.526:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.617459 systemd-resolved[1360]: Positive Trust Anchors:
Mar 17 18:49:01.617485 systemd-resolved[1360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:49:01.617522 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:49:01.706426 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:49:01.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.729126 kernel: audit: type=1130 audit(1742237341.708:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.789451 systemd-resolved[1360]: Using system hostname 'ci-3510.3.7-a-961279aa07'.
Mar 17 18:49:01.791369 systemd[1]: Started systemd-resolved.service.
Mar 17 18:49:01.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:01.794833 systemd[1]: Reached target network.target.
Mar 17 18:49:01.797612 systemd[1]: Reached target network-online.target.
Mar 17 18:49:01.800804 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:49:02.022000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:49:02.022000 audit[1378]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde391aa90 a2=420 a3=0 items=0 ppid=1357 pid=1378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:02.022000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:49:02.023946 augenrules[1378]: No rules Mar 17 18:49:02.024538 systemd[1]: Finished audit-rules.service. Mar 17 18:49:13.504170 ldconfig[1241]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:49:13.516932 systemd[1]: Finished ldconfig.service. Mar 17 18:49:13.524785 systemd[1]: Starting systemd-update-done.service... Mar 17 18:49:13.535479 systemd[1]: Finished systemd-update-done.service. Mar 17 18:49:13.540479 systemd[1]: Reached target sysinit.target. Mar 17 18:49:13.543977 systemd[1]: Started motdgen.path. Mar 17 18:49:13.546848 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:49:13.551831 systemd[1]: Started logrotate.timer. Mar 17 18:49:13.554983 systemd[1]: Started mdadm.timer. Mar 17 18:49:13.557728 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:49:13.561288 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:49:13.561334 systemd[1]: Reached target paths.target. Mar 17 18:49:13.564318 systemd[1]: Reached target timers.target. Mar 17 18:49:13.567956 systemd[1]: Listening on dbus.socket. Mar 17 18:49:13.572654 systemd[1]: Starting docker.socket... Mar 17 18:49:13.578249 systemd[1]: Listening on sshd.socket. 
Mar 17 18:49:13.581001 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:13.581576 systemd[1]: Listening on docker.socket. Mar 17 18:49:13.584550 systemd[1]: Reached target sockets.target. Mar 17 18:49:13.587733 systemd[1]: Reached target basic.target. Mar 17 18:49:13.590957 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:49:13.590998 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:49:13.592358 systemd[1]: Starting containerd.service... Mar 17 18:49:13.597183 systemd[1]: Starting dbus.service... Mar 17 18:49:13.601050 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:49:13.605911 systemd[1]: Starting extend-filesystems.service... Mar 17 18:49:13.609336 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:49:13.611535 systemd[1]: Starting kubelet.service... Mar 17 18:49:13.617243 systemd[1]: Starting motdgen.service... Mar 17 18:49:13.621657 systemd[1]: Started nvidia.service. Mar 17 18:49:13.627479 systemd[1]: Starting prepare-helm.service... Mar 17 18:49:13.632088 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:49:13.637343 systemd[1]: Starting sshd-keygen.service... Mar 17 18:49:13.647303 systemd[1]: Starting systemd-logind.service... Mar 17 18:49:13.650443 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:13.650567 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 17 18:49:13.651334 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:49:13.652521 systemd[1]: Starting update-engine.service... Mar 17 18:49:13.657333 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:49:13.669657 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:49:13.669924 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:49:13.712565 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:49:13.712816 systemd[1]: Finished motdgen.service. Mar 17 18:49:13.743036 jq[1404]: true Mar 17 18:49:13.744225 jq[1388]: false Mar 17 18:49:13.746486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:49:13.746743 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:49:13.784277 jq[1416]: true Mar 17 18:49:13.788553 extend-filesystems[1389]: Found loop1 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda1 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda2 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda3 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found usr Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda4 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda6 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda7 Mar 17 18:49:13.788553 extend-filesystems[1389]: Found sda9 Mar 17 18:49:13.788553 extend-filesystems[1389]: Checking size of /dev/sda9 Mar 17 18:49:13.853515 env[1410]: time="2025-03-17T18:49:13.843273100Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:49:13.794787 systemd-logind[1402]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:49:13.803908 systemd-logind[1402]: New seat seat0. 
Mar 17 18:49:13.889849 tar[1407]: linux-amd64/helm Mar 17 18:49:13.907838 env[1410]: time="2025-03-17T18:49:13.907778800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:49:13.913384 env[1410]: time="2025-03-17T18:49:13.913330300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:13.915409 env[1410]: time="2025-03-17T18:49:13.915357600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:49:13.920779 env[1410]: time="2025-03-17T18:49:13.920736900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:13.923208 env[1410]: time="2025-03-17T18:49:13.923164300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:49:13.923572 env[1410]: time="2025-03-17T18:49:13.923542800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:13.923706 env[1410]: time="2025-03-17T18:49:13.923686500Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:49:13.923781 env[1410]: time="2025-03-17T18:49:13.923765200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:13.923997 env[1410]: time="2025-03-17T18:49:13.923976200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:49:13.924403 env[1410]: time="2025-03-17T18:49:13.924377800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:13.924729 env[1410]: time="2025-03-17T18:49:13.924699400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:49:13.924834 env[1410]: time="2025-03-17T18:49:13.924815800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:49:13.924984 env[1410]: time="2025-03-17T18:49:13.924964100Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:49:13.925090 env[1410]: time="2025-03-17T18:49:13.925058300Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:49:13.950776 extend-filesystems[1389]: Old size kept for /dev/sda9 Mar 17 18:49:13.950776 extend-filesystems[1389]: Found sr0 Mar 17 18:49:13.962903 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:49:13.963147 systemd[1]: Finished extend-filesystems.service. Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032382200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032459100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032476900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032555900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032576200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032648600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032676100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032695800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032711600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032729200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032754700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032772200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.032944200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:49:14.033644 env[1410]: time="2025-03-17T18:49:14.033082000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033469700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033504300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033531700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033614900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033632500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033647700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033723900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033743200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033758700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033775800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033802100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.033823300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.034004000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.034039100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036330 env[1410]: time="2025-03-17T18:49:14.034063200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036800 env[1410]: time="2025-03-17T18:49:14.034103900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:49:14.036800 env[1410]: time="2025-03-17T18:49:14.034126800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:49:14.036800 env[1410]: time="2025-03-17T18:49:14.034143100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:49:14.036800 env[1410]: time="2025-03-17T18:49:14.034181200Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:49:14.036800 env[1410]: time="2025-03-17T18:49:14.034223300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:49:14.036662 systemd[1]: Started containerd.service. 
Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.034505900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.034592500Z" level=info msg="Connect containerd service" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.034646400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035626400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035771000Z" level=info msg="Start subscribing containerd event" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035837200Z" level=info msg="Start recovering state" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035919900Z" level=info msg="Start event monitor" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035933200Z" level=info msg="Start snapshots syncer" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035943600Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.035953000Z" level=info msg="Start streaming server" Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.036451900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:49:14.037424 env[1410]: time="2025-03-17T18:49:14.036513900Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:49:14.052397 env[1410]: time="2025-03-17T18:49:14.040133600Z" level=info msg="containerd successfully booted in 0.200047s" Mar 17 18:49:14.122477 systemd[1]: nvidia.service: Deactivated successfully. 
Mar 17 18:49:14.147739 bash[1453]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:49:14.148445 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:49:14.172720 dbus-daemon[1387]: [system] SELinux support is enabled Mar 17 18:49:14.172948 systemd[1]: Started dbus.service. Mar 17 18:49:14.178901 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:49:14.178953 systemd[1]: Reached target system-config.target. Mar 17 18:49:14.182922 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:49:14.182962 systemd[1]: Reached target user-config.target. Mar 17 18:49:14.189744 dbus-daemon[1387]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:49:14.190007 systemd[1]: Started systemd-logind.service. Mar 17 18:49:14.811882 tar[1407]: linux-amd64/LICENSE Mar 17 18:49:14.812226 tar[1407]: linux-amd64/README.md Mar 17 18:49:14.819241 systemd[1]: Finished prepare-helm.service. Mar 17 18:49:15.027417 systemd[1]: Started kubelet.service. Mar 17 18:49:15.427093 update_engine[1403]: I0317 18:49:15.426585 1403 main.cc:92] Flatcar Update Engine starting Mar 17 18:49:15.557185 systemd[1]: Started update-engine.service. Mar 17 18:49:15.562903 systemd[1]: Started locksmithd.service. 
Mar 17 18:49:15.566367 update_engine[1403]: I0317 18:49:15.566231 1403 update_check_scheduler.cc:74] Next update check in 8m1s Mar 17 18:49:15.713214 kubelet[1493]: E0317 18:49:15.711317 1493 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:15.713876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:15.714032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:15.714331 systemd[1]: kubelet.service: Consumed 1.107s CPU time. Mar 17 18:49:16.025892 sshd_keygen[1401]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:49:16.047712 systemd[1]: Finished sshd-keygen.service. Mar 17 18:49:16.052335 systemd[1]: Starting issuegen.service... Mar 17 18:49:16.056498 systemd[1]: Started waagent.service. Mar 17 18:49:16.060406 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:49:16.060627 systemd[1]: Finished issuegen.service. Mar 17 18:49:16.065421 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:49:16.079405 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:49:16.084292 systemd[1]: Started getty@tty1.service. Mar 17 18:49:16.092919 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:49:16.097811 systemd[1]: Reached target getty.target. Mar 17 18:49:16.100401 systemd[1]: Reached target multi-user.target. Mar 17 18:49:16.104400 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:49:16.123874 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:49:16.124040 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Mar 17 18:49:16.130379 systemd[1]: Startup finished in 587ms (firmware) + 31.891s (loader) + 1.085s (kernel) + 19.280s (initrd) + 38.086s (userspace) = 1min 30.932s. Mar 17 18:49:16.525465 login[1513]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Mar 17 18:49:16.528295 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:49:16.582909 systemd[1]: Created slice user-500.slice. Mar 17 18:49:16.584707 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:49:16.589684 systemd-logind[1402]: New session 2 of user core. Mar 17 18:49:16.596725 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:49:16.598697 systemd[1]: Starting user@500.service... Mar 17 18:49:16.602625 (systemd)[1522]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:16.696400 systemd[1522]: Queued start job for default target default.target. Mar 17 18:49:16.697029 systemd[1522]: Reached target paths.target. Mar 17 18:49:16.697057 systemd[1522]: Reached target sockets.target. Mar 17 18:49:16.697092 systemd[1522]: Reached target timers.target. Mar 17 18:49:16.697108 systemd[1522]: Reached target basic.target. Mar 17 18:49:16.697243 systemd[1]: Started user@500.service. Mar 17 18:49:16.698530 systemd[1]: Started session-2.scope. Mar 17 18:49:16.699094 systemd[1522]: Reached target default.target. Mar 17 18:49:16.699303 systemd[1522]: Startup finished in 90ms. Mar 17 18:49:17.526003 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:49:17.531971 systemd[1]: Started session-1.scope. Mar 17 18:49:17.532517 systemd-logind[1402]: New session 1 of user core. 
Mar 17 18:49:18.386762 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:49:24.300270 waagent[1508]: 2025-03-17T18:49:24.300137Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Mar 17 18:49:24.316362 waagent[1508]: 2025-03-17T18:49:24.303953Z INFO Daemon Daemon OS: flatcar 3510.3.7 Mar 17 18:49:24.316362 waagent[1508]: 2025-03-17T18:49:24.304881Z INFO Daemon Daemon Python: 3.9.16 Mar 17 18:49:24.316362 waagent[1508]: 2025-03-17T18:49:24.306326Z INFO Daemon Daemon Run daemon Mar 17 18:49:24.316362 waagent[1508]: 2025-03-17T18:49:24.307084Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' Mar 17 18:49:24.322511 waagent[1508]: 2025-03-17T18:49:24.322351Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Mar 17 18:49:24.330457 waagent[1508]: 2025-03-17T18:49:24.330301Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.331744Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.332619Z INFO Daemon Daemon Using waagent for provisioning Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.334222Z INFO Daemon Daemon Activate resource disk Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.335253Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.343438Z INFO Daemon Daemon Found device: None Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.344721Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.345684Z ERROR Daemon Daemon 
Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.347678Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.348800Z INFO Daemon Daemon Running default provisioning handler Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.359658Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.363030Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.363816Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:49:24.377748 waagent[1508]: 2025-03-17T18:49:24.364767Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 18:49:24.471284 waagent[1508]: 2025-03-17T18:49:24.467746Z INFO Daemon Daemon Successfully mounted dvd Mar 17 18:49:24.625607 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 18:49:24.666118 waagent[1508]: 2025-03-17T18:49:24.665903Z INFO Daemon Daemon Detect protocol endpoint Mar 17 18:49:24.669409 waagent[1508]: 2025-03-17T18:49:24.669280Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:49:24.674415 waagent[1508]: 2025-03-17T18:49:24.674300Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 17 18:49:24.678186 waagent[1508]: 2025-03-17T18:49:24.678060Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 18:49:24.681218 waagent[1508]: 2025-03-17T18:49:24.681116Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 18:49:24.686505 waagent[1508]: 2025-03-17T18:49:24.682390Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 18:49:25.041050 waagent[1508]: 2025-03-17T18:49:25.040868Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 18:49:25.045107 waagent[1508]: 2025-03-17T18:49:25.045038Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 18:49:25.048047 waagent[1508]: 2025-03-17T18:49:25.047946Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 18:49:25.425950 waagent[1508]: 2025-03-17T18:49:25.425777Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 18:49:25.438393 waagent[1508]: 2025-03-17T18:49:25.438278Z INFO Daemon Daemon Forcing an update of the goal state.. Mar 17 18:49:25.445476 waagent[1508]: 2025-03-17T18:49:25.440482Z INFO Daemon Daemon Fetching goal state [incarnation 1] Mar 17 18:49:25.651886 waagent[1508]: 2025-03-17T18:49:25.651729Z INFO Daemon Daemon Found private key matching thumbprint 5891EBF570F1987B0E2D70C428E7E23E42180538 Mar 17 18:49:25.659520 waagent[1508]: 2025-03-17T18:49:25.659398Z INFO Daemon Daemon Certificate with thumbprint 6948104C905BDA372D09AB49226587DCAEF9275B has no matching private key. 
Mar 17 18:49:25.666438 waagent[1508]: 2025-03-17T18:49:25.666321Z INFO Daemon Daemon Fetch goal state completed Mar 17 18:49:25.753597 waagent[1508]: 2025-03-17T18:49:25.753408Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 75378ba0-0721-4619-a2b4-7ee90f012e8b New eTag: 7393901982986811158] Mar 17 18:49:25.761585 waagent[1508]: 2025-03-17T18:49:25.761469Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:49:25.822879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:49:25.823242 systemd[1]: Stopped kubelet.service. Mar 17 18:49:25.823310 systemd[1]: kubelet.service: Consumed 1.107s CPU time. Mar 17 18:49:25.825530 systemd[1]: Starting kubelet.service... Mar 17 18:49:25.839361 waagent[1508]: 2025-03-17T18:49:25.839241Z INFO Daemon Daemon Starting provisioning Mar 17 18:49:25.841305 waagent[1508]: 2025-03-17T18:49:25.841169Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 18:49:25.849066 waagent[1508]: 2025-03-17T18:49:25.841831Z INFO Daemon Daemon Set hostname [ci-3510.3.7-a-961279aa07] Mar 17 18:49:25.971624 waagent[1508]: 2025-03-17T18:49:25.971444Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-a-961279aa07] Mar 17 18:49:26.059360 waagent[1508]: 2025-03-17T18:49:25.998932Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 18:49:26.059360 waagent[1508]: 2025-03-17T18:49:26.006977Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 18:49:26.057718 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Mar 17 18:49:26.057888 systemd[1]: Stopped systemd-networkd-wait-online.service. Mar 17 18:49:26.057966 systemd[1]: Stopping systemd-networkd-wait-online.service... Mar 17 18:49:26.058250 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:49:26.073318 systemd-networkd[1174]: eth0: DHCPv6 lease lost Mar 17 18:49:26.074340 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection. 
Mar 17 18:49:26.075431 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:49:26.075684 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:49:26.078851 systemd[1]: Starting systemd-networkd.service... Mar 17 18:49:26.112794 systemd-networkd[1564]: enP9960s1: Link UP Mar 17 18:49:26.112806 systemd-networkd[1564]: enP9960s1: Gained carrier Mar 17 18:49:26.114373 systemd-networkd[1564]: eth0: Link UP Mar 17 18:49:26.114383 systemd-networkd[1564]: eth0: Gained carrier Mar 17 18:49:26.114837 systemd-networkd[1564]: lo: Link UP Mar 17 18:49:26.114846 systemd-networkd[1564]: lo: Gained carrier Mar 17 18:49:26.115202 systemd-networkd[1564]: eth0: Gained IPv6LL Mar 17 18:49:26.115492 systemd-networkd[1564]: Enumeration completed Mar 17 18:49:26.115638 systemd[1]: Started systemd-networkd.service. Mar 17 18:49:26.118420 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:49:26.118885 waagent[1508]: 2025-03-17T18:49:26.118689Z INFO Daemon Daemon Create user account if not exists Mar 17 18:49:26.126087 waagent[1508]: 2025-03-17T18:49:26.125933Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 18:49:26.127095 systemd-networkd[1564]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:49:26.132020 waagent[1508]: 2025-03-17T18:49:26.131897Z INFO Daemon Daemon Configure sudoer Mar 17 18:49:26.178200 systemd-networkd[1564]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 17 18:49:26.181542 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:49:26.424813 waagent[1508]: 2025-03-17T18:49:26.424640Z INFO Daemon Daemon Configure sshd Mar 17 18:49:26.432562 waagent[1508]: 2025-03-17T18:49:26.426596Z INFO Daemon Daemon Deploy ssh public key. Mar 17 18:49:26.531746 systemd[1]: Started kubelet.service. 
Mar 17 18:49:26.579530 kubelet[1575]: E0317 18:49:26.579488 1575 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:26.582775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:26.582938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:49:26.655495 systemd-timesyncd[1361]: Contacted time server 188.125.64.7:123 (3.flatcar.pool.ntp.org).
Mar 17 18:49:26.655958 systemd-timesyncd[1361]: Initial clock synchronization to Mon 2025-03-17 18:49:26.653346 UTC.
Mar 17 18:49:27.633677 waagent[1508]: 2025-03-17T18:49:27.633546Z INFO Daemon Daemon Provisioning complete
Mar 17 18:49:27.654128 waagent[1508]: 2025-03-17T18:49:27.654016Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 17 18:49:27.664313 waagent[1508]: 2025-03-17T18:49:27.655875Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 17 18:49:27.664313 waagent[1508]: 2025-03-17T18:49:27.657862Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Mar 17 18:49:27.946446 waagent[1581]: 2025-03-17T18:49:27.946254Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Mar 17 18:49:27.947317 waagent[1581]: 2025-03-17T18:49:27.947238Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:27.947474 waagent[1581]: 2025-03-17T18:49:27.947416Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:27.960664 waagent[1581]: 2025-03-17T18:49:27.960556Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
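The kubelet failure above (repeated throughout this log) is a crash loop: /var/lib/kubelet/config.yaml does not exist yet — it is normally written by `kubeadm init`/`kubeadm join` — so kubelet exits with status 1 and systemd keeps scheduling restart jobs with an increasing counter. A hedged sketch of the failing step (not kubelet's actual Go code, just the equivalent OS-level behavior):

```python
import errno

def load_kubelet_config(path="/var/lib/kubelet/config.yaml"):
    # Mirrors the failure in the log: opening a config file that has not
    # been written yet raises ENOENT ("no such file or directory").
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError as e:
        assert e.errno == errno.ENOENT
        # systemd records this as "status=1/FAILURE" and, with Restart=
        # configured, schedules the next restart job.
        raise SystemExit(1)
```

Once kubeadm (or equivalent provisioning) writes the config file, the same restart loop is what eventually brings kubelet up successfully.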
Mar 17 18:49:27.960864 waagent[1581]: 2025-03-17T18:49:27.960804Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Mar 17 18:49:28.037001 waagent[1581]: 2025-03-17T18:49:28.036849Z INFO ExtHandler ExtHandler Found private key matching thumbprint 5891EBF570F1987B0E2D70C428E7E23E42180538
Mar 17 18:49:28.037289 waagent[1581]: 2025-03-17T18:49:28.037216Z INFO ExtHandler ExtHandler Certificate with thumbprint 6948104C905BDA372D09AB49226587DCAEF9275B has no matching private key.
Mar 17 18:49:28.037553 waagent[1581]: 2025-03-17T18:49:28.037498Z INFO ExtHandler ExtHandler Fetch goal state completed
Mar 17 18:49:28.056196 waagent[1581]: 2025-03-17T18:49:28.056120Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 228c39fe-653e-4cf1-b924-058338132a96 New eTag: 7393901982986811158]
Mar 17 18:49:28.056822 waagent[1581]: 2025-03-17T18:49:28.056753Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Mar 17 18:49:28.095710 waagent[1581]: 2025-03-17T18:49:28.095565Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 17 18:49:28.109440 waagent[1581]: 2025-03-17T18:49:28.109321Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1581
Mar 17 18:49:28.113146 waagent[1581]: 2025-03-17T18:49:28.113026Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
Mar 17 18:49:28.114488 waagent[1581]: 2025-03-17T18:49:28.114406Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 17 18:49:28.148356 waagent[1581]: 2025-03-17T18:49:28.148276Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 17 18:49:28.148805 waagent[1581]: 2025-03-17T18:49:28.148735Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 17 18:49:28.157724 waagent[1581]: 2025-03-17T18:49:28.157656Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 17 18:49:28.158309 waagent[1581]: 2025-03-17T18:49:28.158242Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Mar 17 18:49:28.159457 waagent[1581]: 2025-03-17T18:49:28.159389Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Mar 17 18:49:28.160834 waagent[1581]: 2025-03-17T18:49:28.160771Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 17 18:49:28.161293 waagent[1581]: 2025-03-17T18:49:28.161232Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:28.161450 waagent[1581]: 2025-03-17T18:49:28.161402Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:28.162001 waagent[1581]: 2025-03-17T18:49:28.161942Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 17 18:49:28.162346 waagent[1581]: 2025-03-17T18:49:28.162286Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 17 18:49:28.162346 waagent[1581]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 17 18:49:28.162346 waagent[1581]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Mar 17 18:49:28.162346 waagent[1581]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 17 18:49:28.162346 waagent[1581]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:28.162346 waagent[1581]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:28.162346 waagent[1581]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:28.165977 waagent[1581]: 2025-03-17T18:49:28.165755Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 17 18:49:28.166325 waagent[1581]: 2025-03-17T18:49:28.166261Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:28.167611 waagent[1581]: 2025-03-17T18:49:28.167540Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 17 18:49:28.167837 waagent[1581]: 2025-03-17T18:49:28.167763Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:28.168864 waagent[1581]: 2025-03-17T18:49:28.168216Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 17 18:49:28.169687 waagent[1581]: 2025-03-17T18:49:28.169628Z INFO EnvHandler ExtHandler Configure routes
Mar 17 18:49:28.169894 waagent[1581]: 2025-03-17T18:49:28.169853Z INFO EnvHandler ExtHandler Gateway:None
Mar 17 18:49:28.170056 waagent[1581]: 2025-03-17T18:49:28.170020Z INFO EnvHandler ExtHandler Routes:None
Mar 17 18:49:28.170731 waagent[1581]: 2025-03-17T18:49:28.170672Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 17 18:49:28.171689 waagent[1581]: 2025-03-17T18:49:28.171003Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 17 18:49:28.171864 waagent[1581]: 2025-03-17T18:49:28.171825Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 17 18:49:28.194341 waagent[1581]: 2025-03-17T18:49:28.194260Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1564'
Mar 17 18:49:28.199906 waagent[1581]: 2025-03-17T18:49:28.199735Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Mar 17 18:49:28.200931 waagent[1581]: 2025-03-17T18:49:28.200866Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Mar 17 18:49:28.202159 waagent[1581]: 2025-03-17T18:49:28.202065Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Mar 17 18:49:28.264769 waagent[1581]: 2025-03-17T18:49:28.264620Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 17 18:49:28.264769 waagent[1581]: Executing ['ip', '-a', '-o', 'link']:
Mar 17 18:49:28.264769 waagent[1581]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 17 18:49:28.264769 waagent[1581]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:88:58:28 brd ff:ff:ff:ff:ff:ff
Mar 17 18:49:28.264769 waagent[1581]: 3: enP9960s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:88:58:28 brd ff:ff:ff:ff:ff:ff\ altname enP9960p0s2
Mar 17 18:49:28.264769 waagent[1581]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 17 18:49:28.264769 waagent[1581]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 17 18:49:28.264769 waagent[1581]: 2: eth0 inet 10.200.8.24/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 17 18:49:28.264769 waagent[1581]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 17 18:49:28.264769 waagent[1581]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Mar 17 18:49:28.264769 waagent[1581]: 2: eth0 inet6 fe80::7e1e:52ff:fe88:5828/64 scope link \ valid_lft forever preferred_lft forever
Mar 17 18:49:28.267770 waagent[1581]: 2025-03-17T18:49:28.267708Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Mar 17 18:49:28.363659 waagent[1581]: 2025-03-17T18:49:28.363511Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Mar 17 18:49:28.367273 waagent[1581]: 2025-03-17T18:49:28.367135Z INFO EnvHandler ExtHandler Firewall rules:
Mar 17 18:49:28.367273 waagent[1581]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:28.367273 waagent[1581]: pkts bytes target prot opt in out source destination
Mar 17 18:49:28.367273 waagent[1581]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:28.367273 waagent[1581]: pkts bytes target prot opt in out source destination
Mar 17 18:49:28.367273 waagent[1581]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:28.367273 waagent[1581]: pkts bytes target prot opt in out source destination
Mar 17 18:49:28.367273 waagent[1581]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 17 18:49:28.367273 waagent[1581]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 17 18:49:28.368776 waagent[1581]: 2025-03-17T18:49:28.368712Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 17 18:49:28.496477 waagent[1581]: 2025-03-17T18:49:28.496326Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Mar 17 18:49:28.662041 waagent[1508]: 2025-03-17T18:49:28.661842Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Mar 17 18:49:28.666079 waagent[1508]: 2025-03-17T18:49:28.665998Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Mar 17 18:49:29.759772 waagent[1619]: 2025-03-17T18:49:29.759650Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Mar 17 18:49:29.760562 waagent[1619]: 2025-03-17T18:49:29.760490Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
Mar 17 18:49:29.760714 waagent[1619]: 2025-03-17T18:49:29.760657Z INFO ExtHandler ExtHandler Python: 3.9.16
Mar 17 18:49:29.760870 waagent[1619]: 2025-03-17T18:49:29.760821Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Mar 17 18:49:29.770886 waagent[1619]: 2025-03-17T18:49:29.770755Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 17 18:49:29.771360 waagent[1619]: 2025-03-17T18:49:29.771294Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:29.771540 waagent[1619]: 2025-03-17T18:49:29.771488Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:29.785238 waagent[1619]: 2025-03-17T18:49:29.785156Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 17 18:49:29.796017 waagent[1619]: 2025-03-17T18:49:29.795939Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
Mar 17 18:49:29.797087 waagent[1619]: 2025-03-17T18:49:29.797012Z INFO ExtHandler
Mar 17 18:49:29.797255 waagent[1619]: 2025-03-17T18:49:29.797202Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b9177d7a-8af7-4e1b-b272-4066aa0be389 eTag: 7393901982986811158 source: Fabric]
Mar 17 18:49:29.797985 waagent[1619]: 2025-03-17T18:49:29.797924Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
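The EnvHandler ERROR above ("invalid literal for int() with base 10: 'MainPID=1564'") is a parsing bug: the output of `systemctl show --property MainPID <unit>` is the full line `MainPID=1564`, and passing that string directly to `int()` fails. A hedged illustration of the failure and a clean parse (this is an illustrative sketch, not waagent's actual code):

```python
def parse_main_pid(systemctl_output: str) -> int:
    # `systemctl show --property MainPID <unit>` prints "MainPID=1564".
    # int("MainPID=1564") reproduces the error logged above; splitting on
    # '=' first parses the PID cleanly.
    key, _, value = systemctl_output.strip().partition("=")
    if key != "MainPID":
        raise ValueError(f"unexpected property line: {systemctl_output!r}")
    return int(value)

print(parse_main_pid("MainPID=1564"))  # 1564
```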
Mar 17 18:49:29.799114 waagent[1619]: 2025-03-17T18:49:29.799038Z INFO ExtHandler
Mar 17 18:49:29.799264 waagent[1619]: 2025-03-17T18:49:29.799211Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 17 18:49:29.811558 waagent[1619]: 2025-03-17T18:49:29.811488Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 18:49:29.812097 waagent[1619]: 2025-03-17T18:49:29.812030Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Mar 17 18:49:29.837061 waagent[1619]: 2025-03-17T18:49:29.836967Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Mar 17 18:49:29.909769 waagent[1619]: 2025-03-17T18:49:29.909615Z INFO ExtHandler Downloaded certificate {'thumbprint': '6948104C905BDA372D09AB49226587DCAEF9275B', 'hasPrivateKey': False}
Mar 17 18:49:29.910849 waagent[1619]: 2025-03-17T18:49:29.910780Z INFO ExtHandler Downloaded certificate {'thumbprint': '5891EBF570F1987B0E2D70C428E7E23E42180538', 'hasPrivateKey': True}
Mar 17 18:49:29.911864 waagent[1619]: 2025-03-17T18:49:29.911797Z INFO ExtHandler Fetch goal state completed
Mar 17 18:49:29.933771 waagent[1619]: 2025-03-17T18:49:29.933635Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Mar 17 18:49:29.946660 waagent[1619]: 2025-03-17T18:49:29.946531Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1619
Mar 17 18:49:29.949819 waagent[1619]: 2025-03-17T18:49:29.949717Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
Mar 17 18:49:29.950970 waagent[1619]: 2025-03-17T18:49:29.950896Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Mar 17 18:49:29.951321 waagent[1619]: 2025-03-17T18:49:29.951262Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Mar 17 18:49:29.953293 waagent[1619]: 2025-03-17T18:49:29.953227Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 17 18:49:29.958966 waagent[1619]: 2025-03-17T18:49:29.958895Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 17 18:49:29.959437 waagent[1619]: 2025-03-17T18:49:29.959368Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 17 18:49:29.968610 waagent[1619]: 2025-03-17T18:49:29.968541Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 17 18:49:29.969282 waagent[1619]: 2025-03-17T18:49:29.969207Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Mar 17 18:49:29.983909 waagent[1619]: 2025-03-17T18:49:29.983772Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Mar 17 18:49:29.987409 waagent[1619]: 2025-03-17T18:49:29.987271Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Mar 17 18:49:29.988530 waagent[1619]: 2025-03-17T18:49:29.988441Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Mar 17 18:49:29.990185 waagent[1619]: 2025-03-17T18:49:29.990115Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 17 18:49:29.990632 waagent[1619]: 2025-03-17T18:49:29.990572Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:29.990796 waagent[1619]: 2025-03-17T18:49:29.990746Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:29.991432 waagent[1619]: 2025-03-17T18:49:29.991374Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 17 18:49:29.991937 waagent[1619]: 2025-03-17T18:49:29.991876Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 17 18:49:29.992296 waagent[1619]: 2025-03-17T18:49:29.992239Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 17 18:49:29.992296 waagent[1619]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 17 18:49:29.992296 waagent[1619]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Mar 17 18:49:29.992296 waagent[1619]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 17 18:49:29.992296 waagent[1619]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:29.992296 waagent[1619]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:29.992296 waagent[1619]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:29.992736 waagent[1619]: 2025-03-17T18:49:29.992679Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:29.993381 waagent[1619]: 2025-03-17T18:49:29.993325Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:29.996427 waagent[1619]: 2025-03-17T18:49:29.996307Z INFO EnvHandler ExtHandler Configure routes
Mar 17 18:49:29.996783 waagent[1619]: 2025-03-17T18:49:29.996702Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 17 18:49:29.996907 waagent[1619]: 2025-03-17T18:49:29.996835Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 17 18:49:29.997276 waagent[1619]: 2025-03-17T18:49:29.997217Z INFO EnvHandler ExtHandler Gateway:None
Mar 17 18:49:29.997418 waagent[1619]: 2025-03-17T18:49:29.997370Z INFO EnvHandler ExtHandler Routes:None
Mar 17 18:49:29.999344 waagent[1619]: 2025-03-17T18:49:29.999280Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 17 18:49:30.001507 waagent[1619]: 2025-03-17T18:49:30.001368Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 17 18:49:30.004424 waagent[1619]: 2025-03-17T18:49:30.004330Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 17 18:49:30.018166 waagent[1619]: 2025-03-17T18:49:30.018013Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 17 18:49:30.018166 waagent[1619]: Executing ['ip', '-a', '-o', 'link']:
Mar 17 18:49:30.018166 waagent[1619]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 17 18:49:30.018166 waagent[1619]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:88:58:28 brd ff:ff:ff:ff:ff:ff
Mar 17 18:49:30.018166 waagent[1619]: 3: enP9960s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:88:58:28 brd ff:ff:ff:ff:ff:ff\ altname enP9960p0s2
Mar 17 18:49:30.018166 waagent[1619]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 17 18:49:30.018166 waagent[1619]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 17 18:49:30.018166 waagent[1619]: 2: eth0 inet 10.200.8.24/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 17 18:49:30.018166 waagent[1619]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 17 18:49:30.018166 waagent[1619]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Mar 17 18:49:30.018166 waagent[1619]: 2: eth0 inet6 fe80::7e1e:52ff:fe88:5828/64 scope link \ valid_lft forever preferred_lft forever
Mar 17 18:49:30.025873 waagent[1619]: 2025-03-17T18:49:30.025762Z INFO ExtHandler ExtHandler Downloading agent manifest
Mar 17 18:49:30.087162 waagent[1619]: 2025-03-17T18:49:30.087019Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 17 18:49:30.087162 waagent[1619]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:30.087162 waagent[1619]: pkts bytes target prot opt in out source destination
Mar 17 18:49:30.087162 waagent[1619]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:30.087162 waagent[1619]: pkts bytes target prot opt in out source destination
Mar 17 18:49:30.087162 waagent[1619]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:30.087162 waagent[1619]: pkts bytes target prot opt in out source destination
Mar 17 18:49:30.087162 waagent[1619]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 17 18:49:30.087162 waagent[1619]: 139 15666 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 17 18:49:30.087162 waagent[1619]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 17 18:49:30.099391 waagent[1619]: 2025-03-17T18:49:30.099306Z INFO ExtHandler ExtHandler
Mar 17 18:49:30.099569 waagent[1619]: 2025-03-17T18:49:30.099488Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 148ba5d0-993f-44e7-923b-89dd3a7e5747 correlation 23bc0ddb-3bd9-4b87-b915-1957404ea1ae created: 2025-03-17T18:47:31.653351Z]
Mar 17 18:49:30.100524 waagent[1619]: 2025-03-17T18:49:30.100441Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
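The `/proc/net/route` dumps logged by MonitorHandler store each IPv4 address and mask as eight hex digits in little-endian (host) byte order. A short standard-library sketch decodes the values from the tables above; the decoded addresses match the gateway, subnet, wireserver (168.63.129.16), and link-local metadata endpoints seen elsewhere in this log:

```python
import socket
import struct

def decode(hex_addr: str) -> str:
    # /proc/net/route fields are 32-bit values printed as hex in
    # host (little-endian) byte order, so "0108C80A" is 10.200.8.1.
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

print(decode("0108C80A"))  # 10.200.8.1      (default gateway)
print(decode("0008C80A"))  # 10.200.8.0      (on-link subnet)
print(decode("00FFFFFF"))  # 255.255.255.0   (the /24 mask)
print(decode("10813FA8"))  # 168.63.129.16   (Azure wireserver host route)
print(decode("FEA9FEA9"))  # 169.254.169.254 (link-local metadata endpoint)
```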
Mar 17 18:49:30.102328 waagent[1619]: 2025-03-17T18:49:30.102268Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Mar 17 18:49:30.142909 waagent[1619]: 2025-03-17T18:49:30.142802Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Mar 17 18:49:30.159704 waagent[1619]: 2025-03-17T18:49:30.159604Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 167A37F5-6823-49CE-AFA2-E05342914A7A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Mar 17 18:49:36.822781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:49:36.823146 systemd[1]: Stopped kubelet.service.
Mar 17 18:49:36.825066 systemd[1]: Starting kubelet.service...
Mar 17 18:49:36.910989 systemd[1]: Started kubelet.service.
Mar 17 18:49:36.954882 kubelet[1667]: E0317 18:49:36.954837 1667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:36.956755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:36.956923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:49:46.476447 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Mar 17 18:49:47.072817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 18:49:47.073198 systemd[1]: Stopped kubelet.service.
Mar 17 18:49:47.075330 systemd[1]: Starting kubelet.service...
Mar 17 18:49:47.161030 systemd[1]: Started kubelet.service.
Mar 17 18:49:47.206826 kubelet[1677]: E0317 18:49:47.206767 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:47.208569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:47.208730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:49:57.322844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 18:49:57.323230 systemd[1]: Stopped kubelet.service.
Mar 17 18:49:57.325358 systemd[1]: Starting kubelet.service...
Mar 17 18:49:57.413249 systemd[1]: Started kubelet.service.
Mar 17 18:49:58.039101 kubelet[1688]: E0317 18:49:58.039044 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:58.040899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:58.041060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:00.704253 update_engine[1403]: I0317 18:50:00.704185 1403 update_attempter.cc:509] Updating boot flags...
Mar 17 18:50:08.072782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 18:50:08.073147 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:08.075261 systemd[1]: Starting kubelet.service...
Mar 17 18:50:08.410319 systemd[1]: Started kubelet.service.
Mar 17 18:50:08.744936 kubelet[1791]: E0317 18:50:08.744806 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:08.746650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:08.746816 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:09.217439 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:50:09.219478 systemd[1]: Started sshd@0-10.200.8.24:22-10.200.16.10:57846.service.
Mar 17 18:50:09.905935 sshd[1798]: Accepted publickey for core from 10.200.16.10 port 57846 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:50:09.907758 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:09.912971 systemd-logind[1402]: New session 3 of user core.
Mar 17 18:50:09.913661 systemd[1]: Started session-3.scope.
Mar 17 18:50:10.453972 systemd[1]: Started sshd@1-10.200.8.24:22-10.200.16.10:57862.service.
Mar 17 18:50:11.078363 sshd[1803]: Accepted publickey for core from 10.200.16.10 port 57862 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:50:11.080190 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:11.086240 systemd[1]: Started session-4.scope.
Mar 17 18:50:11.086852 systemd-logind[1402]: New session 4 of user core.
Mar 17 18:50:11.527194 sshd[1803]: pam_unix(sshd:session): session closed for user core
Mar 17 18:50:11.530439 systemd[1]: sshd@1-10.200.8.24:22-10.200.16.10:57862.service: Deactivated successfully.
Mar 17 18:50:11.531498 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:50:11.532375 systemd-logind[1402]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:50:11.533384 systemd-logind[1402]: Removed session 4.
Mar 17 18:50:11.631294 systemd[1]: Started sshd@2-10.200.8.24:22-10.200.16.10:57870.service.
Mar 17 18:50:12.260088 sshd[1809]: Accepted publickey for core from 10.200.16.10 port 57870 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:50:12.261877 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:12.267933 systemd[1]: Started session-5.scope.
Mar 17 18:50:12.268414 systemd-logind[1402]: New session 5 of user core.
Mar 17 18:50:12.702868 sshd[1809]: pam_unix(sshd:session): session closed for user core
Mar 17 18:50:12.706228 systemd[1]: sshd@2-10.200.8.24:22-10.200.16.10:57870.service: Deactivated successfully.
Mar 17 18:50:12.707134 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:50:12.707785 systemd-logind[1402]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:50:12.708579 systemd-logind[1402]: Removed session 5.
Mar 17 18:50:12.808130 systemd[1]: Started sshd@3-10.200.8.24:22-10.200.16.10:57876.service.
Mar 17 18:50:13.432435 sshd[1815]: Accepted publickey for core from 10.200.16.10 port 57876 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:50:13.434282 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:13.439973 systemd[1]: Started session-6.scope.
Mar 17 18:50:13.440452 systemd-logind[1402]: New session 6 of user core.
Mar 17 18:50:13.879391 sshd[1815]: pam_unix(sshd:session): session closed for user core
Mar 17 18:50:13.883024 systemd[1]: sshd@3-10.200.8.24:22-10.200.16.10:57876.service: Deactivated successfully.
Mar 17 18:50:13.884107 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:50:13.884772 systemd-logind[1402]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:50:13.885588 systemd-logind[1402]: Removed session 6.
Mar 17 18:50:13.987984 systemd[1]: Started sshd@4-10.200.8.24:22-10.200.16.10:57886.service.
Mar 17 18:50:14.612428 sshd[1821]: Accepted publickey for core from 10.200.16.10 port 57886 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:50:14.614153 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:14.619581 systemd[1]: Started session-7.scope.
Mar 17 18:50:14.620200 systemd-logind[1402]: New session 7 of user core.
Mar 17 18:50:15.026801 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:50:15.027119 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:50:15.052724 systemd[1]: Starting docker.service...
Mar 17 18:50:15.089737 env[1834]: time="2025-03-17T18:50:15.089693958Z" level=info msg="Starting up"
Mar 17 18:50:15.091210 env[1834]: time="2025-03-17T18:50:15.091172750Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:50:15.091210 env[1834]: time="2025-03-17T18:50:15.091191950Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:50:15.091372 env[1834]: time="2025-03-17T18:50:15.091217250Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:50:15.091372 env[1834]: time="2025-03-17T18:50:15.091230350Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:50:15.093019 env[1834]: time="2025-03-17T18:50:15.092983640Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:50:15.093019 env[1834]: time="2025-03-17T18:50:15.093005040Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:50:15.093190 env[1834]: time="2025-03-17T18:50:15.093024040Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:50:15.093190 env[1834]: time="2025-03-17T18:50:15.093040140Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:50:15.164142 env[1834]: time="2025-03-17T18:50:15.164093358Z" level=info msg="Loading containers: start."
Mar 17 18:50:15.278271 kernel: Initializing XFRM netlink socket
Mar 17 18:50:15.294019 env[1834]: time="2025-03-17T18:50:15.293972360Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:50:15.359811 systemd-networkd[1564]: docker0: Link UP
Mar 17 18:50:15.382444 env[1834]: time="2025-03-17T18:50:15.382394785Z" level=info msg="Loading containers: done."
Mar 17 18:50:15.399921 env[1834]: time="2025-03-17T18:50:15.399860591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:50:15.400176 env[1834]: time="2025-03-17T18:50:15.400128790Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:50:15.400292 env[1834]: time="2025-03-17T18:50:15.400269789Z" level=info msg="Daemon has completed initialization"
Mar 17 18:50:15.432266 systemd[1]: Started docker.service.
Mar 17 18:50:15.442283 env[1834]: time="2025-03-17T18:50:15.442220464Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:50:18.822822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 18:50:18.823185 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:18.825428 systemd[1]: Starting kubelet.service...
Mar 17 18:50:19.292707 systemd[1]: Started kubelet.service.
Mar 17 18:50:19.470506 kubelet[1957]: E0317 18:50:19.470437 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:19.472300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:19.472503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:24.327567 env[1410]: time="2025-03-17T18:50:24.327497661Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 18:50:25.180282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956604342.mount: Deactivated successfully.
Mar 17 18:50:27.260313 env[1410]: time="2025-03-17T18:50:27.260042833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:27.267397 env[1410]: time="2025-03-17T18:50:27.267341915Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:27.270545 env[1410]: time="2025-03-17T18:50:27.270501907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:27.275680 env[1410]: time="2025-03-17T18:50:27.275628495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:27.277764 env[1410]: time="2025-03-17T18:50:27.277720490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 18:50:27.291648 env[1410]: time="2025-03-17T18:50:27.291603855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:50:29.210664 env[1410]: time="2025-03-17T18:50:29.210603020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:29.217157 env[1410]: time="2025-03-17T18:50:29.217113906Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:29.221770 env[1410]: time="2025-03-17T18:50:29.221730296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:29.226510 env[1410]: time="2025-03-17T18:50:29.226462185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:29.227171 env[1410]: time="2025-03-17T18:50:29.227139784Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 18:50:29.237587 env[1410]: time="2025-03-17T18:50:29.237554861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:50:29.572746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 17 18:50:29.572992 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:29.574765 systemd[1]: Starting kubelet.service...
Mar 17 18:50:29.659859 systemd[1]: Started kubelet.service.
Mar 17 18:50:30.252310 kubelet[1979]: E0317 18:50:30.252259 1979 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:30.254313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:30.254487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:31.145078 env[1410]: time="2025-03-17T18:50:31.145016363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:31.150474 env[1410]: time="2025-03-17T18:50:31.150419579Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:31.154517 env[1410]: time="2025-03-17T18:50:31.154468116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:31.160220 env[1410]: time="2025-03-17T18:50:31.160172528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:31.160986 env[1410]: time="2025-03-17T18:50:31.160946816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 18:50:31.172000 env[1410]: time="2025-03-17T18:50:31.171960645Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:50:32.247932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744026318.mount: Deactivated successfully.
Mar 17 18:50:32.840447 env[1410]: time="2025-03-17T18:50:32.840376907Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:32.845832 env[1410]: time="2025-03-17T18:50:32.845769826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:32.850643 env[1410]: time="2025-03-17T18:50:32.850581453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:32.855433 env[1410]: time="2025-03-17T18:50:32.855378981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:32.855737 env[1410]: time="2025-03-17T18:50:32.855700076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 18:50:32.867541 env[1410]: time="2025-03-17T18:50:32.867473098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:50:33.391495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622158366.mount: Deactivated successfully.
Mar 17 18:50:34.657864 env[1410]: time="2025-03-17T18:50:34.657801131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:34.665368 env[1410]: time="2025-03-17T18:50:34.665317823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:34.671751 env[1410]: time="2025-03-17T18:50:34.671687733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:34.677736 env[1410]: time="2025-03-17T18:50:34.677676647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:34.678446 env[1410]: time="2025-03-17T18:50:34.678410237Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:50:34.690216 env[1410]: time="2025-03-17T18:50:34.690177969Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:50:35.219940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234351806.mount: Deactivated successfully.
Mar 17 18:50:35.244427 env[1410]: time="2025-03-17T18:50:35.244373852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:35.252632 env[1410]: time="2025-03-17T18:50:35.252568338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:35.256576 env[1410]: time="2025-03-17T18:50:35.256530983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:35.260962 env[1410]: time="2025-03-17T18:50:35.260918522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:35.261489 env[1410]: time="2025-03-17T18:50:35.261454715Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 18:50:35.272196 env[1410]: time="2025-03-17T18:50:35.272145866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:50:35.857832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846678599.mount: Deactivated successfully.
Mar 17 18:50:38.466633 env[1410]: time="2025-03-17T18:50:38.466567376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:38.475537 env[1410]: time="2025-03-17T18:50:38.475483362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:38.482625 env[1410]: time="2025-03-17T18:50:38.482563572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:38.488004 env[1410]: time="2025-03-17T18:50:38.487948303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:38.488936 env[1410]: time="2025-03-17T18:50:38.488889791Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 18:50:40.322961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 17 18:50:40.323234 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:40.328830 systemd[1]: Starting kubelet.service...
Mar 17 18:50:40.525290 systemd[1]: Started kubelet.service.
Mar 17 18:50:40.600171 kubelet[2068]: E0317 18:50:40.599751 2068 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:40.602930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:40.603107 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:41.978665 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:41.982465 systemd[1]: Starting kubelet.service...
Mar 17 18:50:42.016392 systemd[1]: Reloading.
Mar 17 18:50:42.152827 /usr/lib/systemd/system-generators/torcx-generator[2102]: time="2025-03-17T18:50:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:50:42.153382 /usr/lib/systemd/system-generators/torcx-generator[2102]: time="2025-03-17T18:50:42Z" level=info msg="torcx already run"
Mar 17 18:50:42.249585 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:50:42.249610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:50:42.266996 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:50:42.368752 systemd[1]: Started kubelet.service.
Mar 17 18:50:42.371412 systemd[1]: Stopping kubelet.service...
Mar 17 18:50:42.371820 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:50:42.372060 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:42.374339 systemd[1]: Starting kubelet.service...
Mar 17 18:50:42.626802 systemd[1]: Started kubelet.service.
Mar 17 18:50:42.671772 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:50:42.671772 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:50:42.671772 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:50:42.672336 kubelet[2169]: I0317 18:50:42.671829 2169 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:50:43.932102 kubelet[2169]: I0317 18:50:43.932045 2169 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:50:43.932536 kubelet[2169]: I0317 18:50:43.932504 2169 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:50:43.932817 kubelet[2169]: I0317 18:50:43.932793 2169 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:50:43.945693 kubelet[2169]: I0317 18:50:43.945647 2169 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:50:43.947254 kubelet[2169]: E0317 18:50:43.947220 2169 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:43.956694 kubelet[2169]: I0317 18:50:43.956653 2169 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:50:43.958008 kubelet[2169]: I0317 18:50:43.957942 2169 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:50:43.958278 kubelet[2169]: I0317 18:50:43.958005 2169 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-961279aa07","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:50:43.958731 kubelet[2169]: I0317 18:50:43.958710 2169 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:50:43.958801 kubelet[2169]: I0317 18:50:43.958739 2169 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:50:43.958913 kubelet[2169]: I0317 18:50:43.958895 2169 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:50:43.959808 kubelet[2169]: I0317 18:50:43.959785 2169 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:50:43.959808 kubelet[2169]: I0317 18:50:43.959811 2169 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:50:43.959957 kubelet[2169]: I0317 18:50:43.959845 2169 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:50:43.959957 kubelet[2169]: I0317 18:50:43.959867 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:50:43.973248 kubelet[2169]: I0317 18:50:43.973212 2169 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:50:43.975478 kubelet[2169]: W0317 18:50:43.975400 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-961279aa07&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:43.975665 kubelet[2169]: E0317 18:50:43.975494 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-961279aa07&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:43.975665 kubelet[2169]: I0317 18:50:43.975554 2169 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:50:43.975665 kubelet[2169]: W0317 18:50:43.975618 2169 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:50:43.977872 kubelet[2169]: W0317 18:50:43.977801 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:43.978141 kubelet[2169]: E0317 18:50:43.978123 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:43.979752 kubelet[2169]: I0317 18:50:43.979724 2169 server.go:1264] "Started kubelet"
Mar 17 18:50:43.996687 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:50:43.997133 kubelet[2169]: I0317 18:50:43.996917 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:50:43.997998 kubelet[2169]: E0317 18:50:43.997812 2169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-961279aa07.182dabb97acb3a86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-961279aa07,UID:ci-3510.3.7-a-961279aa07,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-961279aa07,},FirstTimestamp:2025-03-17 18:50:43.979688582 +0000 UTC m=+1.345846381,LastTimestamp:2025-03-17 18:50:43.979688582 +0000 UTC m=+1.345846381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-961279aa07,}"
Mar 17 18:50:44.003415 kubelet[2169]: I0317 18:50:44.003344 2169 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:50:44.004725 kubelet[2169]: I0317 18:50:44.004689 2169 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:50:44.005703 kubelet[2169]: I0317 18:50:44.005680 2169 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:50:44.005909 kubelet[2169]: I0317 18:50:44.005852 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:50:44.006175 kubelet[2169]: I0317 18:50:44.006153 2169 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:50:44.007052 kubelet[2169]: E0317 18:50:44.006534 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-961279aa07?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="200ms"
Mar 17 18:50:44.008038 kubelet[2169]: I0317 18:50:44.008010 2169 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:50:44.008198 kubelet[2169]: I0317 18:50:44.008161 2169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:50:44.009862 kubelet[2169]: I0317 18:50:44.009723 2169 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:50:44.010335 kubelet[2169]: W0317 18:50:44.010279 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:44.010441 kubelet[2169]: E0317 18:50:44.010347 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:44.010558 kubelet[2169]: I0317 18:50:44.010536 2169 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:50:44.012598 kubelet[2169]: I0317 18:50:44.012572 2169 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:50:44.017433 kubelet[2169]: E0317 18:50:44.017391 2169 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:50:44.062817 kubelet[2169]: I0317 18:50:44.062778 2169 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:50:44.062817 kubelet[2169]: I0317 18:50:44.062795 2169 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:50:44.062817 kubelet[2169]: I0317 18:50:44.062821 2169 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:50:44.070317 kubelet[2169]: I0317 18:50:44.070277 2169 policy_none.go:49] "None policy: Start"
Mar 17 18:50:44.071246 kubelet[2169]: I0317 18:50:44.071216 2169 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:50:44.071410 kubelet[2169]: I0317 18:50:44.071267 2169 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:50:44.079689 systemd[1]: Created slice kubepods.slice.
Mar 17 18:50:44.085144 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:50:44.088583 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:50:44.095028 kubelet[2169]: I0317 18:50:44.094996 2169 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:50:44.095629 kubelet[2169]: I0317 18:50:44.095532 2169 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:50:44.095755 kubelet[2169]: I0317 18:50:44.095742 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:50:44.101552 kubelet[2169]: E0317 18:50:44.097677 2169 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-a-961279aa07\" not found"
Mar 17 18:50:44.107812 kubelet[2169]: I0317 18:50:44.107551 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:50:44.109189 kubelet[2169]: I0317 18:50:44.109153 2169 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.109633 kubelet[2169]: E0317 18:50:44.109596 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.110117 kubelet[2169]: I0317 18:50:44.110095 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:50:44.110248 kubelet[2169]: I0317 18:50:44.110237 2169 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:50:44.110361 kubelet[2169]: I0317 18:50:44.110351 2169 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:50:44.110502 kubelet[2169]: E0317 18:50:44.110487 2169 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 17 18:50:44.111856 kubelet[2169]: W0317 18:50:44.111798 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:44.112051 kubelet[2169]: E0317 18:50:44.112026 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Mar 17 18:50:44.207715 kubelet[2169]: E0317 18:50:44.207549 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-961279aa07?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="400ms"
Mar 17 18:50:44.210927 kubelet[2169]: I0317 18:50:44.210862 2169 topology_manager.go:215] "Topology Admit Handler" podUID="939852edf363424fa77e18ea6c04e5a1" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.212594 kubelet[2169]: I0317 18:50:44.212556 2169 topology_manager.go:215] "Topology Admit Handler" podUID="6f73a52fd6e43e333d90895af685febd" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.213245 kubelet[2169]: I0317 18:50:44.213217 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/939852edf363424fa77e18ea6c04e5a1-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-961279aa07\" (UID: \"939852edf363424fa77e18ea6c04e5a1\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.213535 kubelet[2169]: I0317 18:50:44.213505 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/939852edf363424fa77e18ea6c04e5a1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-961279aa07\" (UID: \"939852edf363424fa77e18ea6c04e5a1\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.213622 kubelet[2169]: I0317 18:50:44.213552 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/939852edf363424fa77e18ea6c04e5a1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-961279aa07\" (UID: \"939852edf363424fa77e18ea6c04e5a1\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.214523 kubelet[2169]: I0317 18:50:44.214496 2169 topology_manager.go:215] "Topology Admit Handler" podUID="4a524ebe17f3873544b0bd37e1eb8e68" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-961279aa07"
Mar 17 18:50:44.221678 systemd[1]: Created slice kubepods-burstable-pod939852edf363424fa77e18ea6c04e5a1.slice.
Mar 17 18:50:44.232142 systemd[1]: Created slice kubepods-burstable-pod6f73a52fd6e43e333d90895af685febd.slice.
Mar 17 18:50:44.240795 systemd[1]: Created slice kubepods-burstable-pod4a524ebe17f3873544b0bd37e1eb8e68.slice.
Mar 17 18:50:44.312614 kubelet[2169]: I0317 18:50:44.312577 2169 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.313115 kubelet[2169]: E0317 18:50:44.313061 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.314193 kubelet[2169]: I0317 18:50:44.314166 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.314310 kubelet[2169]: I0317 18:50:44.314288 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.314394 kubelet[2169]: I0317 18:50:44.314369 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.314443 kubelet[2169]: I0317 18:50:44.314398 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a524ebe17f3873544b0bd37e1eb8e68-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.7-a-961279aa07\" (UID: \"4a524ebe17f3873544b0bd37e1eb8e68\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.314443 kubelet[2169]: I0317 18:50:44.314424 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.314527 kubelet[2169]: I0317 18:50:44.314451 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.532855 env[1410]: time="2025-03-17T18:50:44.532192877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-961279aa07,Uid:939852edf363424fa77e18ea6c04e5a1,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:44.536462 env[1410]: time="2025-03-17T18:50:44.536416131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-961279aa07,Uid:6f73a52fd6e43e333d90895af685febd,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:44.544023 env[1410]: time="2025-03-17T18:50:44.543968449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-961279aa07,Uid:4a524ebe17f3873544b0bd37e1eb8e68,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:44.609093 kubelet[2169]: E0317 18:50:44.609015 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-961279aa07?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="800ms" Mar 17 18:50:44.715183 kubelet[2169]: I0317 18:50:44.715140 2169 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:44.715617 kubelet[2169]: E0317 18:50:44.715586 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:45.069886 kubelet[2169]: W0317 18:50:45.069836 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.069886 kubelet[2169]: E0317 18:50:45.069890 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.141294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916203178.mount: Deactivated successfully. 
Mar 17 18:50:45.176730 kubelet[2169]: W0317 18:50:45.176644 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.176730 kubelet[2169]: E0317 18:50:45.176728 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.289864 kubelet[2169]: W0317 18:50:45.289787 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.289864 kubelet[2169]: E0317 18:50:45.289870 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.409956 kubelet[2169]: E0317 18:50:45.409889 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-961279aa07?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="1.6s" Mar 17 18:50:45.478339 kubelet[2169]: W0317 18:50:45.478263 2169 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-961279aa07&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.478339 
kubelet[2169]: E0317 18:50:45.478342 2169 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-961279aa07&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:45.517696 kubelet[2169]: I0317 18:50:45.517661 2169 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:45.518301 kubelet[2169]: E0317 18:50:45.518250 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:45.703648 env[1410]: time="2025-03-17T18:50:45.703516360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.823537 env[1410]: time="2025-03-17T18:50:45.823470692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.826978 env[1410]: time="2025-03-17T18:50:45.826927155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.833366 env[1410]: time="2025-03-17T18:50:45.833312188Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.983513 env[1410]: time="2025-03-17T18:50:45.982983605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.987930 
env[1410]: time="2025-03-17T18:50:45.987861854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.991750 env[1410]: time="2025-03-17T18:50:45.991695413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.997466 env[1410]: time="2025-03-17T18:50:45.996562762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:45.997656 kubelet[2169]: E0317 18:50:45.997407 2169 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.24:6443: connect: connection refused Mar 17 18:50:46.001978 env[1410]: time="2025-03-17T18:50:46.001929005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:46.671865 env[1410]: time="2025-03-17T18:50:46.671797209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:46.684855 env[1410]: time="2025-03-17T18:50:46.684793375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 
17 18:50:46.689864 env[1410]: time="2025-03-17T18:50:46.689805623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:46.756444 env[1410]: time="2025-03-17T18:50:46.749297211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:46.756444 env[1410]: time="2025-03-17T18:50:46.749347110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:46.756444 env[1410]: time="2025-03-17T18:50:46.749361610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:46.756444 env[1410]: time="2025-03-17T18:50:46.749509409Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f700658109b1ee0880852b767407a9631fe79b708b32d0ef3dd28429188f579e pid=2209 runtime=io.containerd.runc.v2 Mar 17 18:50:46.768391 systemd[1]: Started cri-containerd-f700658109b1ee0880852b767407a9631fe79b708b32d0ef3dd28429188f579e.scope. Mar 17 18:50:46.817149 env[1410]: time="2025-03-17T18:50:46.816316621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:46.817149 env[1410]: time="2025-03-17T18:50:46.816357720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:46.817149 env[1410]: time="2025-03-17T18:50:46.816370920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:46.817149 env[1410]: time="2025-03-17T18:50:46.816488119Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/997cdab70653c3bd63b0e0e4d961d76b5397ad1d785689d08dba72b24b3a4083 pid=2249 runtime=io.containerd.runc.v2 Mar 17 18:50:46.828602 env[1410]: time="2025-03-17T18:50:46.828496395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:46.828602 env[1410]: time="2025-03-17T18:50:46.828543595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:46.828602 env[1410]: time="2025-03-17T18:50:46.828558695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:46.828974 env[1410]: time="2025-03-17T18:50:46.828914591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02c7e7bca69f00870562dc56fee2f98cbb86fc772639e2c57a7c0ba4ae3d0218 pid=2250 runtime=io.containerd.runc.v2 Mar 17 18:50:46.851681 systemd[1]: Started cri-containerd-997cdab70653c3bd63b0e0e4d961d76b5397ad1d785689d08dba72b24b3a4083.scope. 
Mar 17 18:50:46.857633 env[1410]: time="2025-03-17T18:50:46.857580596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-961279aa07,Uid:939852edf363424fa77e18ea6c04e5a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f700658109b1ee0880852b767407a9631fe79b708b32d0ef3dd28429188f579e\"" Mar 17 18:50:46.863418 env[1410]: time="2025-03-17T18:50:46.863372236Z" level=info msg="CreateContainer within sandbox \"f700658109b1ee0880852b767407a9631fe79b708b32d0ef3dd28429188f579e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:50:46.875604 systemd[1]: Started cri-containerd-02c7e7bca69f00870562dc56fee2f98cbb86fc772639e2c57a7c0ba4ae3d0218.scope. Mar 17 18:50:46.912228 env[1410]: time="2025-03-17T18:50:46.912143234Z" level=info msg="CreateContainer within sandbox \"f700658109b1ee0880852b767407a9631fe79b708b32d0ef3dd28429188f579e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a426326b7d69d55267246b70465f4229d4e1ea435a07bc9d42b0e0d97dad8570\"" Mar 17 18:50:46.915097 env[1410]: time="2025-03-17T18:50:46.913398221Z" level=info msg="StartContainer for \"a426326b7d69d55267246b70465f4229d4e1ea435a07bc9d42b0e0d97dad8570\"" Mar 17 18:50:46.926287 env[1410]: time="2025-03-17T18:50:46.926125690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-961279aa07,Uid:4a524ebe17f3873544b0bd37e1eb8e68,Namespace:kube-system,Attempt:0,} returns sandbox id \"997cdab70653c3bd63b0e0e4d961d76b5397ad1d785689d08dba72b24b3a4083\"" Mar 17 18:50:46.930247 env[1410]: time="2025-03-17T18:50:46.930189148Z" level=info msg="CreateContainer within sandbox \"997cdab70653c3bd63b0e0e4d961d76b5397ad1d785689d08dba72b24b3a4083\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:50:46.954715 systemd[1]: Started cri-containerd-a426326b7d69d55267246b70465f4229d4e1ea435a07bc9d42b0e0d97dad8570.scope. 
Mar 17 18:50:46.956679 env[1410]: time="2025-03-17T18:50:46.956580577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-961279aa07,Uid:6f73a52fd6e43e333d90895af685febd,Namespace:kube-system,Attempt:0,} returns sandbox id \"02c7e7bca69f00870562dc56fee2f98cbb86fc772639e2c57a7c0ba4ae3d0218\"" Mar 17 18:50:46.963124 env[1410]: time="2025-03-17T18:50:46.963079310Z" level=info msg="CreateContainer within sandbox \"02c7e7bca69f00870562dc56fee2f98cbb86fc772639e2c57a7c0ba4ae3d0218\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:50:47.002789 env[1410]: time="2025-03-17T18:50:47.002740202Z" level=info msg="CreateContainer within sandbox \"997cdab70653c3bd63b0e0e4d961d76b5397ad1d785689d08dba72b24b3a4083\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ebf37cf900aa8fab35376489df2436c50837b4dd56a040e02f9332db13ebc8f\"" Mar 17 18:50:47.003463 env[1410]: time="2025-03-17T18:50:47.003428195Z" level=info msg="StartContainer for \"4ebf37cf900aa8fab35376489df2436c50837b4dd56a040e02f9332db13ebc8f\"" Mar 17 18:50:47.010759 kubelet[2169]: E0317 18:50:47.010705 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-961279aa07?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="3.2s" Mar 17 18:50:47.030299 env[1410]: time="2025-03-17T18:50:47.030228326Z" level=info msg="StartContainer for \"a426326b7d69d55267246b70465f4229d4e1ea435a07bc9d42b0e0d97dad8570\" returns successfully" Mar 17 18:50:47.038261 systemd[1]: Started cri-containerd-4ebf37cf900aa8fab35376489df2436c50837b4dd56a040e02f9332db13ebc8f.scope. 
Mar 17 18:50:47.040465 env[1410]: time="2025-03-17T18:50:47.040417524Z" level=info msg="CreateContainer within sandbox \"02c7e7bca69f00870562dc56fee2f98cbb86fc772639e2c57a7c0ba4ae3d0218\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"988c7c544a9e54f8efabe9d7133e99d785383f737090883b2a66381d0e786c47\"" Mar 17 18:50:47.041560 env[1410]: time="2025-03-17T18:50:47.041528213Z" level=info msg="StartContainer for \"988c7c544a9e54f8efabe9d7133e99d785383f737090883b2a66381d0e786c47\"" Mar 17 18:50:47.061032 systemd[1]: Started cri-containerd-988c7c544a9e54f8efabe9d7133e99d785383f737090883b2a66381d0e786c47.scope. Mar 17 18:50:47.120624 kubelet[2169]: I0317 18:50:47.120117 2169 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:47.120624 kubelet[2169]: E0317 18:50:47.120580 2169 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:47.158767 env[1410]: time="2025-03-17T18:50:47.158709438Z" level=info msg="StartContainer for \"4ebf37cf900aa8fab35376489df2436c50837b4dd56a040e02f9332db13ebc8f\" returns successfully" Mar 17 18:50:47.226640 env[1410]: time="2025-03-17T18:50:47.226497458Z" level=info msg="StartContainer for \"988c7c544a9e54f8efabe9d7133e99d785383f737090883b2a66381d0e786c47\" returns successfully" Mar 17 18:50:49.974533 kubelet[2169]: I0317 18:50:49.974472 2169 apiserver.go:52] "Watching apiserver" Mar 17 18:50:50.010124 kubelet[2169]: I0317 18:50:50.010050 2169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:50:50.057086 kubelet[2169]: E0317 18:50:50.057042 2169 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.7-a-961279aa07" not found Mar 17 18:50:50.214919 
kubelet[2169]: E0317 18:50:50.214845 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-a-961279aa07\" not found" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:50.323950 kubelet[2169]: I0317 18:50:50.323916 2169 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:50.338767 kubelet[2169]: I0317 18:50:50.338727 2169 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:51.226671 kubelet[2169]: W0317 18:50:51.226635 2169 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:52.327582 systemd[1]: Reloading. Mar 17 18:50:52.427255 /usr/lib/systemd/system-generators/torcx-generator[2467]: time="2025-03-17T18:50:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:50:52.430943 /usr/lib/systemd/system-generators/torcx-generator[2467]: time="2025-03-17T18:50:52Z" level=info msg="torcx already run" Mar 17 18:50:52.524126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:50:52.524146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:50:52.543423 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:50:52.661038 kubelet[2169]: E0317 18:50:52.660802 2169 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510.3.7-a-961279aa07.182dabb97acb3a86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-961279aa07,UID:ci-3510.3.7-a-961279aa07,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-961279aa07,},FirstTimestamp:2025-03-17 18:50:43.979688582 +0000 UTC m=+1.345846381,LastTimestamp:2025-03-17 18:50:43.979688582 +0000 UTC m=+1.345846381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-961279aa07,}" Mar 17 18:50:52.661939 systemd[1]: Stopping kubelet.service... Mar 17 18:50:52.677638 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:52.677874 systemd[1]: Stopped kubelet.service. Mar 17 18:50:52.677944 systemd[1]: kubelet.service: Consumed 1.214s CPU time. Mar 17 18:50:52.680161 systemd[1]: Starting kubelet.service... Mar 17 18:50:54.590923 systemd[1]: Started kubelet.service. Mar 17 18:50:54.642933 kubelet[2533]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:54.642933 kubelet[2533]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:50:54.642933 kubelet[2533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:50:54.643476 kubelet[2533]: I0317 18:50:54.642989 2533 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:54.647797 kubelet[2533]: I0317 18:50:54.647759 2533 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:50:54.647797 kubelet[2533]: I0317 18:50:54.647785 2533 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:54.648013 kubelet[2533]: I0317 18:50:54.648004 2533 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:50:54.649265 kubelet[2533]: I0317 18:50:54.649229 2533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:50:54.650515 kubelet[2533]: I0317 18:50:54.650488 2533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:54.662558 kubelet[2533]: I0317 18:50:54.662525 2533 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:50:54.662847 kubelet[2533]: I0317 18:50:54.662809 2533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:54.663035 kubelet[2533]: I0317 18:50:54.662847 2533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-961279aa07","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:50:54.663196 kubelet[2533]: I0317 18:50:54.663050 2533 topology_manager.go:138] "Creating topology manager with none policy" Mar 
17 18:50:54.663196 kubelet[2533]: I0317 18:50:54.663065 2533 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:50:54.663196 kubelet[2533]: I0317 18:50:54.663151 2533 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:54.663335 kubelet[2533]: I0317 18:50:54.663263 2533 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:50:54.663335 kubelet[2533]: I0317 18:50:54.663286 2533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:54.663335 kubelet[2533]: I0317 18:50:54.663317 2533 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:50:54.663450 kubelet[2533]: I0317 18:50:54.663339 2533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:54.673440 waagent[1619]: 2025-03-17T18:50:54.673314Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 17 18:50:54.675177 kubelet[2533]: I0317 18:50:54.675145 2533 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:50:54.675430 kubelet[2533]: I0317 18:50:54.675409 2533 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:54.675957 kubelet[2533]: I0317 18:50:54.675898 2533 server.go:1264] "Started kubelet" Mar 17 18:50:54.677804 kubelet[2533]: I0317 18:50:54.677784 2533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:54.678645 kubelet[2533]: I0317 18:50:54.678603 2533 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:54.679960 kubelet[2533]: I0317 18:50:54.679941 2533 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:50:54.682115 kubelet[2533]: I0317 18:50:54.682020 2533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:54.682444 kubelet[2533]: I0317 18:50:54.682419 2533 server.go:227] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:54.686869 waagent[1619]: 2025-03-17T18:50:54.686767Z INFO ExtHandler Mar 17 18:50:54.687330 waagent[1619]: 2025-03-17T18:50:54.687261Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5a19f4b7-3e6e-4052-a412-c352ba784e90 eTag: 13378841216330483110 source: Fabric] Mar 17 18:50:54.688564 waagent[1619]: 2025-03-17T18:50:54.688495Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 17 18:50:54.690512 waagent[1619]: 2025-03-17T18:50:54.690441Z INFO ExtHandler Mar 17 18:50:54.690854 waagent[1619]: 2025-03-17T18:50:54.690786Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 17 18:50:54.694713 kubelet[2533]: I0317 18:50:54.694692 2533 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:50:54.696368 kubelet[2533]: E0317 18:50:54.696341 2533 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:54.698109 kubelet[2533]: I0317 18:50:54.698089 2533 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:50:54.698383 kubelet[2533]: I0317 18:50:54.698369 2533 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:54.699670 kubelet[2533]: I0317 18:50:54.699644 2533 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:54.699808 kubelet[2533]: I0317 18:50:54.699771 2533 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:54.700873 kubelet[2533]: I0317 18:50:54.700845 2533 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 18:50:54.702330 kubelet[2533]: I0317 18:50:54.702310 2533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:50:54.702462 kubelet[2533]: I0317 18:50:54.702450 2533 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:54.702604 kubelet[2533]: I0317 18:50:54.702593 2533 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:50:54.702738 kubelet[2533]: E0317 18:50:54.702718 2533 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:50:54.704682 kubelet[2533]: I0317 18:50:54.704655 2533 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:50:54.765898 kubelet[2533]: I0317 18:50:54.765863 2533 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:54.765898 kubelet[2533]: I0317 18:50:54.765888 2533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:54.765898 kubelet[2533]: I0317 18:50:54.765912 2533 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:54.766209 kubelet[2533]: I0317 18:50:54.766165 2533 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:50:54.766209 kubelet[2533]: I0317 18:50:54.766180 2533 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:50:54.766209 kubelet[2533]: I0317 18:50:54.766205 2533 policy_none.go:49] "None policy: Start" Mar 17 18:50:54.767122 kubelet[2533]: I0317 18:50:54.767097 2533 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:54.767250 kubelet[2533]: I0317 18:50:54.767130 2533 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:54.767314 kubelet[2533]: I0317 18:50:54.767300 2533 state_mem.go:75] "Updated machine memory state" Mar 17 18:50:54.769188 waagent[1619]: 2025-03-17T18:50:54.769060Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 
18:50:54.772268 kubelet[2533]: I0317 18:50:54.772247 2533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:54.772593 kubelet[2533]: I0317 18:50:54.772555 2533 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:54.772740 kubelet[2533]: I0317 18:50:54.772732 2533 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:54.797972 kubelet[2533]: I0317 18:50:54.797937 2533 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:54.803827 kubelet[2533]: I0317 18:50:54.803761 2533 topology_manager.go:215] "Topology Admit Handler" podUID="939852edf363424fa77e18ea6c04e5a1" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-961279aa07" Mar 17 18:50:54.804181 kubelet[2533]: I0317 18:50:54.804163 2533 topology_manager.go:215] "Topology Admit Handler" podUID="6f73a52fd6e43e333d90895af685febd" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:54.804825 kubelet[2533]: I0317 18:50:54.804802 2533 topology_manager.go:215] "Topology Admit Handler" podUID="4a524ebe17f3873544b0bd37e1eb8e68" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-961279aa07" Mar 17 18:50:54.815932 kubelet[2533]: W0317 18:50:54.815899 2533 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:54.816379 kubelet[2533]: W0317 18:50:54.815909 2533 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:54.817588 kubelet[2533]: W0317 18:50:54.817561 2533 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Mar 17 18:50:54.817817 kubelet[2533]: E0317 18:50:54.817795 2533 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-a-961279aa07\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07" Mar 17 18:50:54.817927 kubelet[2533]: I0317 18:50:54.817820 2533 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:54.818037 kubelet[2533]: I0317 18:50:54.818016 2533 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.000408 kubelet[2533]: I0317 18:50:55.000277 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.000714 kubelet[2533]: I0317 18:50:55.000694 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.000845 kubelet[2533]: I0317 18:50:55.000831 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.000966 kubelet[2533]: I0317 18:50:55.000951 2533 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a524ebe17f3873544b0bd37e1eb8e68-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-961279aa07\" (UID: \"4a524ebe17f3873544b0bd37e1eb8e68\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.001117 kubelet[2533]: I0317 18:50:55.001098 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.001285 kubelet[2533]: I0317 18:50:55.001259 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f73a52fd6e43e333d90895af685febd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-961279aa07\" (UID: \"6f73a52fd6e43e333d90895af685febd\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.004208 kubelet[2533]: I0317 18:50:55.001404 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/939852edf363424fa77e18ea6c04e5a1-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-961279aa07\" (UID: \"939852edf363424fa77e18ea6c04e5a1\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.006309 kubelet[2533]: I0317 18:50:55.006244 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/939852edf363424fa77e18ea6c04e5a1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-961279aa07\" (UID: \"939852edf363424fa77e18ea6c04e5a1\") " 
pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.006309 kubelet[2533]: I0317 18:50:55.006302 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/939852edf363424fa77e18ea6c04e5a1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-961279aa07\" (UID: \"939852edf363424fa77e18ea6c04e5a1\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.066297 waagent[1619]: 2025-03-17T18:50:55.066155Z INFO ExtHandler Downloaded certificate {'thumbprint': '6948104C905BDA372D09AB49226587DCAEF9275B', 'hasPrivateKey': False} Mar 17 18:50:55.067357 waagent[1619]: 2025-03-17T18:50:55.067288Z INFO ExtHandler Downloaded certificate {'thumbprint': '5891EBF570F1987B0E2D70C428E7E23E42180538', 'hasPrivateKey': True} Mar 17 18:50:55.068459 waagent[1619]: 2025-03-17T18:50:55.068392Z INFO ExtHandler Fetch goal state completed Mar 17 18:50:55.069510 waagent[1619]: 2025-03-17T18:50:55.069439Z INFO ExtHandler ExtHandler VM enabled for RSM updates, switching to RSM update mode Mar 17 18:50:55.070713 waagent[1619]: 2025-03-17T18:50:55.070654Z INFO ExtHandler ExtHandler Mar 17 18:50:55.070864 waagent[1619]: 2025-03-17T18:50:55.070810Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 481ce99a-3c00-4b7c-9bf9-fb9cce791916 correlation 23bc0ddb-3bd9-4b87-b915-1957404ea1ae created: 2025-03-17T18:50:46.245850Z] Mar 17 18:50:55.071633 waagent[1619]: 2025-03-17T18:50:55.071571Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 17 18:50:55.073733 waagent[1619]: 2025-03-17T18:50:55.073667Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms] Mar 17 18:50:55.083873 sudo[2570]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:50:55.084197 sudo[2570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:50:55.606006 sudo[2570]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:55.670752 kubelet[2533]: I0317 18:50:55.670704 2533 apiserver.go:52] "Watching apiserver" Mar 17 18:50:55.698731 kubelet[2533]: I0317 18:50:55.698683 2533 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:50:55.760576 kubelet[2533]: W0317 18:50:55.760525 2533 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:55.762914 kubelet[2533]: E0317 18:50:55.762875 2533 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-a-961279aa07\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07" Mar 17 18:50:55.799198 kubelet[2533]: I0317 18:50:55.799038 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-a-961279aa07" podStartSLOduration=4.79901291 podStartE2EDuration="4.79901291s" podCreationTimestamp="2025-03-17 18:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:55.787444405 +0000 UTC m=+1.187728020" watchObservedRunningTime="2025-03-17 18:50:55.79901291 +0000 UTC m=+1.199296625" Mar 17 18:50:55.812353 kubelet[2533]: I0317 18:50:55.812287 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-a-961279aa07" 
podStartSLOduration=1.812261802 podStartE2EDuration="1.812261802s" podCreationTimestamp="2025-03-17 18:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:55.800377399 +0000 UTC m=+1.200661014" watchObservedRunningTime="2025-03-17 18:50:55.812261802 +0000 UTC m=+1.212545517" Mar 17 18:50:55.827167 kubelet[2533]: I0317 18:50:55.827094 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-961279aa07" podStartSLOduration=1.827046081 podStartE2EDuration="1.827046081s" podCreationTimestamp="2025-03-17 18:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:55.813315194 +0000 UTC m=+1.213598909" watchObservedRunningTime="2025-03-17 18:50:55.827046081 +0000 UTC m=+1.227329796" Mar 17 18:50:57.164316 sudo[1824]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:57.267402 sshd[1821]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:57.270796 systemd[1]: sshd@4-10.200.8.24:22-10.200.16.10:57886.service: Deactivated successfully. Mar 17 18:50:57.271635 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:50:57.271799 systemd[1]: session-7.scope: Consumed 4.688s CPU time. Mar 17 18:50:57.273168 systemd-logind[1402]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:50:57.274043 systemd-logind[1402]: Removed session 7. Mar 17 18:51:08.533906 kubelet[2533]: I0317 18:51:08.533868 2533 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:51:08.534729 env[1410]: time="2025-03-17T18:51:08.534685415Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:51:08.535110 kubelet[2533]: I0317 18:51:08.534910 2533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:51:09.158356 kubelet[2533]: I0317 18:51:09.158309 2533 topology_manager.go:215] "Topology Admit Handler" podUID="8cae89de-7ddc-48ea-8b65-4121fe144999" podNamespace="kube-system" podName="kube-proxy-2twdj" Mar 17 18:51:09.164363 systemd[1]: Created slice kubepods-besteffort-pod8cae89de_7ddc_48ea_8b65_4121fe144999.slice. Mar 17 18:51:09.170108 kubelet[2533]: I0317 18:51:09.170059 2533 topology_manager.go:215] "Topology Admit Handler" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" podNamespace="kube-system" podName="cilium-sq97j" Mar 17 18:51:09.175264 kubelet[2533]: W0317 18:51:09.174805 2533 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-a-961279aa07" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-961279aa07' and this object Mar 17 18:51:09.175264 kubelet[2533]: E0317 18:51:09.174859 2533 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-a-961279aa07" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-961279aa07' and this object Mar 17 18:51:09.175264 kubelet[2533]: W0317 18:51:09.175097 2533 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-a-961279aa07" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-961279aa07' and this object Mar 17 18:51:09.175264 kubelet[2533]: E0317 18:51:09.175124 2533 
reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-a-961279aa07" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-961279aa07' and this object Mar 17 18:51:09.175515 kubelet[2533]: W0317 18:51:09.175374 2533 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-a-961279aa07" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-961279aa07' and this object Mar 17 18:51:09.175515 kubelet[2533]: E0317 18:51:09.175394 2533 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-a-961279aa07" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-961279aa07' and this object Mar 17 18:51:09.178284 systemd[1]: Created slice kubepods-burstable-pod4232e0b6_4788_48aa_b36c_c4dddd7c8182.slice. 
Mar 17 18:51:09.199957 kubelet[2533]: I0317 18:51:09.199915 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-xtables-lock\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200119 kubelet[2533]: I0317 18:51:09.199963 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4232e0b6-4788-48aa-b36c-c4dddd7c8182-clustermesh-secrets\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200119 kubelet[2533]: I0317 18:51:09.199986 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cae89de-7ddc-48ea-8b65-4121fe144999-lib-modules\") pod \"kube-proxy-2twdj\" (UID: \"8cae89de-7ddc-48ea-8b65-4121fe144999\") " pod="kube-system/kube-proxy-2twdj" Mar 17 18:51:09.200119 kubelet[2533]: I0317 18:51:09.200005 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-run\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200119 kubelet[2533]: I0317 18:51:09.200025 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94qb4\" (UniqueName: \"kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-kube-api-access-94qb4\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200119 kubelet[2533]: I0317 18:51:09.200047 2533 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hostproc\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200119 kubelet[2533]: I0317 18:51:09.200084 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-lib-modules\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200392 kubelet[2533]: I0317 18:51:09.200106 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-bpf-maps\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200392 kubelet[2533]: I0317 18:51:09.200125 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-etc-cni-netd\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200392 kubelet[2533]: I0317 18:51:09.200147 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-net\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200392 kubelet[2533]: I0317 18:51:09.200168 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-kernel\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200392 kubelet[2533]: I0317 18:51:09.200191 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-cgroup\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200392 kubelet[2533]: I0317 18:51:09.200215 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hubble-tls\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200600 kubelet[2533]: I0317 18:51:09.200238 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-config-path\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200600 kubelet[2533]: I0317 18:51:09.200262 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8sz\" (UniqueName: \"kubernetes.io/projected/8cae89de-7ddc-48ea-8b65-4121fe144999-kube-api-access-pk8sz\") pod \"kube-proxy-2twdj\" (UID: \"8cae89de-7ddc-48ea-8b65-4121fe144999\") " pod="kube-system/kube-proxy-2twdj" Mar 17 18:51:09.200600 kubelet[2533]: I0317 18:51:09.200284 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cae89de-7ddc-48ea-8b65-4121fe144999-kube-proxy\") pod \"kube-proxy-2twdj\" 
(UID: \"8cae89de-7ddc-48ea-8b65-4121fe144999\") " pod="kube-system/kube-proxy-2twdj" Mar 17 18:51:09.200600 kubelet[2533]: I0317 18:51:09.200306 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cni-path\") pod \"cilium-sq97j\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") " pod="kube-system/cilium-sq97j" Mar 17 18:51:09.200600 kubelet[2533]: I0317 18:51:09.200330 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cae89de-7ddc-48ea-8b65-4121fe144999-xtables-lock\") pod \"kube-proxy-2twdj\" (UID: \"8cae89de-7ddc-48ea-8b65-4121fe144999\") " pod="kube-system/kube-proxy-2twdj" Mar 17 18:51:09.307274 kubelet[2533]: E0317 18:51:09.307230 2533 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 18:51:09.307524 kubelet[2533]: E0317 18:51:09.307505 2533 projected.go:200] Error preparing data for projected volume kube-api-access-94qb4 for pod kube-system/cilium-sq97j: configmap "kube-root-ca.crt" not found Mar 17 18:51:09.307723 kubelet[2533]: E0317 18:51:09.307694 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-kube-api-access-94qb4 podName:4232e0b6-4788-48aa-b36c-c4dddd7c8182 nodeName:}" failed. No retries permitted until 2025-03-17 18:51:09.807670627 +0000 UTC m=+15.207954242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-94qb4" (UniqueName: "kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-kube-api-access-94qb4") pod "cilium-sq97j" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182") : configmap "kube-root-ca.crt" not found Mar 17 18:51:09.307876 kubelet[2533]: E0317 18:51:09.307711 2533 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 18:51:09.307998 kubelet[2533]: E0317 18:51:09.307985 2533 projected.go:200] Error preparing data for projected volume kube-api-access-pk8sz for pod kube-system/kube-proxy-2twdj: configmap "kube-root-ca.crt" not found Mar 17 18:51:09.308148 kubelet[2533]: E0317 18:51:09.308136 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cae89de-7ddc-48ea-8b65-4121fe144999-kube-api-access-pk8sz podName:8cae89de-7ddc-48ea-8b65-4121fe144999 nodeName:}" failed. No retries permitted until 2025-03-17 18:51:09.808119825 +0000 UTC m=+15.208403440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pk8sz" (UniqueName: "kubernetes.io/projected/8cae89de-7ddc-48ea-8b65-4121fe144999-kube-api-access-pk8sz") pod "kube-proxy-2twdj" (UID: "8cae89de-7ddc-48ea-8b65-4121fe144999") : configmap "kube-root-ca.crt" not found Mar 17 18:51:09.725605 kubelet[2533]: I0317 18:51:09.725548 2533 topology_manager.go:215] "Topology Admit Handler" podUID="2c18a260-96f8-4257-bb35-f769e1b63cb5" podNamespace="kube-system" podName="cilium-operator-599987898-cdv7p" Mar 17 18:51:09.731526 systemd[1]: Created slice kubepods-besteffort-pod2c18a260_96f8_4257_bb35_f769e1b63cb5.slice. 
Mar 17 18:51:09.806042 kubelet[2533]: I0317 18:51:09.806003 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c18a260-96f8-4257-bb35-f769e1b63cb5-cilium-config-path\") pod \"cilium-operator-599987898-cdv7p\" (UID: \"2c18a260-96f8-4257-bb35-f769e1b63cb5\") " pod="kube-system/cilium-operator-599987898-cdv7p" Mar 17 18:51:09.806042 kubelet[2533]: I0317 18:51:09.806046 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rczzw\" (UniqueName: \"kubernetes.io/projected/2c18a260-96f8-4257-bb35-f769e1b63cb5-kube-api-access-rczzw\") pod \"cilium-operator-599987898-cdv7p\" (UID: \"2c18a260-96f8-4257-bb35-f769e1b63cb5\") " pod="kube-system/cilium-operator-599987898-cdv7p" Mar 17 18:51:10.074154 env[1410]: time="2025-03-17T18:51:10.074108350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2twdj,Uid:8cae89de-7ddc-48ea-8b65-4121fe144999,Namespace:kube-system,Attempt:0,}" Mar 17 18:51:10.109813 env[1410]: time="2025-03-17T18:51:10.109745346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:10.110008 env[1410]: time="2025-03-17T18:51:10.109779346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:10.110008 env[1410]: time="2025-03-17T18:51:10.109793146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:10.110008 env[1410]: time="2025-03-17T18:51:10.109924045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b48c64ff8c16337cbc07dd4e7aa42092448ca93b6d54fa65fb823fff637c1395 pid=2620 runtime=io.containerd.runc.v2 Mar 17 18:51:10.128147 systemd[1]: Started cri-containerd-b48c64ff8c16337cbc07dd4e7aa42092448ca93b6d54fa65fb823fff637c1395.scope. Mar 17 18:51:10.157249 env[1410]: time="2025-03-17T18:51:10.157206374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2twdj,Uid:8cae89de-7ddc-48ea-8b65-4121fe144999,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48c64ff8c16337cbc07dd4e7aa42092448ca93b6d54fa65fb823fff637c1395\"" Mar 17 18:51:10.161480 env[1410]: time="2025-03-17T18:51:10.161438150Z" level=info msg="CreateContainer within sandbox \"b48c64ff8c16337cbc07dd4e7aa42092448ca93b6d54fa65fb823fff637c1395\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:51:10.197788 env[1410]: time="2025-03-17T18:51:10.197747542Z" level=info msg="CreateContainer within sandbox \"b48c64ff8c16337cbc07dd4e7aa42092448ca93b6d54fa65fb823fff637c1395\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f084785af7fa53a0f7c70156586b083db283f81e54ece37afb6e624fb8099782\"" Mar 17 18:51:10.198320 env[1410]: time="2025-03-17T18:51:10.198275539Z" level=info msg="StartContainer for \"f084785af7fa53a0f7c70156586b083db283f81e54ece37afb6e624fb8099782\"" Mar 17 18:51:10.216341 systemd[1]: Started cri-containerd-f084785af7fa53a0f7c70156586b083db283f81e54ece37afb6e624fb8099782.scope. 
Mar 17 18:51:10.248856 env[1410]: time="2025-03-17T18:51:10.248802850Z" level=info msg="StartContainer for \"f084785af7fa53a0f7c70156586b083db283f81e54ece37afb6e624fb8099782\" returns successfully" Mar 17 18:51:10.302174 kubelet[2533]: E0317 18:51:10.302128 2533 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 17 18:51:10.302174 kubelet[2533]: E0317 18:51:10.302167 2533 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-sq97j: failed to sync secret cache: timed out waiting for the condition Mar 17 18:51:10.302387 kubelet[2533]: E0317 18:51:10.302233 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hubble-tls podName:4232e0b6-4788-48aa-b36c-c4dddd7c8182 nodeName:}" failed. No retries permitted until 2025-03-17 18:51:10.802212144 +0000 UTC m=+16.202495759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hubble-tls") pod "cilium-sq97j" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:51:10.302509 kubelet[2533]: E0317 18:51:10.302489 2533 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 18:51:10.302573 kubelet[2533]: E0317 18:51:10.302553 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4232e0b6-4788-48aa-b36c-c4dddd7c8182-clustermesh-secrets podName:4232e0b6-4788-48aa-b36c-c4dddd7c8182 nodeName:}" failed. No retries permitted until 2025-03-17 18:51:10.802535442 +0000 UTC m=+16.202819057 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/4232e0b6-4788-48aa-b36c-c4dddd7c8182-clustermesh-secrets") pod "cilium-sq97j" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:51:10.302648 kubelet[2533]: E0317 18:51:10.302579 2533 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:51:10.302648 kubelet[2533]: E0317 18:51:10.302614 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-config-path podName:4232e0b6-4788-48aa-b36c-c4dddd7c8182 nodeName:}" failed. No retries permitted until 2025-03-17 18:51:10.802604842 +0000 UTC m=+16.202888457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-config-path") pod "cilium-sq97j" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:51:10.634777 env[1410]: time="2025-03-17T18:51:10.634737040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-cdv7p,Uid:2c18a260-96f8-4257-bb35-f769e1b63cb5,Namespace:kube-system,Attempt:0,}" Mar 17 18:51:10.679977 env[1410]: time="2025-03-17T18:51:10.679871382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:10.679977 env[1410]: time="2025-03-17T18:51:10.679924682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:10.679977 env[1410]: time="2025-03-17T18:51:10.679938682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:10.680534 env[1410]: time="2025-03-17T18:51:10.680488378Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b pid=2782 runtime=io.containerd.runc.v2 Mar 17 18:51:10.699729 systemd[1]: Started cri-containerd-b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b.scope. Mar 17 18:51:10.766384 env[1410]: time="2025-03-17T18:51:10.766335787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-cdv7p,Uid:2c18a260-96f8-4257-bb35-f769e1b63cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b\"" Mar 17 18:51:10.768773 env[1410]: time="2025-03-17T18:51:10.768741573Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:51:10.787895 kubelet[2533]: I0317 18:51:10.787481 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2twdj" podStartSLOduration=1.787458366 podStartE2EDuration="1.787458366s" podCreationTimestamp="2025-03-17 18:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:10.787272867 +0000 UTC m=+16.187556482" watchObservedRunningTime="2025-03-17 18:51:10.787458366 +0000 UTC m=+16.187742081" Mar 17 18:51:10.922032 systemd[1]: run-containerd-runc-k8s.io-b48c64ff8c16337cbc07dd4e7aa42092448ca93b6d54fa65fb823fff637c1395-runc.X9jQRz.mount: Deactivated successfully. 
Mar 17 18:51:10.982586 env[1410]: time="2025-03-17T18:51:10.982518049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sq97j,Uid:4232e0b6-4788-48aa-b36c-c4dddd7c8182,Namespace:kube-system,Attempt:0,}"
Mar 17 18:51:11.022121 env[1410]: time="2025-03-17T18:51:11.021887626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:51:11.022121 env[1410]: time="2025-03-17T18:51:11.021931426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:51:11.022121 env[1410]: time="2025-03-17T18:51:11.021946826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:51:11.022426 env[1410]: time="2025-03-17T18:51:11.022366324Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4 pid=2861 runtime=io.containerd.runc.v2
Mar 17 18:51:11.042950 systemd[1]: Started cri-containerd-50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4.scope.
Mar 17 18:51:11.048063 systemd[1]: run-containerd-runc-k8s.io-50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4-runc.uIF3zn.mount: Deactivated successfully.
Mar 17 18:51:11.075002 env[1410]: time="2025-03-17T18:51:11.074957629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sq97j,Uid:4232e0b6-4788-48aa-b36c-c4dddd7c8182,Namespace:kube-system,Attempt:0,} returns sandbox id \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\""
Mar 17 18:51:12.100150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521548838.mount: Deactivated successfully.
Mar 17 18:51:12.961170 env[1410]: time="2025-03-17T18:51:12.961105383Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:12.968921 env[1410]: time="2025-03-17T18:51:12.968885840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:12.973508 env[1410]: time="2025-03-17T18:51:12.973474115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:12.974003 env[1410]: time="2025-03-17T18:51:12.973970612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:51:12.975462 env[1410]: time="2025-03-17T18:51:12.975421505Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:51:12.977188 env[1410]: time="2025-03-17T18:51:12.977160395Z" level=info msg="CreateContainer within sandbox \"b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:51:13.008256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056192385.mount: Deactivated successfully.
Mar 17 18:51:13.022276 env[1410]: time="2025-03-17T18:51:13.022234350Z" level=info msg="CreateContainer within sandbox \"b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\""
Mar 17 18:51:13.024185 env[1410]: time="2025-03-17T18:51:13.022962147Z" level=info msg="StartContainer for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\""
Mar 17 18:51:13.039995 systemd[1]: Started cri-containerd-aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953.scope.
Mar 17 18:51:13.074210 env[1410]: time="2025-03-17T18:51:13.074167672Z" level=info msg="StartContainer for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" returns successfully"
Mar 17 18:51:20.129370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75518562.mount: Deactivated successfully.
Mar 17 18:51:22.850562 env[1410]: time="2025-03-17T18:51:22.850500029Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:22.855461 env[1410]: time="2025-03-17T18:51:22.855423007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:22.859518 env[1410]: time="2025-03-17T18:51:22.859479689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:22.860118 env[1410]: time="2025-03-17T18:51:22.860062686Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:51:22.863190 env[1410]: time="2025-03-17T18:51:22.862815874Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:51:22.893044 env[1410]: time="2025-03-17T18:51:22.892993940Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\""
Mar 17 18:51:22.893660 env[1410]: time="2025-03-17T18:51:22.893631537Z" level=info msg="StartContainer for \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\""
Mar 17 18:51:22.919854 systemd[1]: Started cri-containerd-2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36.scope.
Mar 17 18:51:22.951499 env[1410]: time="2025-03-17T18:51:22.950116885Z" level=info msg="StartContainer for \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\" returns successfully"
Mar 17 18:51:22.961318 systemd[1]: cri-containerd-2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36.scope: Deactivated successfully.
Mar 17 18:51:23.814313 kubelet[2533]: I0317 18:51:23.814250 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-cdv7p" podStartSLOduration=12.607266373 podStartE2EDuration="14.814234902s" podCreationTimestamp="2025-03-17 18:51:09 +0000 UTC" firstStartedPulling="2025-03-17 18:51:10.768280176 +0000 UTC m=+16.168563791" lastFinishedPulling="2025-03-17 18:51:12.975248605 +0000 UTC m=+18.375532320" observedRunningTime="2025-03-17 18:51:13.815918395 +0000 UTC m=+19.216202010" watchObservedRunningTime="2025-03-17 18:51:23.814234902 +0000 UTC m=+29.214518617"
Mar 17 18:51:23.883895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36-rootfs.mount: Deactivated successfully.
Mar 17 18:51:27.138695 env[1410]: time="2025-03-17T18:51:27.138643697Z" level=info msg="shim disconnected" id=2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36
Mar 17 18:51:27.138695 env[1410]: time="2025-03-17T18:51:27.138688596Z" level=warning msg="cleaning up after shim disconnected" id=2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36 namespace=k8s.io
Mar 17 18:51:27.138695 env[1410]: time="2025-03-17T18:51:27.138700796Z" level=info msg="cleaning up dead shim"
Mar 17 18:51:27.146585 env[1410]: time="2025-03-17T18:51:27.146546464Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:51:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2982 runtime=io.containerd.runc.v2\n"
Mar 17 18:51:27.812796 env[1410]: time="2025-03-17T18:51:27.812478563Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:51:27.851040 env[1410]: time="2025-03-17T18:51:27.850995507Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\""
Mar 17 18:51:27.851529 env[1410]: time="2025-03-17T18:51:27.851487805Z" level=info msg="StartContainer for \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\""
Mar 17 18:51:27.878145 systemd[1]: Started cri-containerd-566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0.scope.
Mar 17 18:51:27.907998 env[1410]: time="2025-03-17T18:51:27.904834788Z" level=info msg="StartContainer for \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\" returns successfully"
Mar 17 18:51:27.914773 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:51:27.915089 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:51:27.915317 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:51:27.917728 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:51:27.922972 systemd[1]: cri-containerd-566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0.scope: Deactivated successfully.
Mar 17 18:51:27.932513 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:51:27.956226 env[1410]: time="2025-03-17T18:51:27.956178280Z" level=info msg="shim disconnected" id=566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0
Mar 17 18:51:27.956435 env[1410]: time="2025-03-17T18:51:27.956230680Z" level=warning msg="cleaning up after shim disconnected" id=566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0 namespace=k8s.io
Mar 17 18:51:27.956435 env[1410]: time="2025-03-17T18:51:27.956243180Z" level=info msg="cleaning up dead shim"
Mar 17 18:51:27.964087 env[1410]: time="2025-03-17T18:51:27.964047248Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:51:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3046 runtime=io.containerd.runc.v2\n"
Mar 17 18:51:28.815240 env[1410]: time="2025-03-17T18:51:28.815160755Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:51:28.839205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0-rootfs.mount: Deactivated successfully.
Mar 17 18:51:28.859975 env[1410]: time="2025-03-17T18:51:28.859932277Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\""
Mar 17 18:51:28.861449 env[1410]: time="2025-03-17T18:51:28.860540574Z" level=info msg="StartContainer for \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\""
Mar 17 18:51:28.889547 systemd[1]: Started cri-containerd-4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa.scope.
Mar 17 18:51:28.918142 systemd[1]: cri-containerd-4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa.scope: Deactivated successfully.
Mar 17 18:51:28.921772 env[1410]: time="2025-03-17T18:51:28.921729930Z" level=info msg="StartContainer for \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\" returns successfully"
Mar 17 18:51:28.939105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa-rootfs.mount: Deactivated successfully.
Mar 17 18:51:28.949056 env[1410]: time="2025-03-17T18:51:28.949012822Z" level=info msg="shim disconnected" id=4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa
Mar 17 18:51:28.949056 env[1410]: time="2025-03-17T18:51:28.949057821Z" level=warning msg="cleaning up after shim disconnected" id=4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa namespace=k8s.io
Mar 17 18:51:28.949056 env[1410]: time="2025-03-17T18:51:28.949087421Z" level=info msg="cleaning up dead shim"
Mar 17 18:51:28.956121 env[1410]: time="2025-03-17T18:51:28.956089293Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:51:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3103 runtime=io.containerd.runc.v2\n"
Mar 17 18:51:29.817974 env[1410]: time="2025-03-17T18:51:29.817919918Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:51:29.855737 env[1410]: time="2025-03-17T18:51:29.855693670Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\""
Mar 17 18:51:29.856421 env[1410]: time="2025-03-17T18:51:29.856380467Z" level=info msg="StartContainer for \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\""
Mar 17 18:51:29.879688 systemd[1]: Started cri-containerd-536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8.scope.
Mar 17 18:51:29.901945 systemd[1]: cri-containerd-536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8.scope: Deactivated successfully.
Mar 17 18:51:29.918545 env[1410]: time="2025-03-17T18:51:29.918506324Z" level=info msg="StartContainer for \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\" returns successfully"
Mar 17 18:51:29.935967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8-rootfs.mount: Deactivated successfully.
Mar 17 18:51:29.959704 env[1410]: time="2025-03-17T18:51:29.959657763Z" level=info msg="shim disconnected" id=536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8
Mar 17 18:51:29.959914 env[1410]: time="2025-03-17T18:51:29.959705463Z" level=warning msg="cleaning up after shim disconnected" id=536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8 namespace=k8s.io
Mar 17 18:51:29.959914 env[1410]: time="2025-03-17T18:51:29.959716963Z" level=info msg="cleaning up dead shim"
Mar 17 18:51:29.967119 env[1410]: time="2025-03-17T18:51:29.967083934Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:51:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3157 runtime=io.containerd.runc.v2\n"
Mar 17 18:51:30.823618 env[1410]: time="2025-03-17T18:51:30.823296540Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:51:30.862739 env[1410]: time="2025-03-17T18:51:30.862700088Z" level=info msg="CreateContainer within sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\""
Mar 17 18:51:30.863260 env[1410]: time="2025-03-17T18:51:30.863224186Z" level=info msg="StartContainer for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\""
Mar 17 18:51:30.887107 systemd[1]: run-containerd-runc-k8s.io-92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d-runc.csJjNb.mount: Deactivated successfully.
Mar 17 18:51:30.892985 systemd[1]: Started cri-containerd-92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d.scope.
Mar 17 18:51:30.924248 env[1410]: time="2025-03-17T18:51:30.924206252Z" level=info msg="StartContainer for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" returns successfully"
Mar 17 18:51:31.038889 kubelet[2533]: I0317 18:51:31.038853 2533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:51:31.073175 kubelet[2533]: I0317 18:51:31.073036 2533 topology_manager.go:215] "Topology Admit Handler" podUID="b8201f46-07ab-4e7e-8b94-5f2b8258c179" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jqh2v"
Mar 17 18:51:31.080062 systemd[1]: Created slice kubepods-burstable-podb8201f46_07ab_4e7e_8b94_5f2b8258c179.slice.
Mar 17 18:51:31.087502 kubelet[2533]: I0317 18:51:31.087476 2533 topology_manager.go:215] "Topology Admit Handler" podUID="8d0d159b-9cf7-47b4-b603-8b4e1dad2361" podNamespace="kube-system" podName="coredns-7db6d8ff4d-26nrr"
Mar 17 18:51:31.095463 systemd[1]: Created slice kubepods-burstable-pod8d0d159b_9cf7_47b4_b603_8b4e1dad2361.slice.
Mar 17 18:51:31.146603 kubelet[2533]: I0317 18:51:31.146567 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d0d159b-9cf7-47b4-b603-8b4e1dad2361-config-volume\") pod \"coredns-7db6d8ff4d-26nrr\" (UID: \"8d0d159b-9cf7-47b4-b603-8b4e1dad2361\") " pod="kube-system/coredns-7db6d8ff4d-26nrr"
Mar 17 18:51:31.146776 kubelet[2533]: I0317 18:51:31.146610 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmn2s\" (UniqueName: \"kubernetes.io/projected/8d0d159b-9cf7-47b4-b603-8b4e1dad2361-kube-api-access-qmn2s\") pod \"coredns-7db6d8ff4d-26nrr\" (UID: \"8d0d159b-9cf7-47b4-b603-8b4e1dad2361\") " pod="kube-system/coredns-7db6d8ff4d-26nrr"
Mar 17 18:51:31.146776 kubelet[2533]: I0317 18:51:31.146635 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8201f46-07ab-4e7e-8b94-5f2b8258c179-config-volume\") pod \"coredns-7db6d8ff4d-jqh2v\" (UID: \"b8201f46-07ab-4e7e-8b94-5f2b8258c179\") " pod="kube-system/coredns-7db6d8ff4d-jqh2v"
Mar 17 18:51:31.146776 kubelet[2533]: I0317 18:51:31.146656 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vrk\" (UniqueName: \"kubernetes.io/projected/b8201f46-07ab-4e7e-8b94-5f2b8258c179-kube-api-access-76vrk\") pod \"coredns-7db6d8ff4d-jqh2v\" (UID: \"b8201f46-07ab-4e7e-8b94-5f2b8258c179\") " pod="kube-system/coredns-7db6d8ff4d-jqh2v"
Mar 17 18:51:31.389336 env[1410]: time="2025-03-17T18:51:31.388794391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jqh2v,Uid:b8201f46-07ab-4e7e-8b94-5f2b8258c179,Namespace:kube-system,Attempt:0,}"
Mar 17 18:51:31.401894 env[1410]: time="2025-03-17T18:51:31.401862942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-26nrr,Uid:8d0d159b-9cf7-47b4-b603-8b4e1dad2361,Namespace:kube-system,Attempt:0,}"
Mar 17 18:51:33.639447 systemd-networkd[1564]: cilium_host: Link UP
Mar 17 18:51:33.646914 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:51:33.646966 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:51:33.647237 systemd-networkd[1564]: cilium_net: Link UP
Mar 17 18:51:33.647470 systemd-networkd[1564]: cilium_net: Gained carrier
Mar 17 18:51:33.648307 systemd-networkd[1564]: cilium_host: Gained carrier
Mar 17 18:51:33.764698 systemd-networkd[1564]: cilium_vxlan: Link UP
Mar 17 18:51:33.764707 systemd-networkd[1564]: cilium_vxlan: Gained carrier
Mar 17 18:51:34.096179 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:51:34.272263 systemd-networkd[1564]: cilium_host: Gained IPv6LL
Mar 17 18:51:34.592232 systemd-networkd[1564]: cilium_net: Gained IPv6LL
Mar 17 18:51:35.104213 systemd-networkd[1564]: cilium_vxlan: Gained IPv6LL
Mar 17 18:51:35.147558 systemd-networkd[1564]: lxc_health: Link UP
Mar 17 18:51:35.164894 systemd-networkd[1564]: lxc_health: Gained carrier
Mar 17 18:51:35.165122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:51:35.464460 systemd-networkd[1564]: lxc6a933ac50ebc: Link UP
Mar 17 18:51:35.474242 kernel: eth0: renamed from tmpbe738
Mar 17 18:51:35.484323 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6a933ac50ebc: link becomes ready
Mar 17 18:51:35.483731 systemd-networkd[1564]: lxc6a933ac50ebc: Gained carrier
Mar 17 18:51:35.505213 systemd-networkd[1564]: lxcfa7dda01d2fa: Link UP
Mar 17 18:51:35.511119 kernel: eth0: renamed from tmp48d42
Mar 17 18:51:35.523153 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfa7dda01d2fa: link becomes ready
Mar 17 18:51:35.522902 systemd-networkd[1564]: lxcfa7dda01d2fa: Gained carrier
Mar 17 18:51:36.576295 systemd-networkd[1564]: lxc6a933ac50ebc: Gained IPv6LL
Mar 17 18:51:37.011164 kubelet[2533]: I0317 18:51:37.010990 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sq97j" podStartSLOduration=16.226212215 podStartE2EDuration="28.010969375s" podCreationTimestamp="2025-03-17 18:51:09 +0000 UTC" firstStartedPulling="2025-03-17 18:51:11.076499021 +0000 UTC m=+16.476782736" lastFinishedPulling="2025-03-17 18:51:22.861256281 +0000 UTC m=+28.261539896" observedRunningTime="2025-03-17 18:51:31.843187274 +0000 UTC m=+37.243470989" watchObservedRunningTime="2025-03-17 18:51:37.010969375 +0000 UTC m=+42.411252990"
Mar 17 18:51:37.153296 systemd-networkd[1564]: lxc_health: Gained IPv6LL
Mar 17 18:51:37.600250 systemd-networkd[1564]: lxcfa7dda01d2fa: Gained IPv6LL
Mar 17 18:51:39.185394 env[1410]: time="2025-03-17T18:51:39.185303006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:51:39.185984 env[1410]: time="2025-03-17T18:51:39.185399506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:51:39.185984 env[1410]: time="2025-03-17T18:51:39.185425706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:51:39.185984 env[1410]: time="2025-03-17T18:51:39.185626905Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48d42e17252e17c028de2d867af576615255010d0688b63709e748ba23a5856e pid=3706 runtime=io.containerd.runc.v2
Mar 17 18:51:39.202195 env[1410]: time="2025-03-17T18:51:39.202133850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:51:39.202387 env[1410]: time="2025-03-17T18:51:39.202359849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:51:39.202493 env[1410]: time="2025-03-17T18:51:39.202469749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:51:39.202769 env[1410]: time="2025-03-17T18:51:39.202725548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be738650f6cae9678d939f187889198f150e0392645dfb8d8ec4c0e614092dfc pid=3723 runtime=io.containerd.runc.v2
Mar 17 18:51:39.221250 systemd[1]: run-containerd-runc-k8s.io-48d42e17252e17c028de2d867af576615255010d0688b63709e748ba23a5856e-runc.JE6EDF.mount: Deactivated successfully.
Mar 17 18:51:39.226449 systemd[1]: Started cri-containerd-48d42e17252e17c028de2d867af576615255010d0688b63709e748ba23a5856e.scope.
Mar 17 18:51:39.259481 systemd[1]: run-containerd-runc-k8s.io-be738650f6cae9678d939f187889198f150e0392645dfb8d8ec4c0e614092dfc-runc.um75mE.mount: Deactivated successfully.
Mar 17 18:51:39.269681 systemd[1]: Started cri-containerd-be738650f6cae9678d939f187889198f150e0392645dfb8d8ec4c0e614092dfc.scope.
Mar 17 18:51:39.361761 env[1410]: time="2025-03-17T18:51:39.361717421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-26nrr,Uid:8d0d159b-9cf7-47b4-b603-8b4e1dad2361,Namespace:kube-system,Attempt:0,} returns sandbox id \"48d42e17252e17c028de2d867af576615255010d0688b63709e748ba23a5856e\""
Mar 17 18:51:39.364823 env[1410]: time="2025-03-17T18:51:39.364781211Z" level=info msg="CreateContainer within sandbox \"48d42e17252e17c028de2d867af576615255010d0688b63709e748ba23a5856e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:51:39.381986 env[1410]: time="2025-03-17T18:51:39.381939654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jqh2v,Uid:b8201f46-07ab-4e7e-8b94-5f2b8258c179,Namespace:kube-system,Attempt:0,} returns sandbox id \"be738650f6cae9678d939f187889198f150e0392645dfb8d8ec4c0e614092dfc\""
Mar 17 18:51:39.387245 env[1410]: time="2025-03-17T18:51:39.387207536Z" level=info msg="CreateContainer within sandbox \"be738650f6cae9678d939f187889198f150e0392645dfb8d8ec4c0e614092dfc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:51:39.398302 env[1410]: time="2025-03-17T18:51:39.398261299Z" level=info msg="CreateContainer within sandbox \"48d42e17252e17c028de2d867af576615255010d0688b63709e748ba23a5856e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b85a89b7c5a587d09c3abfeb5c1b4e9813e8a2e80b37524cb13707c1c77e745\""
Mar 17 18:51:39.399175 env[1410]: time="2025-03-17T18:51:39.398800998Z" level=info msg="StartContainer for \"4b85a89b7c5a587d09c3abfeb5c1b4e9813e8a2e80b37524cb13707c1c77e745\""
Mar 17 18:51:39.418342 systemd[1]: Started cri-containerd-4b85a89b7c5a587d09c3abfeb5c1b4e9813e8a2e80b37524cb13707c1c77e745.scope.
Mar 17 18:51:39.435266 env[1410]: time="2025-03-17T18:51:39.435205677Z" level=info msg="CreateContainer within sandbox \"be738650f6cae9678d939f187889198f150e0392645dfb8d8ec4c0e614092dfc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"295ce3d87fff06c3357ef164bc94cf034466212abbeaaacb532827f012635f4d\""
Mar 17 18:51:39.436958 env[1410]: time="2025-03-17T18:51:39.436028774Z" level=info msg="StartContainer for \"295ce3d87fff06c3357ef164bc94cf034466212abbeaaacb532827f012635f4d\""
Mar 17 18:51:39.483166 systemd[1]: Started cri-containerd-295ce3d87fff06c3357ef164bc94cf034466212abbeaaacb532827f012635f4d.scope.
Mar 17 18:51:39.496141 env[1410]: time="2025-03-17T18:51:39.496098275Z" level=info msg="StartContainer for \"4b85a89b7c5a587d09c3abfeb5c1b4e9813e8a2e80b37524cb13707c1c77e745\" returns successfully"
Mar 17 18:51:39.527874 env[1410]: time="2025-03-17T18:51:39.527820770Z" level=info msg="StartContainer for \"295ce3d87fff06c3357ef164bc94cf034466212abbeaaacb532827f012635f4d\" returns successfully"
Mar 17 18:51:39.855323 kubelet[2533]: I0317 18:51:39.855172 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-26nrr" podStartSLOduration=30.855138584 podStartE2EDuration="30.855138584s" podCreationTimestamp="2025-03-17 18:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:39.854135087 +0000 UTC m=+45.254418802" watchObservedRunningTime="2025-03-17 18:51:39.855138584 +0000 UTC m=+45.255422199"
Mar 17 18:51:39.895705 kubelet[2533]: I0317 18:51:39.895643 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jqh2v" podStartSLOduration=30.895613149 podStartE2EDuration="30.895613149s" podCreationTimestamp="2025-03-17 18:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:39.872702225 +0000 UTC m=+45.272985940" watchObservedRunningTime="2025-03-17 18:51:39.895613149 +0000 UTC m=+45.295896864"
Mar 17 18:53:24.062040 systemd[1]: Started sshd@5-10.200.8.24:22-10.200.16.10:52050.service.
Mar 17 18:53:24.685791 sshd[3883]: Accepted publickey for core from 10.200.16.10 port 52050 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:24.687385 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:24.692517 systemd[1]: Started session-8.scope.
Mar 17 18:53:24.692740 systemd-logind[1402]: New session 8 of user core.
Mar 17 18:53:25.208141 sshd[3883]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:25.211695 systemd[1]: sshd@5-10.200.8.24:22-10.200.16.10:52050.service: Deactivated successfully.
Mar 17 18:53:25.212780 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:53:25.213448 systemd-logind[1402]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:53:25.214213 systemd-logind[1402]: Removed session 8.
Mar 17 18:53:30.314991 systemd[1]: Started sshd@6-10.200.8.24:22-10.200.16.10:38416.service.
Mar 17 18:53:30.936783 sshd[3896]: Accepted publickey for core from 10.200.16.10 port 38416 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:30.938494 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:30.943232 systemd[1]: Started session-9.scope.
Mar 17 18:53:30.943708 systemd-logind[1402]: New session 9 of user core.
Mar 17 18:53:31.444755 sshd[3896]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:31.448929 systemd[1]: sshd@6-10.200.8.24:22-10.200.16.10:38416.service: Deactivated successfully.
Mar 17 18:53:31.449788 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:53:31.450286 systemd-logind[1402]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:53:31.451048 systemd-logind[1402]: Removed session 9.
Mar 17 18:53:36.550172 systemd[1]: Started sshd@7-10.200.8.24:22-10.200.16.10:38422.service.
Mar 17 18:53:37.173465 sshd[3908]: Accepted publickey for core from 10.200.16.10 port 38422 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:37.175017 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:37.180440 systemd[1]: Started session-10.scope.
Mar 17 18:53:37.180923 systemd-logind[1402]: New session 10 of user core.
Mar 17 18:53:37.681208 sshd[3908]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:37.684575 systemd[1]: sshd@7-10.200.8.24:22-10.200.16.10:38422.service: Deactivated successfully.
Mar 17 18:53:37.685721 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:53:37.686648 systemd-logind[1402]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:53:37.687657 systemd-logind[1402]: Removed session 10.
Mar 17 18:53:42.786470 systemd[1]: Started sshd@8-10.200.8.24:22-10.200.16.10:50412.service.
Mar 17 18:53:43.409394 sshd[3922]: Accepted publickey for core from 10.200.16.10 port 50412 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:43.410990 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:43.415888 systemd-logind[1402]: New session 11 of user core.
Mar 17 18:53:43.416903 systemd[1]: Started session-11.scope.
Mar 17 18:53:43.904237 sshd[3922]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:43.907733 systemd[1]: sshd@8-10.200.8.24:22-10.200.16.10:50412.service: Deactivated successfully.
Mar 17 18:53:43.908781 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:53:43.909657 systemd-logind[1402]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:53:43.910650 systemd-logind[1402]: Removed session 11.
Mar 17 18:53:44.009864 systemd[1]: Started sshd@9-10.200.8.24:22-10.200.16.10:50428.service.
Mar 17 18:53:44.634411 sshd[3935]: Accepted publickey for core from 10.200.16.10 port 50428 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:44.635976 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:44.641388 systemd[1]: Started session-12.scope.
Mar 17 18:53:44.642043 systemd-logind[1402]: New session 12 of user core.
Mar 17 18:53:45.171579 sshd[3935]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:45.174853 systemd[1]: sshd@9-10.200.8.24:22-10.200.16.10:50428.service: Deactivated successfully.
Mar 17 18:53:45.175787 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:53:45.176526 systemd-logind[1402]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:53:45.177335 systemd-logind[1402]: Removed session 12.
Mar 17 18:53:45.278990 systemd[1]: Started sshd@10-10.200.8.24:22-10.200.16.10:50434.service.
Mar 17 18:53:45.903156 sshd[3945]: Accepted publickey for core from 10.200.16.10 port 50434 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:45.904764 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:45.910161 systemd[1]: Started session-13.scope.
Mar 17 18:53:45.911056 systemd-logind[1402]: New session 13 of user core.
Mar 17 18:53:46.406159 sshd[3945]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:46.409638 systemd[1]: sshd@10-10.200.8.24:22-10.200.16.10:50434.service: Deactivated successfully.
Mar 17 18:53:46.410749 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:53:46.411601 systemd-logind[1402]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:53:46.412531 systemd-logind[1402]: Removed session 13.
Mar 17 18:53:51.512065 systemd[1]: Started sshd@11-10.200.8.24:22-10.200.16.10:38258.service.
Mar 17 18:53:52.135322 sshd[3956]: Accepted publickey for core from 10.200.16.10 port 38258 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:52.137015 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:52.141789 systemd-logind[1402]: New session 14 of user core.
Mar 17 18:53:52.142296 systemd[1]: Started session-14.scope.
Mar 17 18:53:52.632246 sshd[3956]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:52.635603 systemd[1]: sshd@11-10.200.8.24:22-10.200.16.10:38258.service: Deactivated successfully.
Mar 17 18:53:52.636591 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:53:52.637388 systemd-logind[1402]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:53:52.638216 systemd-logind[1402]: Removed session 14.
Mar 17 18:53:57.739064 systemd[1]: Started sshd@12-10.200.8.24:22-10.200.16.10:38266.service.
Mar 17 18:53:58.366940 sshd[3971]: Accepted publickey for core from 10.200.16.10 port 38266 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:58.368712 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:58.373823 systemd-logind[1402]: New session 15 of user core.
Mar 17 18:53:58.374370 systemd[1]: Started session-15.scope.
Mar 17 18:53:58.866426 sshd[3971]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:58.869915 systemd[1]: sshd@12-10.200.8.24:22-10.200.16.10:38266.service: Deactivated successfully.
Mar 17 18:53:58.870865 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:53:58.871557 systemd-logind[1402]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:53:58.872369 systemd-logind[1402]: Removed session 15.
Mar 17 18:53:58.973117 systemd[1]: Started sshd@13-10.200.8.24:22-10.200.16.10:45704.service.
Mar 17 18:53:59.596978 sshd[3982]: Accepted publickey for core from 10.200.16.10 port 45704 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:53:59.598476 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:59.604794 systemd[1]: Started session-16.scope.
Mar 17 18:53:59.605948 systemd-logind[1402]: New session 16 of user core.
Mar 17 18:54:00.165691 sshd[3982]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:00.169114 systemd[1]: sshd@13-10.200.8.24:22-10.200.16.10:45704.service: Deactivated successfully.
Mar 17 18:54:00.170242 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:54:00.171050 systemd-logind[1402]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:54:00.172050 systemd-logind[1402]: Removed session 16.
Mar 17 18:54:00.271806 systemd[1]: Started sshd@14-10.200.8.24:22-10.200.16.10:45706.service.
Mar 17 18:54:00.895634 sshd[3995]: Accepted publickey for core from 10.200.16.10 port 45706 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:00.897263 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:00.902425 systemd-logind[1402]: New session 17 of user core.
Mar 17 18:54:00.903061 systemd[1]: Started session-17.scope.
Mar 17 18:54:02.818471 sshd[3995]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:02.822839 systemd[1]: sshd@14-10.200.8.24:22-10.200.16.10:45706.service: Deactivated successfully.
Mar 17 18:54:02.823934 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:54:02.824657 systemd-logind[1402]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:54:02.825614 systemd-logind[1402]: Removed session 17.
Mar 17 18:54:02.923335 systemd[1]: Started sshd@15-10.200.8.24:22-10.200.16.10:45712.service.
Mar 17 18:54:03.546675 sshd[4012]: Accepted publickey for core from 10.200.16.10 port 45712 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:03.548223 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:03.553482 systemd[1]: Started session-18.scope.
Mar 17 18:54:03.554113 systemd-logind[1402]: New session 18 of user core.
Mar 17 18:54:04.147133 sshd[4012]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:04.150080 systemd[1]: sshd@15-10.200.8.24:22-10.200.16.10:45712.service: Deactivated successfully.
Mar 17 18:54:04.151002 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:54:04.151758 systemd-logind[1402]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:54:04.152586 systemd-logind[1402]: Removed session 18.
Mar 17 18:54:04.252764 systemd[1]: Started sshd@16-10.200.8.24:22-10.200.16.10:45718.service.
Mar 17 18:54:04.876296 sshd[4021]: Accepted publickey for core from 10.200.16.10 port 45718 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:04.878015 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:04.883177 systemd[1]: Started session-19.scope.
Mar 17 18:54:04.883802 systemd-logind[1402]: New session 19 of user core.
Mar 17 18:54:05.373812 sshd[4021]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:05.377251 systemd[1]: sshd@16-10.200.8.24:22-10.200.16.10:45718.service: Deactivated successfully.
Mar 17 18:54:05.378360 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:54:05.379185 systemd-logind[1402]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:54:05.380229 systemd-logind[1402]: Removed session 19.
Mar 17 18:54:10.480685 systemd[1]: Started sshd@17-10.200.8.24:22-10.200.16.10:51192.service.
Mar 17 18:54:11.107050 sshd[4038]: Accepted publickey for core from 10.200.16.10 port 51192 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:11.108834 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:11.114134 systemd[1]: Started session-20.scope.
Mar 17 18:54:11.114564 systemd-logind[1402]: New session 20 of user core.
Mar 17 18:54:11.608436 sshd[4038]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:11.611187 systemd[1]: sshd@17-10.200.8.24:22-10.200.16.10:51192.service: Deactivated successfully.
Mar 17 18:54:11.612162 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:54:11.612839 systemd-logind[1402]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:54:11.613706 systemd-logind[1402]: Removed session 20.
Mar 17 18:54:16.727692 systemd[1]: Started sshd@18-10.200.8.24:22-10.200.16.10:51202.service.
Mar 17 18:54:17.352672 sshd[4050]: Accepted publickey for core from 10.200.16.10 port 51202 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:17.354462 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:17.360517 systemd[1]: Started session-21.scope.
Mar 17 18:54:17.361285 systemd-logind[1402]: New session 21 of user core.
Mar 17 18:54:17.849430 sshd[4050]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:17.852309 systemd[1]: sshd@18-10.200.8.24:22-10.200.16.10:51202.service: Deactivated successfully.
Mar 17 18:54:17.853249 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:54:17.853870 systemd-logind[1402]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:54:17.854731 systemd-logind[1402]: Removed session 21.
Mar 17 18:54:22.956882 systemd[1]: Started sshd@19-10.200.8.24:22-10.200.16.10:56144.service.
Mar 17 18:54:23.588240 sshd[4061]: Accepted publickey for core from 10.200.16.10 port 56144 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:23.590057 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:23.595202 systemd-logind[1402]: New session 22 of user core.
Mar 17 18:54:23.595397 systemd[1]: Started session-22.scope.
Mar 17 18:54:24.084536 sshd[4061]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:24.087582 systemd[1]: sshd@19-10.200.8.24:22-10.200.16.10:56144.service: Deactivated successfully.
Mar 17 18:54:24.088558 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:54:24.089247 systemd-logind[1402]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:54:24.090115 systemd-logind[1402]: Removed session 22.
Mar 17 18:54:24.188859 systemd[1]: Started sshd@20-10.200.8.24:22-10.200.16.10:56160.service.
Mar 17 18:54:24.814737 sshd[4073]: Accepted publickey for core from 10.200.16.10 port 56160 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg
Mar 17 18:54:24.816261 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:54:24.821311 systemd[1]: Started session-23.scope.
Mar 17 18:54:24.821753 systemd-logind[1402]: New session 23 of user core.
Mar 17 18:54:26.620244 systemd[1]: run-containerd-runc-k8s.io-92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d-runc.yiovoi.mount: Deactivated successfully.
Mar 17 18:54:26.629318 env[1410]: time="2025-03-17T18:54:26.627874036Z" level=info msg="StopContainer for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" with timeout 30 (s)"
Mar 17 18:54:26.630673 env[1410]: time="2025-03-17T18:54:26.630633493Z" level=info msg="Stop container \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" with signal terminated"
Mar 17 18:54:26.647248 systemd[1]: cri-containerd-aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953.scope: Deactivated successfully.
Mar 17 18:54:26.649824 env[1410]: time="2025-03-17T18:54:26.649768089Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:54:26.658644 env[1410]: time="2025-03-17T18:54:26.658609372Z" level=info msg="StopContainer for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" with timeout 2 (s)"
Mar 17 18:54:26.659008 env[1410]: time="2025-03-17T18:54:26.658981580Z" level=info msg="Stop container \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" with signal terminated"
Mar 17 18:54:26.668139 systemd-networkd[1564]: lxc_health: Link DOWN
Mar 17 18:54:26.668148 systemd-networkd[1564]: lxc_health: Lost carrier
Mar 17 18:54:26.679057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953-rootfs.mount: Deactivated successfully.
Mar 17 18:54:26.688383 systemd[1]: cri-containerd-92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d.scope: Deactivated successfully.
Mar 17 18:54:26.688659 systemd[1]: cri-containerd-92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d.scope: Consumed 7.070s CPU time.
Mar 17 18:54:26.711724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d-rootfs.mount: Deactivated successfully.
Mar 17 18:54:26.753471 env[1410]: time="2025-03-17T18:54:26.751719200Z" level=info msg="shim disconnected" id=92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d
Mar 17 18:54:26.753471 env[1410]: time="2025-03-17T18:54:26.751849902Z" level=warning msg="cleaning up after shim disconnected" id=92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d namespace=k8s.io
Mar 17 18:54:26.753471 env[1410]: time="2025-03-17T18:54:26.751864003Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:26.761351 env[1410]: time="2025-03-17T18:54:26.761297298Z" level=info msg="shim disconnected" id=aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953
Mar 17 18:54:26.761493 env[1410]: time="2025-03-17T18:54:26.761356199Z" level=warning msg="cleaning up after shim disconnected" id=aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953 namespace=k8s.io
Mar 17 18:54:26.761493 env[1410]: time="2025-03-17T18:54:26.761368399Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:26.762570 env[1410]: time="2025-03-17T18:54:26.762536924Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4144 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:26.766975 env[1410]: time="2025-03-17T18:54:26.766937415Z" level=info msg="StopContainer for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" returns successfully"
Mar 17 18:54:26.769721 env[1410]: time="2025-03-17T18:54:26.769688372Z" level=info msg="StopPodSandbox for \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\""
Mar 17 18:54:26.769919 env[1410]: time="2025-03-17T18:54:26.769884876Z" level=info msg="Container to stop \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:54:26.770027 env[1410]: time="2025-03-17T18:54:26.770006078Z" level=info msg="Container to stop \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:54:26.770142 env[1410]: time="2025-03-17T18:54:26.770118581Z" level=info msg="Container to stop \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:54:26.770251 env[1410]: time="2025-03-17T18:54:26.770228683Z" level=info msg="Container to stop \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:54:26.770348 env[1410]: time="2025-03-17T18:54:26.770329285Z" level=info msg="Container to stop \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:54:26.774108 env[1410]: time="2025-03-17T18:54:26.774077263Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4158 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:26.777703 systemd[1]: cri-containerd-50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4.scope: Deactivated successfully.
Mar 17 18:54:26.779262 env[1410]: time="2025-03-17T18:54:26.779230169Z" level=info msg="StopContainer for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" returns successfully"
Mar 17 18:54:26.781734 env[1410]: time="2025-03-17T18:54:26.781705220Z" level=info msg="StopPodSandbox for \"b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b\""
Mar 17 18:54:26.781834 env[1410]: time="2025-03-17T18:54:26.781767322Z" level=info msg="Container to stop \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:54:26.805577 systemd[1]: cri-containerd-b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b.scope: Deactivated successfully.
Mar 17 18:54:26.826360 env[1410]: time="2025-03-17T18:54:26.826303044Z" level=info msg="shim disconnected" id=50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4
Mar 17 18:54:26.827047 env[1410]: time="2025-03-17T18:54:26.827019059Z" level=warning msg="cleaning up after shim disconnected" id=50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4 namespace=k8s.io
Mar 17 18:54:26.827203 env[1410]: time="2025-03-17T18:54:26.827185162Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:26.836029 env[1410]: time="2025-03-17T18:54:26.835968844Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4210 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:26.836366 env[1410]: time="2025-03-17T18:54:26.836332751Z" level=info msg="TearDown network for sandbox \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" successfully"
Mar 17 18:54:26.836444 env[1410]: time="2025-03-17T18:54:26.836368952Z" level=info msg="StopPodSandbox for \"50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4\" returns successfully"
Mar 17 18:54:26.838639 env[1410]: time="2025-03-17T18:54:26.837929784Z" level=info msg="shim disconnected" id=b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b
Mar 17 18:54:26.838783 env[1410]: time="2025-03-17T18:54:26.838761802Z" level=warning msg="cleaning up after shim disconnected" id=b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b namespace=k8s.io
Mar 17 18:54:26.839002 env[1410]: time="2025-03-17T18:54:26.838972306Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:26.851371 env[1410]: time="2025-03-17T18:54:26.850392242Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4224 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:26.851371 env[1410]: time="2025-03-17T18:54:26.850694649Z" level=info msg="TearDown network for sandbox \"b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b\" successfully"
Mar 17 18:54:26.851371 env[1410]: time="2025-03-17T18:54:26.850719049Z" level=info msg="StopPodSandbox for \"b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b\" returns successfully"
Mar 17 18:54:26.955819 kubelet[2533]: I0317 18:54:26.955180 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-lib-modules\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.955819 kubelet[2533]: I0317 18:54:26.955224 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-etc-cni-netd\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.955819 kubelet[2533]: I0317 18:54:26.955238 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.955819 kubelet[2533]: I0317 18:54:26.955250 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hubble-tls\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.955819 kubelet[2533]: I0317 18:54:26.955311 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-run\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.955819 kubelet[2533]: I0317 18:54:26.955350 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-bpf-maps\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956529 kubelet[2533]: I0317 18:54:26.955372 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-net\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956529 kubelet[2533]: I0317 18:54:26.955398 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rczzw\" (UniqueName: \"kubernetes.io/projected/2c18a260-96f8-4257-bb35-f769e1b63cb5-kube-api-access-rczzw\") pod \"2c18a260-96f8-4257-bb35-f769e1b63cb5\" (UID: \"2c18a260-96f8-4257-bb35-f769e1b63cb5\") "
Mar 17 18:54:26.956529 kubelet[2533]: I0317 18:54:26.955435 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c18a260-96f8-4257-bb35-f769e1b63cb5-cilium-config-path\") pod \"2c18a260-96f8-4257-bb35-f769e1b63cb5\" (UID: \"2c18a260-96f8-4257-bb35-f769e1b63cb5\") "
Mar 17 18:54:26.956529 kubelet[2533]: I0317 18:54:26.955460 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-config-path\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956529 kubelet[2533]: I0317 18:54:26.955481 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hostproc\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956529 kubelet[2533]: I0317 18:54:26.955512 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-kernel\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956779 kubelet[2533]: I0317 18:54:26.955534 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-cgroup\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956779 kubelet[2533]: I0317 18:54:26.955555 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cni-path\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956779 kubelet[2533]: I0317 18:54:26.955591 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94qb4\" (UniqueName: \"kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-kube-api-access-94qb4\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956779 kubelet[2533]: I0317 18:54:26.955622 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-xtables-lock\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956779 kubelet[2533]: I0317 18:54:26.955645 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4232e0b6-4788-48aa-b36c-c4dddd7c8182-clustermesh-secrets\") pod \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\" (UID: \"4232e0b6-4788-48aa-b36c-c4dddd7c8182\") "
Mar 17 18:54:26.956779 kubelet[2533]: I0317 18:54:26.955705 2533 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-lib-modules\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:26.960167 kubelet[2533]: I0317 18:54:26.958580 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.960167 kubelet[2533]: I0317 18:54:26.960116 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hostproc" (OuterVolumeSpecName: "hostproc") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.960430 kubelet[2533]: I0317 18:54:26.960151 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.960430 kubelet[2533]: I0317 18:54:26.960391 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.960620 kubelet[2533]: I0317 18:54:26.960413 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cni-path" (OuterVolumeSpecName: "cni-path") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.961172 kubelet[2533]: I0317 18:54:26.961139 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:54:26.961268 kubelet[2533]: I0317 18:54:26.961194 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.961268 kubelet[2533]: I0317 18:54:26.961217 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.961268 kubelet[2533]: I0317 18:54:26.961236 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.962048 kubelet[2533]: I0317 18:54:26.962018 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:54:26.962212 kubelet[2533]: I0317 18:54:26.962190 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:54:26.964720 kubelet[2533]: I0317 18:54:26.964694 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4232e0b6-4788-48aa-b36c-c4dddd7c8182-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:54:26.965296 kubelet[2533]: I0317 18:54:26.965209 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c18a260-96f8-4257-bb35-f769e1b63cb5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c18a260-96f8-4257-bb35-f769e1b63cb5" (UID: "2c18a260-96f8-4257-bb35-f769e1b63cb5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:54:26.967472 kubelet[2533]: I0317 18:54:26.967437 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-kube-api-access-94qb4" (OuterVolumeSpecName: "kube-api-access-94qb4") pod "4232e0b6-4788-48aa-b36c-c4dddd7c8182" (UID: "4232e0b6-4788-48aa-b36c-c4dddd7c8182"). InnerVolumeSpecName "kube-api-access-94qb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:54:26.968790 kubelet[2533]: I0317 18:54:26.968755 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c18a260-96f8-4257-bb35-f769e1b63cb5-kube-api-access-rczzw" (OuterVolumeSpecName: "kube-api-access-rczzw") pod "2c18a260-96f8-4257-bb35-f769e1b63cb5" (UID: "2c18a260-96f8-4257-bb35-f769e1b63cb5"). InnerVolumeSpecName "kube-api-access-rczzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:54:27.055950 kubelet[2533]: I0317 18:54:27.055903 2533 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-xtables-lock\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.055950 kubelet[2533]: I0317 18:54:27.055936 2533 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4232e0b6-4788-48aa-b36c-c4dddd7c8182-clustermesh-secrets\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.055950 kubelet[2533]: I0317 18:54:27.055950 2533 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-94qb4\" (UniqueName: \"kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-kube-api-access-94qb4\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.055950 kubelet[2533]: I0317 18:54:27.055963 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-run\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.055974 2533 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-bpf-maps\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.055985 2533 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-etc-cni-netd\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.055995 2533 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hubble-tls\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.056005 2533 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-net\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.056015 2533 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rczzw\" (UniqueName: \"kubernetes.io/projected/2c18a260-96f8-4257-bb35-f769e1b63cb5-kube-api-access-rczzw\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.056025 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c18a260-96f8-4257-bb35-f769e1b63cb5-cilium-config-path\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.056035 2533 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-hostproc\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056245 kubelet[2533]: I0317 18:54:27.056047 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-config-path\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056466 kubelet[2533]: I0317 18:54:27.056062 2533 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056466 kubelet[2533]: I0317 18:54:27.056096 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cilium-cgroup\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.056466 kubelet[2533]: I0317 18:54:27.056107 2533 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4232e0b6-4788-48aa-b36c-c4dddd7c8182-cni-path\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:27.176496 kubelet[2533]: I0317 18:54:27.176457 2533 scope.go:117] "RemoveContainer" containerID="aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953"
Mar 17 18:54:27.179111 env[1410]: time="2025-03-17T18:54:27.178727403Z" level=info msg="RemoveContainer for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\""
Mar 17 18:54:27.184877 systemd[1]: Removed slice kubepods-besteffort-pod2c18a260_96f8_4257_bb35_f769e1b63cb5.slice.
Mar 17 18:54:27.189007 env[1410]: time="2025-03-17T18:54:27.188955712Z" level=info msg="RemoveContainer for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" returns successfully" Mar 17 18:54:27.189352 kubelet[2533]: I0317 18:54:27.189327 2533 scope.go:117] "RemoveContainer" containerID="aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953" Mar 17 18:54:27.189927 env[1410]: time="2025-03-17T18:54:27.189794129Z" level=error msg="ContainerStatus for \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\": not found" Mar 17 18:54:27.190176 kubelet[2533]: E0317 18:54:27.190139 2533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\": not found" containerID="aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953" Mar 17 18:54:27.190379 kubelet[2533]: I0317 18:54:27.190295 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953"} err="failed to get container status \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\": rpc error: code = NotFound desc = an error occurred when try to find container \"aed37de8b7b6cebde4f524c346ae3181e9f44738dc177f9f5927f4f990be4953\": not found" Mar 17 18:54:27.190581 kubelet[2533]: I0317 18:54:27.190381 2533 scope.go:117] "RemoveContainer" containerID="92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d" Mar 17 18:54:27.192397 env[1410]: time="2025-03-17T18:54:27.191938173Z" level=info msg="RemoveContainer for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\"" Mar 17 18:54:27.194126 systemd[1]: 
Removed slice kubepods-burstable-pod4232e0b6_4788_48aa_b36c_c4dddd7c8182.slice. Mar 17 18:54:27.194252 systemd[1]: kubepods-burstable-pod4232e0b6_4788_48aa_b36c_c4dddd7c8182.slice: Consumed 7.159s CPU time. Mar 17 18:54:27.199576 env[1410]: time="2025-03-17T18:54:27.199478628Z" level=info msg="RemoveContainer for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" returns successfully" Mar 17 18:54:27.199681 kubelet[2533]: I0317 18:54:27.199663 2533 scope.go:117] "RemoveContainer" containerID="536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8" Mar 17 18:54:27.200627 env[1410]: time="2025-03-17T18:54:27.200600651Z" level=info msg="RemoveContainer for \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\"" Mar 17 18:54:27.209795 env[1410]: time="2025-03-17T18:54:27.209666037Z" level=info msg="RemoveContainer for \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\" returns successfully" Mar 17 18:54:27.211121 kubelet[2533]: I0317 18:54:27.211096 2533 scope.go:117] "RemoveContainer" containerID="4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa" Mar 17 18:54:27.212308 env[1410]: time="2025-03-17T18:54:27.212280790Z" level=info msg="RemoveContainer for \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\"" Mar 17 18:54:27.220366 env[1410]: time="2025-03-17T18:54:27.220329755Z" level=info msg="RemoveContainer for \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\" returns successfully" Mar 17 18:54:27.220838 kubelet[2533]: I0317 18:54:27.220815 2533 scope.go:117] "RemoveContainer" containerID="566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0" Mar 17 18:54:27.221875 env[1410]: time="2025-03-17T18:54:27.221841786Z" level=info msg="RemoveContainer for \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\"" Mar 17 18:54:27.230387 env[1410]: time="2025-03-17T18:54:27.230354961Z" level=info msg="RemoveContainer for 
\"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\" returns successfully" Mar 17 18:54:27.231031 kubelet[2533]: I0317 18:54:27.230997 2533 scope.go:117] "RemoveContainer" containerID="2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36" Mar 17 18:54:27.232055 env[1410]: time="2025-03-17T18:54:27.232030295Z" level=info msg="RemoveContainer for \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\"" Mar 17 18:54:27.240705 env[1410]: time="2025-03-17T18:54:27.240671172Z" level=info msg="RemoveContainer for \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\" returns successfully" Mar 17 18:54:27.240844 kubelet[2533]: I0317 18:54:27.240823 2533 scope.go:117] "RemoveContainer" containerID="92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d" Mar 17 18:54:27.241050 env[1410]: time="2025-03-17T18:54:27.240997279Z" level=error msg="ContainerStatus for \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\": not found" Mar 17 18:54:27.241188 kubelet[2533]: E0317 18:54:27.241162 2533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\": not found" containerID="92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d" Mar 17 18:54:27.241262 kubelet[2533]: I0317 18:54:27.241193 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d"} err="failed to get container status \"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"92b3f87e2b7bdd4200a6706f0e691fc8a513841c66fe68dd62719ff29267c30d\": not found" Mar 17 18:54:27.241262 kubelet[2533]: I0317 18:54:27.241218 2533 scope.go:117] "RemoveContainer" containerID="536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8" Mar 17 18:54:27.241503 env[1410]: time="2025-03-17T18:54:27.241452888Z" level=error msg="ContainerStatus for \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\": not found" Mar 17 18:54:27.242186 kubelet[2533]: E0317 18:54:27.242156 2533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\": not found" containerID="536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8" Mar 17 18:54:27.242345 kubelet[2533]: I0317 18:54:27.242251 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8"} err="failed to get container status \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"536e3ec0414a4199e7b4d67c6d0b3cd5002b7a54b92db738c5618d9e334ff3a8\": not found" Mar 17 18:54:27.242345 kubelet[2533]: I0317 18:54:27.242288 2533 scope.go:117] "RemoveContainer" containerID="4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa" Mar 17 18:54:27.242536 env[1410]: time="2025-03-17T18:54:27.242491509Z" level=error msg="ContainerStatus for \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\": not found" Mar 17 18:54:27.242704 kubelet[2533]: E0317 18:54:27.242674 2533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\": not found" containerID="4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa" Mar 17 18:54:27.242790 kubelet[2533]: I0317 18:54:27.242700 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa"} err="failed to get container status \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b32cb4f437508f72c5ee953f07bfcf32e0cdc5b730ad6f5a77c75c3175fedaa\": not found" Mar 17 18:54:27.242790 kubelet[2533]: I0317 18:54:27.242720 2533 scope.go:117] "RemoveContainer" containerID="566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0" Mar 17 18:54:27.242948 env[1410]: time="2025-03-17T18:54:27.242890717Z" level=error msg="ContainerStatus for \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\": not found" Mar 17 18:54:27.243090 kubelet[2533]: E0317 18:54:27.243056 2533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\": not found" containerID="566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0" Mar 17 18:54:27.243177 kubelet[2533]: I0317 18:54:27.243100 2533 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0"} err="failed to get container status \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\": rpc error: code = NotFound desc = an error occurred when try to find container \"566f01fe5f5c54021719e45761f2cd307d484c56dfbb31e566c7254552955fd0\": not found" Mar 17 18:54:27.243177 kubelet[2533]: I0317 18:54:27.243121 2533 scope.go:117] "RemoveContainer" containerID="2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36" Mar 17 18:54:27.243355 env[1410]: time="2025-03-17T18:54:27.243300526Z" level=error msg="ContainerStatus for \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\": not found" Mar 17 18:54:27.243493 kubelet[2533]: E0317 18:54:27.243471 2533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\": not found" containerID="2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36" Mar 17 18:54:27.243567 kubelet[2533]: I0317 18:54:27.243496 2533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36"} err="failed to get container status \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\": rpc error: code = NotFound desc = an error occurred when try to find container \"2255df9dfa1d69f372dee8b45f4a4117f441f60fd5fcc869fd128951ef444b36\": not found" Mar 17 18:54:27.611213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4-rootfs.mount: Deactivated successfully. 
Mar 17 18:54:27.611622 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50c1d237b9f82be7445638a2ce84748f8e3c9c846588dfebd4ddc33abde900b4-shm.mount: Deactivated successfully. Mar 17 18:54:27.611859 systemd[1]: var-lib-kubelet-pods-4232e0b6\x2d4788\x2d48aa\x2db36c\x2dc4dddd7c8182-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:54:27.612083 systemd[1]: var-lib-kubelet-pods-4232e0b6\x2d4788\x2d48aa\x2db36c\x2dc4dddd7c8182-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:54:27.612205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b-rootfs.mount: Deactivated successfully. Mar 17 18:54:27.612298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b40f796494216d39e7466f0f6aeb6e472cf2292035de7321b8810bcff5526f6b-shm.mount: Deactivated successfully. Mar 17 18:54:27.612409 systemd[1]: var-lib-kubelet-pods-2c18a260\x2d96f8\x2d4257\x2dbb35\x2df769e1b63cb5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drczzw.mount: Deactivated successfully. Mar 17 18:54:27.612513 systemd[1]: var-lib-kubelet-pods-4232e0b6\x2d4788\x2d48aa\x2db36c\x2dc4dddd7c8182-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94qb4.mount: Deactivated successfully. Mar 17 18:54:28.655437 sshd[4073]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:28.659138 systemd[1]: sshd@20-10.200.8.24:22-10.200.16.10:56160.service: Deactivated successfully. Mar 17 18:54:28.660584 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:54:28.660625 systemd-logind[1402]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:54:28.662050 systemd-logind[1402]: Removed session 23. 
Mar 17 18:54:28.706420 kubelet[2533]: I0317 18:54:28.706382 2533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c18a260-96f8-4257-bb35-f769e1b63cb5" path="/var/lib/kubelet/pods/2c18a260-96f8-4257-bb35-f769e1b63cb5/volumes" Mar 17 18:54:28.706929 kubelet[2533]: I0317 18:54:28.706906 2533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" path="/var/lib/kubelet/pods/4232e0b6-4788-48aa-b36c-c4dddd7c8182/volumes" Mar 17 18:54:28.762560 systemd[1]: Started sshd@21-10.200.8.24:22-10.200.16.10:53078.service. Mar 17 18:54:29.386783 sshd[4243]: Accepted publickey for core from 10.200.16.10 port 53078 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg Mar 17 18:54:29.388551 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:29.393527 systemd[1]: Started session-24.scope. Mar 17 18:54:29.394167 systemd-logind[1402]: New session 24 of user core. Mar 17 18:54:29.822448 kubelet[2533]: E0317 18:54:29.822408 2533 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:30.096853 kubelet[2533]: I0317 18:54:30.096712 2533 topology_manager.go:215] "Topology Admit Handler" podUID="6d3f6657-201d-4314-a219-82654560fc52" podNamespace="kube-system" podName="cilium-4bvdr" Mar 17 18:54:30.097121 kubelet[2533]: E0317 18:54:30.097100 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" containerName="mount-cgroup" Mar 17 18:54:30.097267 kubelet[2533]: E0317 18:54:30.097253 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" containerName="mount-bpf-fs" Mar 17 18:54:30.097363 kubelet[2533]: E0317 18:54:30.097352 2533 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="2c18a260-96f8-4257-bb35-f769e1b63cb5" containerName="cilium-operator" Mar 17 18:54:30.097451 kubelet[2533]: E0317 18:54:30.097440 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" containerName="apply-sysctl-overwrites" Mar 17 18:54:30.097551 kubelet[2533]: E0317 18:54:30.097538 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" containerName="clean-cilium-state" Mar 17 18:54:30.097718 kubelet[2533]: E0317 18:54:30.097704 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" containerName="cilium-agent" Mar 17 18:54:30.097858 kubelet[2533]: I0317 18:54:30.097844 2533 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c18a260-96f8-4257-bb35-f769e1b63cb5" containerName="cilium-operator" Mar 17 18:54:30.098459 kubelet[2533]: I0317 18:54:30.098437 2533 memory_manager.go:354] "RemoveStaleState removing state" podUID="4232e0b6-4788-48aa-b36c-c4dddd7c8182" containerName="cilium-agent" Mar 17 18:54:30.105544 systemd[1]: Created slice kubepods-burstable-pod6d3f6657_201d_4314_a219_82654560fc52.slice. Mar 17 18:54:30.190564 sshd[4243]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:30.194271 systemd-logind[1402]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:54:30.194481 systemd[1]: sshd@21-10.200.8.24:22-10.200.16.10:53078.service: Deactivated successfully. Mar 17 18:54:30.195334 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:54:30.196395 systemd-logind[1402]: Removed session 24. 
Mar 17 18:54:30.273900 kubelet[2533]: I0317 18:54:30.273848 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-hostproc\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.273900 kubelet[2533]: I0317 18:54:30.273909 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-kernel\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274277 kubelet[2533]: I0317 18:54:30.273965 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6ws\" (UniqueName: \"kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-kube-api-access-rz6ws\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274277 kubelet[2533]: I0317 18:54:30.274001 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-run\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274277 kubelet[2533]: I0317 18:54:30.274027 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-bpf-maps\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274277 kubelet[2533]: I0317 18:54:30.274054 2533 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-lib-modules\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274277 kubelet[2533]: I0317 18:54:30.274104 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-xtables-lock\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274277 kubelet[2533]: I0317 18:54:30.274135 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-net\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274570 kubelet[2533]: I0317 18:54:30.274164 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d3f6657-201d-4314-a219-82654560fc52-cilium-config-path\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274570 kubelet[2533]: I0317 18:54:30.274189 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-cgroup\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274570 kubelet[2533]: I0317 18:54:30.274215 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cni-path\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274570 kubelet[2533]: I0317 18:54:30.274253 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-clustermesh-secrets\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274570 kubelet[2533]: I0317 18:54:30.274279 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-hubble-tls\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274570 kubelet[2533]: I0317 18:54:30.274307 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-etc-cni-netd\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.274758 kubelet[2533]: I0317 18:54:30.274339 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-cilium-ipsec-secrets\") pod \"cilium-4bvdr\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " pod="kube-system/cilium-4bvdr" Mar 17 18:54:30.295474 systemd[1]: Started sshd@22-10.200.8.24:22-10.200.16.10:53088.service. 
Mar 17 18:54:30.431292 env[1410]: time="2025-03-17T18:54:30.430797463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4bvdr,Uid:6d3f6657-201d-4314-a219-82654560fc52,Namespace:kube-system,Attempt:0,}" Mar 17 18:54:30.462699 env[1410]: time="2025-03-17T18:54:30.462635196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:54:30.464458 env[1410]: time="2025-03-17T18:54:30.464387231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:54:30.464458 env[1410]: time="2025-03-17T18:54:30.464413432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:54:30.464756 env[1410]: time="2025-03-17T18:54:30.464698537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f pid=4271 runtime=io.containerd.runc.v2 Mar 17 18:54:30.477520 systemd[1]: Started cri-containerd-6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f.scope. 
Mar 17 18:54:30.507223 env[1410]: time="2025-03-17T18:54:30.507188182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4bvdr,Uid:6d3f6657-201d-4314-a219-82654560fc52,Namespace:kube-system,Attempt:0,} returns sandbox id \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\"" Mar 17 18:54:30.513497 env[1410]: time="2025-03-17T18:54:30.513436406Z" level=info msg="CreateContainer within sandbox \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:54:30.541398 env[1410]: time="2025-03-17T18:54:30.541364461Z" level=info msg="CreateContainer within sandbox \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\"" Mar 17 18:54:30.541966 env[1410]: time="2025-03-17T18:54:30.541927773Z" level=info msg="StartContainer for \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\"" Mar 17 18:54:30.560099 systemd[1]: Started cri-containerd-2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9.scope. Mar 17 18:54:30.573780 systemd[1]: cri-containerd-2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9.scope: Deactivated successfully. 
Mar 17 18:54:30.634586 env[1410]: time="2025-03-17T18:54:30.634533614Z" level=info msg="shim disconnected" id=2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9 Mar 17 18:54:30.634586 env[1410]: time="2025-03-17T18:54:30.634585415Z" level=warning msg="cleaning up after shim disconnected" id=2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9 namespace=k8s.io Mar 17 18:54:30.634586 env[1410]: time="2025-03-17T18:54:30.634596215Z" level=info msg="cleaning up dead shim" Mar 17 18:54:30.643011 env[1410]: time="2025-03-17T18:54:30.642970681Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4329 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:54:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:54:30.643380 env[1410]: time="2025-03-17T18:54:30.643275287Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Mar 17 18:54:30.644427 env[1410]: time="2025-03-17T18:54:30.644382609Z" level=error msg="Failed to pipe stderr of container \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\"" error="reading from a closed fifo" Mar 17 18:54:30.645157 env[1410]: time="2025-03-17T18:54:30.645121924Z" level=error msg="Failed to pipe stdout of container \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\"" error="reading from a closed fifo" Mar 17 18:54:30.650623 env[1410]: time="2025-03-17T18:54:30.650571132Z" level=error msg="StartContainer for \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:54:30.650863 kubelet[2533]: E0317 18:54:30.650829 2533 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9" Mar 17 18:54:30.652173 kubelet[2533]: E0317 18:54:30.651249 2533 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:54:30.652173 kubelet[2533]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:54:30.652173 kubelet[2533]: rm /hostbin/cilium-mount Mar 17 18:54:30.652302 kubelet[2533]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rz6ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4bvdr_kube-system(6d3f6657-201d-4314-a219-82654560fc52): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:54:30.652302 kubelet[2533]: E0317 18:54:30.651294 2533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4bvdr" podUID="6d3f6657-201d-4314-a219-82654560fc52" Mar 17 18:54:30.919232 sshd[4257]: Accepted publickey for core from 10.200.16.10 port 53088 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg Mar 17 18:54:30.921005 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:30.926565 systemd[1]: Started session-25.scope. Mar 17 18:54:30.927222 systemd-logind[1402]: New session 25 of user core. Mar 17 18:54:31.195977 env[1410]: time="2025-03-17T18:54:31.195149720Z" level=info msg="CreateContainer within sandbox \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 17 18:54:31.225381 env[1410]: time="2025-03-17T18:54:31.225332214Z" level=info msg="CreateContainer within sandbox \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\"" Mar 17 18:54:31.226340 env[1410]: time="2025-03-17T18:54:31.226305833Z" level=info msg="StartContainer for \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\"" Mar 17 18:54:31.243027 systemd[1]: Started cri-containerd-e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62.scope. Mar 17 18:54:31.255851 systemd[1]: cri-containerd-e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62.scope: Deactivated successfully. 
Mar 17 18:54:31.275245 env[1410]: time="2025-03-17T18:54:31.275175695Z" level=info msg="shim disconnected" id=e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62 Mar 17 18:54:31.275245 env[1410]: time="2025-03-17T18:54:31.275238197Z" level=warning msg="cleaning up after shim disconnected" id=e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62 namespace=k8s.io Mar 17 18:54:31.275245 env[1410]: time="2025-03-17T18:54:31.275250097Z" level=info msg="cleaning up dead shim" Mar 17 18:54:31.285410 env[1410]: time="2025-03-17T18:54:31.285364796Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4378 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:54:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:54:31.285699 env[1410]: time="2025-03-17T18:54:31.285636901Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Mar 17 18:54:31.286183 env[1410]: time="2025-03-17T18:54:31.286135011Z" level=error msg="Failed to pipe stdout of container \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\"" error="reading from a closed fifo" Mar 17 18:54:31.287372 env[1410]: time="2025-03-17T18:54:31.287309634Z" level=error msg="Failed to pipe stderr of container \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\"" error="reading from a closed fifo" Mar 17 18:54:31.290992 env[1410]: time="2025-03-17T18:54:31.290948006Z" level=error msg="StartContainer for \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:54:31.291541 kubelet[2533]: E0317 18:54:31.291211 2533 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62" Mar 17 18:54:31.292270 kubelet[2533]: E0317 18:54:31.292190 2533 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:54:31.292270 kubelet[2533]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:54:31.292270 kubelet[2533]: rm /hostbin/cilium-mount Mar 17 18:54:31.292270 kubelet[2533]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rz6ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4bvdr_kube-system(6d3f6657-201d-4314-a219-82654560fc52): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:54:31.292270 kubelet[2533]: E0317 18:54:31.292235 2533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4bvdr" podUID="6d3f6657-201d-4314-a219-82654560fc52" Mar 17 18:54:31.452260 sshd[4257]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:31.455730 systemd[1]: sshd@22-10.200.8.24:22-10.200.16.10:53088.service: Deactivated successfully. Mar 17 18:54:31.456706 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:54:31.457412 systemd-logind[1402]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:54:31.458384 systemd-logind[1402]: Removed session 25. Mar 17 18:54:31.557863 systemd[1]: Started sshd@23-10.200.8.24:22-10.200.16.10:53090.service. Mar 17 18:54:32.183146 sshd[4395]: Accepted publickey for core from 10.200.16.10 port 53090 ssh2: RSA SHA256:Id7fTtJmja0nOLdf0IQA3jnxxJrUKKdGU1UW83zjTQg Mar 17 18:54:32.184653 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:32.189758 systemd[1]: Started session-26.scope. Mar 17 18:54:32.190416 systemd-logind[1402]: New session 26 of user core. 
Mar 17 18:54:32.204614 kubelet[2533]: I0317 18:54:32.204593 2533 scope.go:117] "RemoveContainer" containerID="2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9" Mar 17 18:54:32.206106 env[1410]: time="2025-03-17T18:54:32.205237961Z" level=info msg="StopPodSandbox for \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\"" Mar 17 18:54:32.206106 env[1410]: time="2025-03-17T18:54:32.205297162Z" level=info msg="Container to stop \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:32.206106 env[1410]: time="2025-03-17T18:54:32.205314863Z" level=info msg="Container to stop \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:32.209872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f-shm.mount: Deactivated successfully. Mar 17 18:54:32.220609 env[1410]: time="2025-03-17T18:54:32.215150454Z" level=info msg="RemoveContainer for \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\"" Mar 17 18:54:32.219688 systemd[1]: cri-containerd-6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f.scope: Deactivated successfully. Mar 17 18:54:32.238299 env[1410]: time="2025-03-17T18:54:32.238258105Z" level=info msg="RemoveContainer for \"2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9\" returns successfully" Mar 17 18:54:32.248677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f-rootfs.mount: Deactivated successfully. 
Mar 17 18:54:32.266024 env[1410]: time="2025-03-17T18:54:32.265973945Z" level=info msg="shim disconnected" id=6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f Mar 17 18:54:32.266256 env[1410]: time="2025-03-17T18:54:32.266027446Z" level=warning msg="cleaning up after shim disconnected" id=6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f namespace=k8s.io Mar 17 18:54:32.266256 env[1410]: time="2025-03-17T18:54:32.266039646Z" level=info msg="cleaning up dead shim" Mar 17 18:54:32.274174 env[1410]: time="2025-03-17T18:54:32.274139404Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4416 runtime=io.containerd.runc.v2\n" Mar 17 18:54:32.274472 env[1410]: time="2025-03-17T18:54:32.274444510Z" level=info msg="TearDown network for sandbox \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\" successfully" Mar 17 18:54:32.274582 env[1410]: time="2025-03-17T18:54:32.274472310Z" level=info msg="StopPodSandbox for \"6992084694419d8657dfb4533ae4200e02a72f98a11c10d5d1558c3cad40a27f\" returns successfully" Mar 17 18:54:32.385385 kubelet[2533]: I0317 18:54:32.385327 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-lib-modules\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385391 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d3f6657-201d-4314-a219-82654560fc52-cilium-config-path\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385426 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-clustermesh-secrets\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385453 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-kernel\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385478 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-cgroup\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385504 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-bpf-maps\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385534 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-hubble-tls\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385562 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-etc-cni-netd\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 
18:54:32.385607 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-hostproc\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385635 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz6ws\" (UniqueName: \"kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-kube-api-access-rz6ws\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385663 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-net\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385688 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cni-path\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385718 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-run\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385744 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-xtables-lock\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" 
(UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.385956 kubelet[2533]: I0317 18:54:32.385776 2533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-cilium-ipsec-secrets\") pod \"6d3f6657-201d-4314-a219-82654560fc52\" (UID: \"6d3f6657-201d-4314-a219-82654560fc52\") " Mar 17 18:54:32.386994 kubelet[2533]: I0317 18:54:32.386947 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.387110 kubelet[2533]: I0317 18:54:32.387009 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.387858 kubelet[2533]: I0317 18:54:32.387817 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.390797 kubelet[2533]: I0317 18:54:32.390766 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.390998 kubelet[2533]: I0317 18:54:32.390974 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.391204 kubelet[2533]: I0317 18:54:32.391181 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.391364 kubelet[2533]: I0317 18:54:32.391343 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.394470 systemd[1]: var-lib-kubelet-pods-6d3f6657\x2d201d\x2d4314\x2da219\x2d82654560fc52-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:54:32.395840 kubelet[2533]: I0317 18:54:32.395812 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:54:32.397120 kubelet[2533]: I0317 18:54:32.397040 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3f6657-201d-4314-a219-82654560fc52-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:54:32.397207 kubelet[2533]: I0317 18:54:32.397109 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.397207 kubelet[2533]: I0317 18:54:32.397141 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.397207 kubelet[2533]: I0317 18:54:32.397162 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:32.400848 systemd[1]: var-lib-kubelet-pods-6d3f6657\x2d201d\x2d4314\x2da219\x2d82654560fc52-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:54:32.405323 kubelet[2533]: I0317 18:54:32.405295 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:54:32.405547 kubelet[2533]: I0317 18:54:32.405502 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:32.406610 systemd[1]: var-lib-kubelet-pods-6d3f6657\x2d201d\x2d4314\x2da219\x2d82654560fc52-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drz6ws.mount: Deactivated successfully. Mar 17 18:54:32.406720 systemd[1]: var-lib-kubelet-pods-6d3f6657\x2d201d\x2d4314\x2da219\x2d82654560fc52-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:54:32.408516 kubelet[2533]: I0317 18:54:32.408488 2533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-kube-api-access-rz6ws" (OuterVolumeSpecName: "kube-api-access-rz6ws") pod "6d3f6657-201d-4314-a219-82654560fc52" (UID: "6d3f6657-201d-4314-a219-82654560fc52"). InnerVolumeSpecName "kube-api-access-rz6ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486818 2533 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486865 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-cgroup\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486881 2533 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-bpf-maps\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486893 2533 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-hubble-tls\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486906 2533 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-etc-cni-netd\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486918 2533 reconciler_common.go:289] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-hostproc\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486933 2533 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rz6ws\" (UniqueName: \"kubernetes.io/projected/6d3f6657-201d-4314-a219-82654560fc52-kube-api-access-rz6ws\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486947 2533 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-host-proc-sys-net\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.486966 2533 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cni-path\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.487007 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-cilium-run\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.487023 2533 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-xtables-lock\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.487036 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-cilium-ipsec-secrets\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\"" Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.487050 2533 reconciler_common.go:289] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3f6657-201d-4314-a219-82654560fc52-lib-modules\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.487065 2533 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d3f6657-201d-4314-a219-82654560fc52-cilium-config-path\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:32.487158 kubelet[2533]: I0317 18:54:32.487105 2533 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d3f6657-201d-4314-a219-82654560fc52-clustermesh-secrets\") on node \"ci-3510.3.7-a-961279aa07\" DevicePath \"\""
Mar 17 18:54:32.709326 systemd[1]: Removed slice kubepods-burstable-pod6d3f6657_201d_4314_a219_82654560fc52.slice.
Mar 17 18:54:33.207657 kubelet[2533]: I0317 18:54:33.207613 2533 scope.go:117] "RemoveContainer" containerID="e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62"
Mar 17 18:54:33.209551 env[1410]: time="2025-03-17T18:54:33.209512192Z" level=info msg="RemoveContainer for \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\""
Mar 17 18:54:33.219622 env[1410]: time="2025-03-17T18:54:33.219576086Z" level=info msg="RemoveContainer for \"e2783c873e05a6dd3df0b8ae53a0fd0e111cd0e721d93e42b02d062b5b4b1b62\" returns successfully"
Mar 17 18:54:33.247407 kubelet[2533]: I0317 18:54:33.247355 2533 topology_manager.go:215] "Topology Admit Handler" podUID="ca9cead8-8ca3-4489-a083-ed0f6bdfbd90" podNamespace="kube-system" podName="cilium-2fg7v"
Mar 17 18:54:33.247580 kubelet[2533]: E0317 18:54:33.247433 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3f6657-201d-4314-a219-82654560fc52" containerName="mount-cgroup"
Mar 17 18:54:33.247580 kubelet[2533]: E0317 18:54:33.247446 2533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3f6657-201d-4314-a219-82654560fc52" containerName="mount-cgroup"
Mar 17 18:54:33.247580 kubelet[2533]: I0317 18:54:33.247471 2533 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3f6657-201d-4314-a219-82654560fc52" containerName="mount-cgroup"
Mar 17 18:54:33.247580 kubelet[2533]: I0317 18:54:33.247507 2533 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3f6657-201d-4314-a219-82654560fc52" containerName="mount-cgroup"
Mar 17 18:54:33.254252 systemd[1]: Created slice kubepods-burstable-podca9cead8_8ca3_4489_a083_ed0f6bdfbd90.slice.
Mar 17 18:54:33.392524 kubelet[2533]: I0317 18:54:33.392478 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-lib-modules\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392533 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-xtables-lock\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392561 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-host-proc-sys-kernel\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392585 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-hostproc\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392606 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-clustermesh-secrets\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392626 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-cilium-config-path\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392651 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p7d\" (UniqueName: \"kubernetes.io/projected/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-kube-api-access-72p7d\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392675 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-etc-cni-netd\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392699 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-host-proc-sys-net\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392719 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-hubble-tls\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392742 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-cilium-run\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392762 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-cni-path\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392782 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-cilium-ipsec-secrets\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392805 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-bpf-maps\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.393009 kubelet[2533]: I0317 18:54:33.392829 2533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca9cead8-8ca3-4489-a083-ed0f6bdfbd90-cilium-cgroup\") pod \"cilium-2fg7v\" (UID: \"ca9cead8-8ca3-4489-a083-ed0f6bdfbd90\") " pod="kube-system/cilium-2fg7v"
Mar 17 18:54:33.559130 env[1410]: time="2025-03-17T18:54:33.558623129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fg7v,Uid:ca9cead8-8ca3-4489-a083-ed0f6bdfbd90,Namespace:kube-system,Attempt:0,}"
Mar 17 18:54:33.606028 env[1410]: time="2025-03-17T18:54:33.605948942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:54:33.606205 env[1410]: time="2025-03-17T18:54:33.606045744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:54:33.606205 env[1410]: time="2025-03-17T18:54:33.606087444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:54:33.606437 env[1410]: time="2025-03-17T18:54:33.606393750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6 pid=4449 runtime=io.containerd.runc.v2
Mar 17 18:54:33.631117 systemd[1]: Started cri-containerd-135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6.scope.
Mar 17 18:54:33.735350 env[1410]: time="2025-03-17T18:54:33.735270537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fg7v,Uid:ca9cead8-8ca3-4489-a083-ed0f6bdfbd90,Namespace:kube-system,Attempt:0,} returns sandbox id \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\""
Mar 17 18:54:33.738491 env[1410]: time="2025-03-17T18:54:33.738446698Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:54:33.740500 kubelet[2533]: W0317 18:54:33.740378 2533 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d3f6657_201d_4314_a219_82654560fc52.slice/cri-containerd-2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9.scope WatchSource:0}: container "2f0fc1e0c3127e9ae883e78087f5aea1d48e1368d67d8ac7da7100cae484cef9" in namespace "k8s.io": not found
Mar 17 18:54:33.778090 env[1410]: time="2025-03-17T18:54:33.778037762Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182\""
Mar 17 18:54:33.780200 env[1410]: time="2025-03-17T18:54:33.779521591Z" level=info msg="StartContainer for \"24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182\""
Mar 17 18:54:33.795603 systemd[1]: Started cri-containerd-24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182.scope.
Mar 17 18:54:33.829145 env[1410]: time="2025-03-17T18:54:33.828550637Z" level=info msg="StartContainer for \"24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182\" returns successfully"
Mar 17 18:54:33.836274 systemd[1]: cri-containerd-24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182.scope: Deactivated successfully.
Mar 17 18:54:33.881093 env[1410]: time="2025-03-17T18:54:33.881015249Z" level=info msg="shim disconnected" id=24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182
Mar 17 18:54:33.881093 env[1410]: time="2025-03-17T18:54:33.881085751Z" level=warning msg="cleaning up after shim disconnected" id=24578db21e87b379f7256c5f378285847d777aabc0412c3419d0304b2a442182 namespace=k8s.io
Mar 17 18:54:33.881093 env[1410]: time="2025-03-17T18:54:33.881098651Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:33.888517 env[1410]: time="2025-03-17T18:54:33.888475893Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4535 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:34.217486 env[1410]: time="2025-03-17T18:54:34.217014292Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:54:34.247225 env[1410]: time="2025-03-17T18:54:34.247176768Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959\""
Mar 17 18:54:34.247705 env[1410]: time="2025-03-17T18:54:34.247674878Z" level=info msg="StartContainer for \"924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959\""
Mar 17 18:54:34.263616 systemd[1]: Started cri-containerd-924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959.scope.
Mar 17 18:54:34.298600 env[1410]: time="2025-03-17T18:54:34.298537650Z" level=info msg="StartContainer for \"924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959\" returns successfully"
Mar 17 18:54:34.301746 systemd[1]: cri-containerd-924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959.scope: Deactivated successfully.
Mar 17 18:54:34.328817 env[1410]: time="2025-03-17T18:54:34.328768327Z" level=info msg="shim disconnected" id=924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959
Mar 17 18:54:34.333212 env[1410]: time="2025-03-17T18:54:34.329356739Z" level=warning msg="cleaning up after shim disconnected" id=924f8b5e41e88dd336ae2c9509f3e78fcd683e4cbb6281518ca4bf9baffbd959 namespace=k8s.io
Mar 17 18:54:34.333212 env[1410]: time="2025-03-17T18:54:34.329383639Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:34.337824 env[1410]: time="2025-03-17T18:54:34.337753199Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4601 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:34.706314 kubelet[2533]: I0317 18:54:34.706267 2533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3f6657-201d-4314-a219-82654560fc52" path="/var/lib/kubelet/pods/6d3f6657-201d-4314-a219-82654560fc52/volumes"
Mar 17 18:54:34.824195 kubelet[2533]: E0317 18:54:34.824147 2533 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:54:35.218641 env[1410]: time="2025-03-17T18:54:35.218590989Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:54:35.259441 env[1410]: time="2025-03-17T18:54:35.259390361Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066\""
Mar 17 18:54:35.260258 env[1410]: time="2025-03-17T18:54:35.260226977Z" level=info msg="StartContainer for \"824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066\""
Mar 17 18:54:35.289587 systemd[1]: Started cri-containerd-824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066.scope.
Mar 17 18:54:35.326288 systemd[1]: cri-containerd-824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066.scope: Deactivated successfully.
Mar 17 18:54:35.330001 env[1410]: time="2025-03-17T18:54:35.329962696Z" level=info msg="StartContainer for \"824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066\" returns successfully"
Mar 17 18:54:35.361471 env[1410]: time="2025-03-17T18:54:35.361411491Z" level=info msg="shim disconnected" id=824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066
Mar 17 18:54:35.361471 env[1410]: time="2025-03-17T18:54:35.361468193Z" level=warning msg="cleaning up after shim disconnected" id=824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066 namespace=k8s.io
Mar 17 18:54:35.361764 env[1410]: time="2025-03-17T18:54:35.361480993Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:35.368559 env[1410]: time="2025-03-17T18:54:35.368522526Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4662 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:35.501403 systemd[1]: run-containerd-runc-k8s.io-824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066-runc.yGbhlH.mount: Deactivated successfully.
Mar 17 18:54:35.501544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824bffb634662ed67f8c551b418997d609d334327ec1f244f299edf461b08066-rootfs.mount: Deactivated successfully.
Mar 17 18:54:36.224521 env[1410]: time="2025-03-17T18:54:36.224457581Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:54:36.255759 env[1410]: time="2025-03-17T18:54:36.255712766Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029\""
Mar 17 18:54:36.256552 env[1410]: time="2025-03-17T18:54:36.256514481Z" level=info msg="StartContainer for \"2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029\""
Mar 17 18:54:36.282936 systemd[1]: Started cri-containerd-2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029.scope.
Mar 17 18:54:36.310784 systemd[1]: cri-containerd-2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029.scope: Deactivated successfully.
Mar 17 18:54:36.314565 env[1410]: time="2025-03-17T18:54:36.314519668Z" level=info msg="StartContainer for \"2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029\" returns successfully"
Mar 17 18:54:36.348704 env[1410]: time="2025-03-17T18:54:36.348651408Z" level=info msg="shim disconnected" id=2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029
Mar 17 18:54:36.349043 env[1410]: time="2025-03-17T18:54:36.349020715Z" level=warning msg="cleaning up after shim disconnected" id=2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029 namespace=k8s.io
Mar 17 18:54:36.349177 env[1410]: time="2025-03-17T18:54:36.349160918Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:36.357883 env[1410]: time="2025-03-17T18:54:36.357841080Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4719 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:36.501393 systemd[1]: run-containerd-runc-k8s.io-2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029-runc.kJcZYA.mount: Deactivated successfully.
Mar 17 18:54:36.501510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2230578f8fcec37b1695bee5b4f82b49f1c26688f727d1e371eb6e6378239029-rootfs.mount: Deactivated successfully.
Mar 17 18:54:37.232009 env[1410]: time="2025-03-17T18:54:37.231959518Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:54:37.286574 env[1410]: time="2025-03-17T18:54:37.286530431Z" level=info msg="CreateContainer within sandbox \"135855025b2123fa8efb2ac9167527159018649e725feae733013da5d1fb81e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600\""
Mar 17 18:54:37.287318 env[1410]: time="2025-03-17T18:54:37.287283645Z" level=info msg="StartContainer for \"752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600\""
Mar 17 18:54:37.313939 systemd[1]: Started cri-containerd-752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600.scope.
Mar 17 18:54:37.352381 env[1410]: time="2025-03-17T18:54:37.352331252Z" level=info msg="StartContainer for \"752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600\" returns successfully"
Mar 17 18:54:37.503721 systemd[1]: run-containerd-runc-k8s.io-752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600-runc.zGgabC.mount: Deactivated successfully.
Mar 17 18:54:37.837098 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:54:40.531902 systemd-networkd[1564]: lxc_health: Link UP
Mar 17 18:54:40.548533 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:54:40.546554 systemd-networkd[1564]: lxc_health: Gained carrier
Mar 17 18:54:41.017467 systemd[1]: run-containerd-runc-k8s.io-752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600-runc.6mY5iu.mount: Deactivated successfully.
Mar 17 18:54:41.588574 kubelet[2533]: I0317 18:54:41.588496 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2fg7v" podStartSLOduration=8.588476611 podStartE2EDuration="8.588476611s" podCreationTimestamp="2025-03-17 18:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:54:38.249207953 +0000 UTC m=+223.649491568" watchObservedRunningTime="2025-03-17 18:54:41.588476611 +0000 UTC m=+226.988760226"
Mar 17 18:54:41.920349 systemd-networkd[1564]: lxc_health: Gained IPv6LL
Mar 17 18:54:43.193619 systemd[1]: run-containerd-runc-k8s.io-752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600-runc.RTy4nY.mount: Deactivated successfully.
Mar 17 18:54:45.346840 systemd[1]: run-containerd-runc-k8s.io-752d6eb286b6a0f34bba0b76cc25277f07af2954a627167d735f25e6307ac600-runc.gaspv8.mount: Deactivated successfully.
Mar 17 18:54:47.650363 sshd[4395]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:47.654262 systemd[1]: sshd@23-10.200.8.24:22-10.200.16.10:53090.service: Deactivated successfully.
Mar 17 18:54:47.655343 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:54:47.656235 systemd-logind[1402]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:54:47.657044 systemd-logind[1402]: Removed session 26.