Dec 13 14:30:01.015993 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:30:01.016024 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:01.016039 kernel: BIOS-provided physical RAM map:
Dec 13 14:30:01.016049 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 14:30:01.016059 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 14:30:01.016069 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 14:30:01.016084 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 14:30:01.016095 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 14:30:01.016106 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 14:30:01.016117 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 14:30:01.016127 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 14:30:01.016138 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 14:30:01.016148 kernel: NX (Execute Disable) protection: active
Dec 13 14:30:01.016159 kernel: efi: EFI v2.70 by Microsoft
Dec 13 14:30:01.016175 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018
Dec 13 14:30:01.016187 kernel: random: crng init done
Dec 13 14:30:01.016198 kernel: SMBIOS 3.1.0 present.
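
The BIOS-e820 lines above are the firmware's map of physical memory; everything the kernel may freely use is tagged `usable`. A minimal sketch (Python; the ranges are copied from the log above, and the end addresses are inclusive) that totals the usable regions:

```python
import re

# e820 lines in the dmesg format shown above; end addresses are inclusive.
E820_LINE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

log = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
"""

total = 0
for m in E820_LINE.finditer(log):
    start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
    if kind == "usable":
        total += end - start + 1
print(f"usable RAM: {total} bytes ({total / 2**30:.2f} GiB)")  # ~8.00 GiB
```

The result, about 8.00 GiB, lines up with the later `Memory: 8079144K/8387460K available` line.
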
Dec 13 14:30:01.016210 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 14:30:01.016222 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 14:30:01.016234 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 14:30:01.016245 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 14:30:01.016256 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 14:30:01.016270 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 14:30:01.016282 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 14:30:01.016294 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 14:30:01.016305 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 14:30:01.016318 kernel: tsc: Detected 2593.906 MHz processor
Dec 13 14:30:01.016330 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:30:01.016342 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:30:01.016354 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 14:30:01.016366 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:30:01.016378 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 14:30:01.016392 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 14:30:01.016403 kernel: Using GB pages for direct mapping
Dec 13 14:30:01.016415 kernel: Secure boot disabled
Dec 13 14:30:01.016427 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:30:01.016438 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 14:30:01.016451 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016463 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016475 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 14:30:01.016494 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 14:30:01.016506 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016519 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016532 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016544 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016558 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016572 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016586 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:30:01.016599 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 14:30:01.016611 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 14:30:01.016624 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 14:30:01.016637 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 14:30:01.016649 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 14:30:01.016662 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 14:30:01.016677 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
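
On a live system the same ACPI tables can be inventoried from sysfs without parsing dmesg. A small sketch (reading `/sys/firmware/acpi/tables`, which normally requires root; it relies only on the standard ACPI description header layout: a 4-byte signature followed by a 32-bit little-endian length at offset 4):

```python
import os
import struct

TABLE_DIR = "/sys/firmware/acpi/tables"  # reading these usually requires root

for name in sorted(os.listdir(TABLE_DIR)):
    path = os.path.join(TABLE_DIR, name)
    if not os.path.isfile(path):
        continue  # skip subdirectories such as 'data'
    with open(path, "rb") as f:
        header = f.read(36)  # standard ACPI system description table header
    signature = header[0:4].decode("ascii", "replace")
    (length,) = struct.unpack_from("<I", header, 4)  # total table length
    oem_id = header[10:16].decode("ascii", "replace").strip()
    print(f"{name:8s} sig={signature} len=0x{length:06X} oem={oem_id!r}")
```

Run against this VM it would list the same FACP, DSDT, SPCR, SRAT, etc. entries the kernel reserves above (FACS uses a different header past the length field, so its OEM column is not meaningful).
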
Dec 13 14:30:01.016690 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 14:30:01.016703 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 14:30:01.016715 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 14:30:01.016728 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:30:01.016741 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:30:01.016753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 14:30:01.016766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 14:30:01.016779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 14:30:01.016794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 14:30:01.016807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 14:30:01.016820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 14:30:01.016833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 14:30:01.016846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 14:30:01.016858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 14:30:01.016871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 14:30:01.016884 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 14:30:01.016897 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 14:30:01.018955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 14:30:01.018973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 14:30:01.018986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 14:30:01.018999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 14:30:01.019013 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 14:30:01.019026 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 14:30:01.019039 kernel: Zone ranges:
Dec 13 14:30:01.019052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:30:01.019064 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:30:01.019081 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 14:30:01.019094 kernel: Movable zone start for each node
Dec 13 14:30:01.019107 kernel: Early memory node ranges
Dec 13 14:30:01.019120 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 14:30:01.019133 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 14:30:01.019146 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 14:30:01.019159 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 14:30:01.019171 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 14:30:01.019184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:30:01.019199 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 14:30:01.019212 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 14:30:01.019225 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 14:30:01.019238 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 14:30:01.019251 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:30:01.019263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:30:01.019276 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:30:01.019289 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 14:30:01.019302 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:30:01.019317 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 14:30:01.019330 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 14:30:01.019343 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:30:01.019356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:30:01.019369 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:30:01.019382 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:30:01.019395 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:30:01.019407 kernel: Hyper-V: PV spinlocks enabled
Dec 13 14:30:01.019420 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:30:01.019435 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 14:30:01.019448 kernel: Policy zone: Normal
Dec 13 14:30:01.019463 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:01.019476 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:30:01.019489 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 14:30:01.019501 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:30:01.019514 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:30:01.019527 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 308056K reserved, 0K cma-reserved)
Dec 13 14:30:01.019542 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:30:01.019556 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:30:01.019578 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:30:01.019594 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:30:01.019608 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:30:01.019621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:30:01.019635 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:30:01.019648 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:30:01.019662 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
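
Note that the `Kernel command line:` entry above carries `rootflags=rw mount.usrflags=ro` twice: dracut prepends its own copy to the line the bootloader passed. A parser therefore has to tolerate repeated keys, as in this sketch (a simplification that ignores quoted values containing spaces):

```python
# Parse a kernel command line like the one logged above into key/value pairs,
# preserving duplicates (e.g. the repeated rootflags) and repeated keys
# (e.g. the two console= parameters).
def parse_cmdline(cmdline: str) -> dict[str, list[str]]:
    params: dict[str, list[str]] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")  # bare flags get value ""
        params.setdefault(key, []).append(value)
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())

print(params.get("root"))                # ['LABEL=ROOT']
print(params.get("console"))             # ['tty1', 'ttyS0,115200n8']
print(params.get("flatcar.first_boot"))  # ['detected'] on this first boot
```
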
Dec 13 14:30:01.019676 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:30:01.019689 kernel: Using NULL legacy PIC
Dec 13 14:30:01.019706 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 14:30:01.019719 kernel: Console: colour dummy device 80x25
Dec 13 14:30:01.019732 kernel: printk: console [tty1] enabled
Dec 13 14:30:01.019746 kernel: printk: console [ttyS0] enabled
Dec 13 14:30:01.019759 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 14:30:01.019775 kernel: ACPI: Core revision 20210730
Dec 13 14:30:01.019788 kernel: Failed to register legacy timer interrupt
Dec 13 14:30:01.019802 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:30:01.019815 kernel: Hyper-V: Using IPI hypercalls
Dec 13 14:30:01.019829 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Dec 13 14:30:01.019843 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:30:01.019857 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:30:01.019870 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:30:01.019884 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:30:01.019897 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:30:01.019926 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:30:01.019937 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:30:01.019949 kernel: RETBleed: Vulnerable
Dec 13 14:30:01.019960 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:30:01.019971 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:30:01.019979 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:30:01.019989 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:30:01.019998 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:30:01.020007 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:30:01.020014 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:30:01.020027 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:30:01.020036 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:30:01.020044 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:30:01.020055 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:30:01.020063 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 14:30:01.020072 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 14:30:01.020082 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 14:30:01.020091 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 14:30:01.020108 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:30:01.020118 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:30:01.020126 kernel: LSM: Security Framework initializing
Dec 13 14:30:01.020134 kernel: SELinux: Initializing.
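
The mitigation and vulnerability verdicts printed here (Retpolines, `RETBleed: Vulnerable`, TAA, MMIO Stale Data, GDS) are also exported at runtime, one file per issue, so they can be checked later without trawling the boot log. A sketch:

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

# Each file holds a one-line status string using the same wording as the
# boot log, e.g. "Mitigation: Retpolines" for spectre_v2 or "Vulnerable"
# for retbleed on this Xeon 8272CL guest.
for name in sorted(os.listdir(VULN_DIR)):
    with open(os.path.join(VULN_DIR, name)) as f:
        print(f"{name:28s} {f.read().strip()}")
```
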
Dec 13 14:30:01.020143 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:30:01.020151 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:30:01.020158 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:30:01.020165 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:30:01.020172 kernel: signal: max sigframe size: 3632
Dec 13 14:30:01.020179 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:30:01.020186 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:30:01.020193 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:30:01.020200 kernel: x86: Booting SMP configuration:
Dec 13 14:30:01.020208 kernel: .... node #0, CPUs: #1
Dec 13 14:30:01.020217 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 14:30:01.020225 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:30:01.020232 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:30:01.020239 kernel: smpboot: Max logical packages: 1
Dec 13 14:30:01.020249 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 14:30:01.020257 kernel: devtmpfs: initialized
Dec 13 14:30:01.020264 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:30:01.020274 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 14:30:01.020283 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:30:01.020294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:30:01.020302 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:30:01.020310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:30:01.020319 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:30:01.020326 kernel: audit: type=2000 audit(1734100199.023:1): state=initialized audit_enabled=0 res=1
Dec 13 14:30:01.020337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:30:01.020345 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:30:01.020352 kernel: cpuidle: using governor menu
Dec 13 14:30:01.020364 kernel: ACPI: bus type PCI registered
Dec 13 14:30:01.020371 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:30:01.020381 kernel: dca service started, version 1.12.1
Dec 13 14:30:01.020389 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
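
Audit records such as `audit(1734100199.023:1)` carry their own timestamp: seconds and milliseconds since the Unix epoch, then a per-boot record serial number. Decoding it shows this record was emitted a second or so before the journald wall-clock time, while the clock was still being brought up:

```python
from datetime import datetime, timezone

# Decode the header of an audit record, e.g. "audit(1734100199.023:1)":
# <epoch seconds>.<milliseconds>:<serial number>.
stamp = "1734100199.023:1"
seconds, serial = stamp.split(":")
when = datetime.fromtimestamp(float(seconds), tz=timezone.utc)
print(when.isoformat(), "serial", serial)
# -> 2024-12-13T14:29:59.023000+00:00 serial 1
# Consistent with rtc_cmos later setting the clock to 1734100200
# (2024-12-13T14:30:00 UTC).
```
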
Dec 13 14:30:01.020397 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:30:01.020406 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:30:01.020413 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:30:01.020423 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:30:01.020431 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:30:01.020443 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:30:01.020453 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:30:01.020461 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:30:01.020472 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:30:01.020481 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:30:01.020489 kernel: ACPI: Interpreter enabled
Dec 13 14:30:01.020499 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:30:01.020506 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:30:01.020516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:30:01.020529 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 14:30:01.020539 kernel: iommu: Default domain type: Translated
Dec 13 14:30:01.020547 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:30:01.020555 kernel: vgaarb: loaded
Dec 13 14:30:01.020564 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:30:01.020571 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:30:01.020581 kernel: PTP clock support registered
Dec 13 14:30:01.020588 kernel: Registered efivars operations
Dec 13 14:30:01.020596 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:30:01.020606 kernel: PCI: System does not support PCI
Dec 13 14:30:01.020615 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 14:30:01.020625 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:30:01.020632 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:30:01.020640 kernel: pnp: PnP ACPI init
Dec 13 14:30:01.020649 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 14:30:01.020657 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:30:01.020666 kernel: NET: Registered PF_INET protocol family
Dec 13 14:30:01.020675 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:30:01.020686 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 14:30:01.020695 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:30:01.020702 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:30:01.020712 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 14:30:01.020719 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 14:30:01.020728 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:30:01.020737 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:30:01.020744 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:30:01.020755 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:30:01.020764 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:30:01.020773 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 14:30:01.020782 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 14:30:01.020789 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:30:01.020800 kernel: Initialise system trusted keyrings
Dec 13 14:30:01.020807 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 14:30:01.020816 kernel: Key type asymmetric registered
Dec 13 14:30:01.020824 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:30:01.020834 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:30:01.020846 kernel: io scheduler mq-deadline registered
Dec 13 14:30:01.020854 kernel: io scheduler kyber registered
Dec 13 14:30:01.020864 kernel: io scheduler bfq registered
Dec 13 14:30:01.020873 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:30:01.020881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:30:01.020891 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:30:01.020898 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 14:30:01.020915 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 14:30:01.021041 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 14:30:01.021127 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T14:30:00 UTC (1734100200)
Dec 13 14:30:01.021207 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 14:30:01.021220 kernel: fail to initialize ptp_kvm
Dec 13 14:30:01.021227 kernel: intel_pstate: CPU model not supported
Dec 13 14:30:01.021236 kernel: efifb: probing for efifb
Dec 13 14:30:01.021245 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 14:30:01.021255 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 14:30:01.021264 kernel: efifb: scrolling: redraw
Dec 13 14:30:01.021275 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:30:01.021285 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:30:01.021295 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:30:01.021304 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:30:01.021312 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:30:01.021320 kernel: Segment Routing with IPv6
Dec 13 14:30:01.021330 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:30:01.021337 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:30:01.021345 kernel: Key type dns_resolver registered
Dec 13 14:30:01.021357 kernel: IPI shorthand broadcast: enabled
Dec 13 14:30:01.021364 kernel: sched_clock: Marking stable (741209800, 21920800)->(948405200, -185274600)
Dec 13 14:30:01.021375 kernel: registered taskstats version 1
Dec 13 14:30:01.021382 kernel: Loading compiled-in X.509 certificates
Dec 13 14:30:01.021391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:30:01.021399 kernel: Key type .fscrypt registered
Dec 13 14:30:01.021406 kernel: Key type fscrypt-provisioning registered
Dec 13 14:30:01.021417 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:30:01.021426 kernel: ima: No TPM chip found, activating TPM-bypass!
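
The hash-table lines in this block follow a fixed pattern, `entries (order: n, bytes, ...)`, where an order-n allocation is 2^n contiguous 4 KiB pages. A quick check against the TCP established table logged above:

```python
PAGE_SIZE = 4096

# "TCP established hash table entries: 65536 (order: 7, 524288 bytes)":
# an order-7 allocation is 2**7 = 128 contiguous pages.
entries, order = 65536, 7
table_bytes = (1 << order) * PAGE_SIZE
assert table_bytes == 524288                       # matches the logged size
print(table_bytes // entries, "bytes per bucket")  # 8, i.e. one 64-bit pointer
```
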
Dec 13 14:30:01.021435 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:30:01.021443 kernel: ima: No architecture policies found
Dec 13 14:30:01.021451 kernel: clk: Disabling unused clocks
Dec 13 14:30:01.021461 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:30:01.021469 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:30:01.021477 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:30:01.021486 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:30:01.021493 kernel: Run /init as init process
Dec 13 14:30:01.021504 kernel: with arguments:
Dec 13 14:30:01.021513 kernel: /init
Dec 13 14:30:01.021522 kernel: with environment:
Dec 13 14:30:01.021530 kernel: HOME=/
Dec 13 14:30:01.021537 kernel: TERM=linux
Dec 13 14:30:01.021547 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:30:01.021556 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:30:01.021567 systemd[1]: Detected virtualization microsoft.
Dec 13 14:30:01.021578 systemd[1]: Detected architecture x86-64.
Dec 13 14:30:01.021585 systemd[1]: Running in initrd.
Dec 13 14:30:01.021596 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:30:01.021604 systemd[1]: Hostname set to .
Dec 13 14:30:01.021612 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:30:01.021623 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:30:01.021630 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:30:01.021641 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:30:01.021650 systemd[1]: Reached target paths.target.
Dec 13 14:30:01.021661 systemd[1]: Reached target slices.target.
Dec 13 14:30:01.021672 systemd[1]: Reached target swap.target.
Dec 13 14:30:01.021681 systemd[1]: Reached target timers.target.
Dec 13 14:30:01.021691 systemd[1]: Listening on iscsid.socket.
Dec 13 14:30:01.021700 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:30:01.021711 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:30:01.021718 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:30:01.021731 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:30:01.021739 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:30:01.021748 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:30:01.021757 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:30:01.021765 systemd[1]: Reached target sockets.target.
Dec 13 14:30:01.021776 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:30:01.021785 systemd[1]: Finished network-cleanup.service.
Dec 13 14:30:01.021794 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:30:01.021805 systemd[1]: Starting systemd-journald.service...
Dec 13 14:30:01.021817 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:30:01.021829 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:30:01.021838 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:30:01.021848 systemd[1]: Finished kmod-static-nodes.service.
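
systemd's `Detected virtualization microsoft` comes from its systemd-detect-virt logic, which combines CPUID and DMI probes. A simplified, DMI-only sketch that is sufficient for Hyper-V guests like this one (the vendor and product strings match the DMI line near the top of the log):

```python
# DMI-only virtualization check; real systemd-detect-virt also consults
# CPUID leaves and other sources, so treat this as an approximation.
def dmi(field: str) -> str:
    try:
        with open(f"/sys/class/dmi/id/{field}") as f:
            return f.read().strip()
    except OSError:
        return ""

vendor, product = dmi("sys_vendor"), dmi("product_name")
if vendor == "Microsoft Corporation" and product == "Virtual Machine":
    print("virtualization: microsoft (Hyper-V)")
else:
    print(f"vendor={vendor!r} product={product!r}")
```
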
Dec 13 14:30:01.021863 systemd-journald[183]: Journal started
Dec 13 14:30:01.021923 systemd-journald[183]: Runtime Journal (/run/log/journal/f4f183a234e341e0875150f46e0517af) is 8.0M, max 159.0M, 151.0M free.
Dec 13 14:30:01.004071 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 14:30:01.034027 kernel: audit: type=1130 audit(1734100201.021:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.038864 systemd[1]: Started systemd-journald.service.
Dec 13 14:30:01.044346 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:30:01.048147 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:30:01.066003 kernel: audit: type=1130 audit(1734100201.043:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.064967 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:30:01.072730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:30:01.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.091681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:30:01.100698 kernel: audit: type=1130 audit(1734100201.047:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.100975 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:30:01.104107 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:30:01.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.118940 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 14:30:01.138568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:30:01.138597 kernel: audit: type=1130 audit(1734100201.052:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.138615 dracut-cmdline[200]: dracut-dracut-053
Dec 13 14:30:01.118959 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:30:01.118994 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:30:01.130071 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 14:30:01.162643 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 14:30:01.165164 kernel: Bridge firewalling registered
Dec 13 14:30:01.165300 systemd[1]: Started systemd-resolved.service.
Dec 13 14:30:01.167473 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:30:01.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.186081 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:01.199622 kernel: audit: type=1130 audit(1734100201.098:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.223979 kernel: audit: type=1130 audit(1734100201.103:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.224033 kernel: SCSI subsystem initialized
Dec 13 14:30:01.224051 kernel: audit: type=1130 audit(1734100201.167:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.249969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:30:01.250033 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:30:01.251198 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:30:01.259280 systemd-modules-load[184]: Inserted module 'dm_multipath'
Dec 13 14:30:01.262066 systemd[1]: Finished systemd-modules-load.service.
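
The positive trust anchor logged by systemd-resolved is the DNS root zone's DS record, the starting point for DNSSEC validation. Its fields can be picked apart mechanically (field meanings per RFC 4034; key tag 20326 is the root KSK introduced in 2017):

```python
# Decode the DS record systemd-resolved logged above.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rrtype, key_tag, algorithm, digest_type, digest = ds.split()

assert owner == "."       # the DNS root zone
assert algorithm == "8"   # RSA/SHA-256 (RFC 4034 algorithm registry)
assert digest_type == "2" # SHA-256 digest of the matching DNSKEY
assert len(digest) == 64  # 32-byte digest, hex-encoded
print(f"trust anchor: key tag {key_tag} (the 2017 root KSK)")
```

The negative trust anchors that follow are the RFC 6303-style private and special-use zones for which DNSSEC validation is deliberately skipped.
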
Dec 13 14:30:01.283315 kernel: audit: type=1130 audit(1734100201.268:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.283386 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:30:01.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.284506 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:30:01.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.297820 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:30:01.314057 kernel: audit: type=1130 audit(1734100201.299:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.319934 kernel: iscsi: registered transport (tcp)
Dec 13 14:30:01.347347 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:30:01.347424 kernel: QLogic iSCSI HBA Driver
Dec 13 14:30:01.376745 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:30:01.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.382093 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:30:01.432927 kernel: raid6: avx512x4 gen() 18682 MB/s
Dec 13 14:30:01.452920 kernel: raid6: avx512x4 xor() 8557 MB/s
Dec 13 14:30:01.471923 kernel: raid6: avx512x2 gen() 18706 MB/s
Dec 13 14:30:01.491926 kernel: raid6: avx512x2 xor() 29941 MB/s
Dec 13 14:30:01.511918 kernel: raid6: avx512x1 gen() 18603 MB/s
Dec 13 14:30:01.531919 kernel: raid6: avx512x1 xor() 26830 MB/s
Dec 13 14:30:01.551922 kernel: raid6: avx2x4 gen() 18600 MB/s
Dec 13 14:30:01.571920 kernel: raid6: avx2x4 xor() 7695 MB/s
Dec 13 14:30:01.591918 kernel: raid6: avx2x2 gen() 18546 MB/s
Dec 13 14:30:01.612921 kernel: raid6: avx2x2 xor() 22117 MB/s
Dec 13 14:30:01.632916 kernel: raid6: avx2x1 gen() 14052 MB/s
Dec 13 14:30:01.652917 kernel: raid6: avx2x1 xor() 19472 MB/s
Dec 13 14:30:01.673920 kernel: raid6: sse2x4 gen() 11740 MB/s
Dec 13 14:30:01.693915 kernel: raid6: sse2x4 xor() 7280 MB/s
Dec 13 14:30:01.713916 kernel: raid6: sse2x2 gen() 12880 MB/s
Dec 13 14:30:01.733919 kernel: raid6: sse2x2 xor() 7502 MB/s
Dec 13 14:30:01.753923 kernel: raid6: sse2x1 gen() 11655 MB/s
Dec 13 14:30:01.777136 kernel: raid6: sse2x1 xor() 5912 MB/s
Dec 13 14:30:01.777153 kernel: raid6: using algorithm avx512x2 gen() 18706 MB/s
Dec 13 14:30:01.777165 kernel: raid6: .... xor() 29941 MB/s, rmw enabled
Dec 13 14:30:01.784048 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:30:01.799931 kernel: xor: automatically using best checksumming function avx
Dec 13 14:30:01.895935 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:30:01.904200 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:30:01.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.907000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:30:01.907000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:30:01.908480 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:30:01.929879 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Dec 13 14:30:01.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.936440 systemd[1]: Started systemd-udevd.service.
Dec 13 14:30:01.939591 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:30:01.959357 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Dec 13 14:30:01.987589 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:30:01.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:01.990929 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:30:02.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:02.027950 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:30:02.073932 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:30:02.084927 kernel: hv_vmbus: Vmbus version:5.2
Dec 13 14:30:02.125964 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 14:30:02.133930 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:30:02.133979 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:30:02.141922 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 14:30:02.141959 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 14:30:02.154942 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:30:02.154989 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 14:30:02.163433 kernel: scsi host1: storvsc_host_t
Dec 13 14:30:02.163623 kernel: scsi host0: storvsc_host_t
Dec 13 14:30:02.172938 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 14:30:02.178934 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 14:30:02.188960 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 14:30:02.197923 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 14:30:02.204926 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 14:30:02.218006 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 14:30:02.220241 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:30:02.220261 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 14:30:02.235864 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 14:30:02.255032 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 14:30:02.255226 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 14:30:02.255382 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 14:30:02.255551 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 14:30:02.255703 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:30:02.255721 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 14:30:02.321665 kernel: hv_netvsc 7c1e5234-1afe-7c1e-5234-1afe7c1e5234 eth0: VF slot 1 added
Dec 13 14:30:02.330925 kernel: hv_vmbus: registering driver hv_pci
Dec 13 14:30:02.343596 kernel: hv_pci f399b16b-348b-4ab6-9682-22ea5ece1297: PCI VMBus probing: Using version 0x10004
Dec 13 14:30:02.421072 kernel: hv_pci f399b16b-348b-4ab6-9682-22ea5ece1297: PCI host bridge to bus 348b:00
Dec 13 14:30:02.421247 kernel: pci_bus 348b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Dec 13 14:30:02.421422 kernel: pci_bus 348b:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 14:30:02.421570 kernel: pci 348b:00:02.0: [15b3:1016] type 00 class 0x020000
Dec 13 14:30:02.421740 kernel: pci 348b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 14:30:02.421899 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (438)
Dec 13 14:30:02.421932 kernel: pci 348b:00:02.0: enabling Extended Tags
Dec 13 14:30:02.422104 kernel: pci 348b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 348b:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:30:02.422267 kernel: pci_bus 348b:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 14:30:02.422409 kernel: pci 348b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 14:30:02.394223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:30:02.407607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:30:02.472602 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:30:02.483158 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:30:02.495720 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:30:02.501639 systemd[1]: Starting disk-uuid.service...
Dec 13 14:30:02.522936 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:30:02.541931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:30:02.555929 kernel: mlx5_core 348b:00:02.0: firmware version: 14.30.5000
Dec 13 14:30:02.817259 kernel: mlx5_core 348b:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 14:30:02.817399 kernel: mlx5_core 348b:00:02.0: Supported tc offload range - chains: 1, prios: 1
Dec 13 14:30:02.817511 kernel: mlx5_core 348b:00:02.0: mlx5e_tc_post_act_init:40:(pid 357): firmware level support is missing
Dec 13 14:30:02.817608 kernel: hv_netvsc 7c1e5234-1afe-7c1e-5234-1afe7c1e5234 eth0: VF registering: eth1
Dec 13 14:30:02.817702 kernel: mlx5_core 348b:00:02.0 eth1: joined to eth0
Dec 13 14:30:02.824925 kernel: mlx5_core 348b:00:02.0 enP13451s1: renamed from eth1
Dec 13 14:30:03.533931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:30:03.534286 disk-uuid[552]: The operation has completed successfully.
Dec 13 14:30:03.610372 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:30:03.610474 systemd[1]: Finished disk-uuid.service.
Dec 13 14:30:03.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.617645 systemd[1]: Starting verity-setup.service...
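
The netvsc/mlx5 sequence above is Azure accelerated networking: the synthetic hv_netvsc NIC (eth0) gets paired with an SR-IOV Mellanox VF that registers as eth1, joins eth0, and is renamed enP13451s1, and the pair shares one MAC address (7c:1e:52:34:1a:fe, visible in the repeated device ID). Grouping interfaces by MAC from sysfs makes such pairs visible, as in this sketch:

```python
import os

# On Azure, the synthetic hv_netvsc NIC and its SR-IOV VF share a MAC
# address ("VF registering: eth1" / "joined to eth0" above), so grouping
# interfaces by MAC exposes the synthetic/VF pairs.
by_mac: dict[str, list[str]] = {}
for ifname in os.listdir("/sys/class/net"):
    try:
        with open(f"/sys/class/net/{ifname}/address") as f:
            mac = f.read().strip()
    except OSError:
        continue
    by_mac.setdefault(mac, []).append(ifname)

for mac, ifaces in sorted(by_mac.items()):
    if len(ifaces) > 1:
        print(f"{mac}: {ifaces}")  # e.g. eth0 plus enP13451s1 on this VM
```
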
Dec 13 14:30:03.645043 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:30:03.735494 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:30:03.740125 systemd[1]: Finished verity-setup.service.
Dec 13 14:30:03.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.744813 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:30:03.824771 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:30:03.830348 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:30:03.826620 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:30:03.827429 systemd[1]: Starting ignition-setup.service...
Dec 13 14:30:03.833785 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:30:03.856260 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:30:03.856304 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:30:03.856317 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:30:03.891023 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:30:03.913163 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:30:03.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.918000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:30:03.919762 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:30:03.941762 systemd-networkd[807]: lo: Link UP
Dec 13 14:30:03.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.941771 systemd-networkd[807]: lo: Gained carrier
Dec 13 14:30:03.942472 systemd-networkd[807]: Enumeration completed
Dec 13 14:30:03.942995 systemd[1]: Started systemd-networkd.service.
Dec 13 14:30:03.946733 systemd[1]: Reached target network.target.
Dec 13 14:30:03.948512 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:30:03.949989 systemd[1]: Starting iscsiuio.service...
Dec 13 14:30:03.963175 systemd[1]: Finished ignition-setup.service.
Dec 13 14:30:03.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.970499 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:30:03.977988 systemd[1]: Started iscsiuio.service.
Dec 13 14:30:03.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:03.982650 systemd[1]: Starting iscsid.service...
Dec 13 14:30:03.988821 iscsid[814]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:30:03.988821 iscsid[814]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:30:03.988821 iscsid[814]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:30:03.988821 iscsid[814]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:30:04.010087 iscsid[814]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:30:04.010087 iscsid[814]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:30:04.017429 systemd[1]: Started iscsid.service.
Dec 13 14:30:04.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:04.020585 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:30:04.029938 kernel: mlx5_core 348b:00:02.0 enP13451s1: Link up
Dec 13 14:30:04.037967 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:30:04.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:04.040240 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:30:04.044026 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:30:04.046140 systemd[1]: Reached target remote-fs.target.
Dec 13 14:30:04.049964 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:30:04.063212 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:30:04.072876 kernel: hv_netvsc 7c1e5234-1afe-7c1e-5234-1afe7c1e5234 eth0: Data path switched to VF: enP13451s1
Dec 13 14:30:04.073060 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:30:04.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:04.071595 systemd-networkd[807]: enP13451s1: Link UP
Dec 13 14:30:04.071805 systemd-networkd[807]: eth0: Link UP
Dec 13 14:30:04.072233 systemd-networkd[807]: eth0: Gained carrier
Dec 13 14:30:04.079334 systemd-networkd[807]: enP13451s1: Gained carrier
Dec 13 14:30:04.110991 systemd-networkd[807]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 14:30:04.831488 ignition[812]: Ignition 2.14.0
Dec 13 14:30:04.831501 ignition[812]: Stage: fetch-offline
Dec 13 14:30:04.831586 ignition[812]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:30:04.831628 ignition[812]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:30:04.869554 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:30:04.869787 ignition[812]: parsed url from cmdline: ""
Dec 13 14:30:04.871096 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:30:04.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:04.869792 ignition[812]: no config URL provided
Dec 13 14:30:04.876040 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:30:04.869797 ignition[812]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:30:04.869806 ignition[812]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:30:04.869813 ignition[812]: failed to fetch config: resource requires networking
Dec 13 14:30:04.870197 ignition[812]: Ignition finished successfully
Dec 13 14:30:04.886212 ignition[833]: Ignition 2.14.0
Dec 13 14:30:04.886219 ignition[833]: Stage: fetch
Dec 13 14:30:04.886342 ignition[833]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:30:04.886369 ignition[833]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:30:04.893272 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:30:04.893460 ignition[833]: parsed url from cmdline: ""
Dec 13 14:30:04.893465 ignition[833]: no config URL provided
Dec 13 14:30:04.893474 ignition[833]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:30:04.893482 ignition[833]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:30:04.893515 ignition[833]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 14:30:04.997643 ignition[833]: GET result: OK
Dec 13 14:30:04.997790 ignition[833]: config has been read from IMDS userdata
Dec 13 14:30:04.997824 ignition[833]: parsing config with SHA512: fa31dafd98030750f1614adfe5221eae54fc6a5bbd26e70d76500e85c34a4edcebddf963c4c5d0208a3fcce9719fb6f2242cd4f5cc026851c9ddc4f584fb96ba
Dec 13 14:30:05.002528 unknown[833]: fetched base config from "system"
Dec 13 14:30:05.002544 unknown[833]: fetched base config from "system"
Dec 13 14:30:05.003298 ignition[833]: fetch: fetch complete
Dec 13 14:30:05.002555 unknown[833]: fetched user config from "azure"
Dec 13 14:30:05.003305 ignition[833]: fetch: fetch passed
Dec 13 14:30:05.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:05.008530 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:30:05.003351 ignition[833]: Ignition finished successfully
Dec 13 14:30:05.018205 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:30:05.030249 ignition[839]: Ignition 2.14.0
Dec 13 14:30:05.031538 ignition[839]: Stage: kargs
Dec 13 14:30:05.031652 ignition[839]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:30:05.031671 ignition[839]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:30:05.034527 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:30:05.036503 ignition[839]: kargs: kargs passed
Dec 13 14:30:05.036549 ignition[839]: Ignition finished successfully
Dec 13 14:30:05.042847 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:30:05.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:05.045683 systemd[1]: Starting ignition-disks.service...
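
The `GET http://169.254.169.254/metadata/instance/compute/userData...` above is Ignition polling the Azure Instance Metadata Service for its config. Reproducing the request by hand needs the `Metadata: true` header, which IMDS requires but which the log does not show; IMDS returns the user data base64-encoded. A sketch, runnable only from inside an Azure VM:

```python
import base64
import urllib.request

# The endpoint Ignition polls in the log above; IMDS rejects requests
# without the Metadata header, and userData comes back base64-encoded.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    user_data = base64.b64decode(resp.read())
print(user_data.decode("utf-8", "replace"))
```
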
Dec 13 14:30:05.055497 ignition[845]: Ignition 2.14.0
Dec 13 14:30:05.055507 ignition[845]: Stage: disks
Dec 13 14:30:05.055639 ignition[845]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:30:05.055673 ignition[845]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:30:05.059957 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:30:05.062503 ignition[845]: disks: disks passed
Dec 13 14:30:05.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:05.063379 systemd[1]: Finished ignition-disks.service.
Dec 13 14:30:05.062549 ignition[845]: Ignition finished successfully
Dec 13 14:30:05.067277 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:30:05.070963 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:30:05.072858 systemd[1]: Reached target local-fs.target.
Dec 13 14:30:05.074749 systemd[1]: Reached target sysinit.target.
Dec 13 14:30:05.078298 systemd[1]: Reached target basic.target.
Dec 13 14:30:05.080848 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:30:05.108924 systemd-fsck[853]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks
Dec 13 14:30:05.112275 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:30:05.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:05.117220 systemd[1]: Mounting sysroot.mount...
Dec 13 14:30:05.135955 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:30:05.136163 systemd[1]: Mounted sysroot.mount.
Dec 13 14:30:05.139502 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:30:05.151631 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:30:05.154159 systemd-networkd[807]: eth0: Gained IPv6LL
Dec 13 14:30:05.155075 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 14:30:05.157241 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:30:05.157280 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:30:05.173847 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:30:05.187743 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:30:05.190752 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:30:05.202162 initrd-setup-root[868]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:30:05.208547 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (863)
Dec 13 14:30:05.210972 initrd-setup-root[876]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:30:05.220218 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:30:05.220242 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:30:05.220252 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:30:05.224552 initrd-setup-root[884]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:30:05.230921 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:30:05.236061 systemd[1]: Mounted sysroot-usr-share-oem.mount.
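
The fsck summary `ROOT: clean, 621/7326000 files, 481077/7359488 blocks` reports used/total inodes and used/total filesystem blocks. Assuming the ext4 default 4 KiB block size (an assumption; the log does not state it), the numbers translate to sizes like so:

```python
# Figures copied from the systemd-fsck line above.
files_used, files_total = 621, 7_326_000
blocks_used, blocks_total = 481_077, 7_359_488
block_size = 4096  # assumption: ext4 default block size

print(f"inodes: {100 * files_used / files_total:.3f}% used")
print(f"space : {blocks_used * block_size / 2**30:.2f} GiB of "
      f"{blocks_total * block_size / 2**30:.2f} GiB")
# Roughly 1.8 GiB used of a ~28 GiB ROOT partition on the 30.4 GiB disk.
```
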
Dec 13 14:30:05.370819 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:30:05.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.382614 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:30:05.382657 kernel: audit: type=1130 audit(1734100205.374:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.377807 systemd[1]: Starting ignition-mount.service... Dec 13 14:30:05.400920 systemd[1]: Starting sysroot-boot.service... Dec 13 14:30:05.406688 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:30:05.409223 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:30:05.425840 ignition[930]: INFO : Ignition 2.14.0 Dec 13 14:30:05.425840 ignition[930]: INFO : Stage: mount Dec 13 14:30:05.429328 ignition[930]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:05.429328 ignition[930]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:05.442015 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:05.444943 systemd[1]: Finished sysroot-boot.service. Dec 13 14:30:05.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.453547 ignition[930]: INFO : mount: mount passed Dec 13 14:30:05.453547 ignition[930]: INFO : Ignition finished successfully Dec 13 14:30:05.478030 kernel: audit: type=1130 audit(1734100205.447:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.478061 kernel: audit: type=1130 audit(1734100205.463:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.454248 systemd[1]: Finished ignition-mount.service. Dec 13 14:30:05.585169 coreos-metadata[862]: Dec 13 14:30:05.585 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:30:05.592082 coreos-metadata[862]: Dec 13 14:30:05.592 INFO Fetch successful Dec 13 14:30:05.627195 coreos-metadata[862]: Dec 13 14:30:05.627 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:30:05.645002 coreos-metadata[862]: Dec 13 14:30:05.644 INFO Fetch successful Dec 13 14:30:05.650705 coreos-metadata[862]: Dec 13 14:30:05.650 INFO wrote hostname ci-3510.3.6-a-34fc77c933 to /sysroot/etc/hostname Dec 13 14:30:05.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:05.652438 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:30:05.657581 systemd[1]: Starting ignition-files.service... Dec 13 14:30:05.672597 kernel: audit: type=1130 audit(1734100205.656:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:05.677567 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:30:05.696459 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (941) Dec 13 14:30:05.696507 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:05.696522 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:30:05.703256 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:30:05.707751 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:30:05.721328 ignition[960]: INFO : Ignition 2.14.0 Dec 13 14:30:05.721328 ignition[960]: INFO : Stage: files Dec 13 14:30:05.724799 ignition[960]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:05.724799 ignition[960]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:05.737378 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:05.742582 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:30:05.745526 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:30:05.745526 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:30:05.754657 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:30:05.758271 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:30:05.761429 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:30:05.758738 unknown[960]: wrote ssh authorized keys file for user: core Dec 13 14:30:05.767278 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:30:05.767278 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:30:05.894155 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:30:06.023151 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:30:06.028766 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:30:06.033447 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:30:06.585756 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:30:06.734668 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:30:06.739148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[started] writing file "/sysroot/home/core/install.sh" Dec 13 14:30:06.743439 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:30:06.743439 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:30:06.752578 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:30:06.757001 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:30:06.809722 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (965) Dec 13 14:30:06.777500 systemd[1]: mnt-oem1089641336.mount: Deactivated successfully. 
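The files stage fetches its payloads, helm and the cilium CLI above and the kubernetes sysext image further down, with numbered attempts ("GET ...: attempt #1" followed by "GET result: OK"). A sketch of that retry loop; the log only shows that attempts are counted, so the backoff schedule here is an assumption.

```python
# Sketch of the numbered-attempt fetch the files stage logs above.
# The exponential backoff is illustrative; Ignition's actual retry
# spacing is not visible in the log.
import time
import urllib.request

def fetch_with_attempts(url: str, max_attempts: int = 5) -> bytes:
    for attempt in range(1, max_attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                print("GET result: OK")
                return resp.read()
        except OSError:
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # assumed backoff
    raise RuntimeError("unreachable")

# e.g. fetch_with_attempts("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz")
```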
Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1089641336" Dec 13 14:30:06.812204 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1089641336": device or resource busy Dec 13 14:30:06.812204 ignition[960]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1089641336", trying btrfs: device or resource busy Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1089641336" Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1089641336" Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1089641336" Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1089641336" Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1337558240" Dec 13 14:30:06.812204 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1337558240": device or resource busy Dec 13 14:30:06.812204 ignition[960]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1337558240", trying btrfs: device or resource busy Dec 13 14:30:06.812204 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1337558240" Dec 13 14:30:06.798392 systemd[1]: mnt-oem1337558240.mount: Deactivated successfully. 
Dec 13 14:30:06.884292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1337558240" Dec 13 14:30:06.884292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1337558240" Dec 13 14:30:06.884292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1337558240" Dec 13 14:30:06.884292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:30:06.884292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:30:06.884292 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 14:30:07.140519 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Dec 13 14:30:07.513133 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:30:07.513133 ignition[960]: INFO : files: op(14): [started] processing unit "waagent.service" Dec 13 14:30:07.513133 ignition[960]: INFO : files: op(14): [finished] processing unit "waagent.service" Dec 13 14:30:07.513133 ignition[960]: INFO : files: op(15): [started] processing unit "nvidia.service" Dec 13 14:30:07.513133 ignition[960]: INFO : files: op(15): [finished] processing unit "nvidia.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:30:07.531285 ignition[960]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:30:07.531285 ignition[960]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:30:07.575609 ignition[960]: INFO : files: files passed Dec 13 14:30:07.575609 ignition[960]: INFO : Ignition finished successfully Dec 13 14:30:07.580511 systemd[1]: Finished ignition-files.service. 
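Two files-stage mechanisms from the records above, sketched together: the ext4-then-btrfs fallback Ignition uses when mounting /dev/disk/by-label/OEM (ops (b) and (f), where the ext4 attempt fails with "device or resource busy" and the btrfs retry succeeds), and the "setting preset to enabled" step, which amounts to persisting a systemd preset so waagent, nvidia, and prepare-helm come up enabled on first boot. Only the device label, mountpoint pattern, and unit names come from the log; the mount invocation details and the preset filename are assumptions.

```python
# Sketch of two files-stage operations above. Paths, device label, and
# unit names are from the log; the preset filename and exact mount
# arguments are assumptions.
import subprocess
import tempfile
from pathlib import Path

def mount_oem(device: str = "/dev/disk/by-label/OEM") -> str:
    # ops (c)/(d): try ext4 first, fall back to btrfs on failure,
    # mirroring "failed to mount ext4 device ..., trying btrfs".
    mountpoint = tempfile.mkdtemp(prefix="oem", dir="/mnt")
    for fstype in ("ext4", "btrfs"):
        ok = subprocess.run(["mount", "-t", fstype, device, mountpoint],
                            capture_output=True).returncode == 0
        if ok:
            return mountpoint  # caller unmounts afterwards, as in op (e)
        print(f'failed to mount {fstype} device "{device}"')
    raise RuntimeError(f"could not mount {device}")

def enable_units(root: str = "/sysroot") -> None:
    # ops (18)-(1a): write "enable <unit>" preset lines so the units start
    # on first boot (the preset filename is assumed, not shown in the log).
    preset = Path(root, "etc/systemd/system-preset/20-ignition.preset")
    preset.parent.mkdir(parents=True, exist_ok=True)
    preset.write_text("enable waagent.service\n"
                      "enable nvidia.service\n"
                      "enable prepare-helm.service\n")
```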
Dec 13 14:30:07.604255 kernel: audit: type=1130 audit(1734100207.582:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.585096 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:30:07.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.598759 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:30:07.652545 kernel: audit: type=1130 audit(1734100207.606:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.652579 kernel: audit: type=1131 audit(1734100207.606:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.652601 kernel: audit: type=1130 audit(1734100207.620:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.652826 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:30:07.599885 systemd[1]: Starting ignition-quench.service... Dec 13 14:30:07.603609 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:30:07.603705 systemd[1]: Finished ignition-quench.service. Dec 13 14:30:07.607044 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:30:07.621201 systemd[1]: Reached target ignition-complete.target. Dec 13 14:30:07.645764 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:30:07.668851 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:30:07.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.668956 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:30:07.701032 kernel: audit: type=1130 audit(1734100207.672:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.701064 kernel: audit: type=1131 audit(1734100207.672:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:30:07.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.672358 systemd[1]: Reached target initrd-fs.target. Dec 13 14:30:07.686081 systemd[1]: Reached target initrd.target. Dec 13 14:30:07.701103 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:30:07.713347 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:30:07.724110 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:30:07.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.729171 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:30:07.739165 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:30:07.741340 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:30:07.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.745183 systemd[1]: Stopped target timers.target. Dec 13 14:30:07.747149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:30:07.747277 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:30:07.747822 systemd[1]: Stopped target initrd.target. Dec 13 14:30:07.760468 systemd[1]: Stopped target basic.target. Dec 13 14:30:07.764050 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:30:07.768195 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:30:07.772558 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:30:07.776744 systemd[1]: Stopped target remote-fs.target. Dec 13 14:30:07.780618 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:30:07.784631 systemd[1]: Stopped target sysinit.target. Dec 13 14:30:07.788731 systemd[1]: Stopped target local-fs.target. Dec 13 14:30:07.792792 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:30:07.796740 systemd[1]: Stopped target swap.target. Dec 13 14:30:07.800308 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:30:07.802926 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:30:07.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.806867 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:30:07.810739 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:30:07.810935 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:30:07.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.818841 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:30:07.821531 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:30:07.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:07.826376 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:30:07.828674 systemd[1]: Stopped ignition-files.service. Dec 13 14:30:07.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.832616 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:30:07.835181 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:30:07.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.840531 systemd[1]: Stopping ignition-mount.service... Dec 13 14:30:07.842746 systemd[1]: Stopping iscsiuio.service... Dec 13 14:30:07.844421 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:30:07.856792 ignition[998]: INFO : Ignition 2.14.0 Dec 13 14:30:07.856792 ignition[998]: INFO : Stage: umount Dec 13 14:30:07.856792 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:07.856792 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:07.844574 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:30:07.875405 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:07.875405 ignition[998]: INFO : umount: umount passed Dec 13 14:30:07.875405 ignition[998]: INFO : Ignition finished successfully Dec 13 14:30:07.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.849864 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:30:07.866012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:30:07.866162 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:30:07.875710 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:30:07.875857 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:30:07.897258 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:30:07.899947 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:30:07.901966 systemd[1]: Stopped iscsiuio.service. Dec 13 14:30:07.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.905826 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:30:07.908033 systemd[1]: Stopped ignition-mount.service. 
Dec 13 14:30:07.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.912248 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:30:07.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.912366 systemd[1]: Stopped ignition-disks.service. Dec 13 14:30:07.916515 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:30:07.916565 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:30:07.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.924186 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:30:07.924236 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:30:07.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.932221 systemd[1]: Stopped target network.target. Dec 13 14:30:07.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.934059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:30:07.934112 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:30:07.937837 systemd[1]: Stopped target paths.target. Dec 13 14:30:07.939522 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:30:07.944949 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:30:07.947052 systemd[1]: Stopped target slices.target. Dec 13 14:30:07.948879 systemd[1]: Stopped target sockets.target. Dec 13 14:30:07.952503 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:30:07.952539 systemd[1]: Closed iscsid.socket. Dec 13 14:30:07.960563 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:30:07.960619 systemd[1]: Closed iscsiuio.socket. Dec 13 14:30:07.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.965951 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:30:07.967383 systemd[1]: Stopped ignition-setup.service. Dec 13 14:30:07.975094 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:30:07.980185 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:30:07.983925 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:30:07.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.984023 systemd[1]: Finished initrd-cleanup.service. 
Dec 13 14:30:07.988376 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:30:07.990156 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:30:07.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:07.997962 systemd-networkd[807]: eth0: DHCPv6 lease lost Dec 13 14:30:07.999000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:30:08.000283 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:30:08.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.000384 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:30:08.004580 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:30:08.008000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:30:08.004613 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:30:08.009826 systemd[1]: Stopping network-cleanup.service... Dec 13 14:30:08.014838 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:30:08.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.014898 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:30:08.019141 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:30:08.019184 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:30:08.023585 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:30:08.023633 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:30:08.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.030654 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:30:08.034504 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:30:08.039616 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:30:08.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.039750 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:30:08.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.045058 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:30:08.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:08.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.045104 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:30:08.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.047229 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:30:08.047265 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:30:08.052048 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:30:08.052096 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:30:08.056511 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:30:08.089726 kernel: hv_netvsc 7c1e5234-1afe-7c1e-5234-1afe7c1e5234 eth0: Data path switched from VF: enP13451s1 Dec 13 14:30:08.056562 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:30:08.060016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:30:08.060062 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:30:08.062918 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:30:08.063030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:30:08.063072 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:30:08.070241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:30:08.070342 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:30:08.109488 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:30:08.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:08.109606 systemd[1]: Stopped network-cleanup.service. Dec 13 14:30:09.147667 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:30:09.147832 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:30:09.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.152726 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:30:09.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.156245 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:30:09.156305 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:30:09.159230 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:30:09.173227 systemd[1]: Switching root. Dec 13 14:30:09.200878 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 14:30:09.200954 iscsid[814]: iscsid shutting down. Dec 13 14:30:09.203364 systemd-journald[183]: Journal stopped Dec 13 14:30:20.560013 kernel: SELinux: Class mctp_socket not defined in policy. 
Dec 13 14:30:20.560055 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:30:20.560073 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:30:20.560086 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:30:20.560098 kernel: SELinux: policy capability open_perms=1 Dec 13 14:30:20.560111 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:30:20.560126 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:30:20.560143 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:30:20.560158 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:30:20.560172 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:30:20.560185 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:30:20.560199 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 13 14:30:20.560213 kernel: audit: type=1403 audit(1734100210.717:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:30:20.560229 systemd[1]: Successfully loaded SELinux policy in 214.467ms. Dec 13 14:30:20.560251 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.926ms. Dec 13 14:30:20.560267 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:30:20.560283 systemd[1]: Detected virtualization microsoft. Dec 13 14:30:20.560297 systemd[1]: Detected architecture x86-64. Dec 13 14:30:20.560310 systemd[1]: Detected first boot. Dec 13 14:30:20.560329 systemd[1]: Hostname set to . Dec 13 14:30:20.560344 systemd[1]: Initializing machine ID from random generator. Dec 13 14:30:20.560359 kernel: audit: type=1400 audit(1734100211.148:81): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:30:20.560375 kernel: audit: type=1400 audit(1734100211.163:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:30:20.560390 kernel: audit: type=1400 audit(1734100211.163:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:30:20.560404 kernel: audit: type=1334 audit(1734100211.174:84): prog-id=10 op=LOAD Dec 13 14:30:20.560420 kernel: audit: type=1334 audit(1734100211.174:85): prog-id=10 op=UNLOAD Dec 13 14:30:20.560435 kernel: audit: type=1334 audit(1734100211.188:86): prog-id=11 op=LOAD Dec 13 14:30:20.560449 kernel: audit: type=1334 audit(1734100211.188:87): prog-id=11 op=UNLOAD Dec 13 14:30:20.560463 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
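"Initializing machine ID from random generator" above is the first-boot path where /etc/machine-id is populated with a random 128-bit identifier, written as 32 lowercase hex characters. A sketch of producing an ID in that format (systemd derives it through its own UUID handling, so this is a format illustration, not its exact code path):

```python
# Sketch of a first-boot machine ID like the one initialized above:
# 128 random bits rendered as 32 lowercase hex characters.
import secrets

def new_machine_id() -> str:
    return secrets.token_hex(16)

# The journal directory /run/log/journal/e69bb42628874e84bdd68e94261f41c0
# later in this log is named after exactly such an ID.
```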
Dec 13 14:30:20.560478 kernel: audit: type=1400 audit(1734100212.664:88): avc: denied { associate } for pid=1032 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:30:20.560493 kernel: audit: type=1300 audit(1734100212.664:88): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.560507 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:30:20.560526 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:20.560544 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:20.560560 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:30:20.560575 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:30:20.560589 kernel: audit: type=1334 audit(1734100219.961:90): prog-id=12 op=LOAD Dec 13 14:30:20.560602 kernel: audit: type=1334 audit(1734100219.961:91): prog-id=3 op=UNLOAD Dec 13 14:30:20.560616 kernel: audit: type=1334 audit(1734100219.966:92): prog-id=13 op=LOAD Dec 13 14:30:20.560634 kernel: audit: type=1334 audit(1734100219.975:93): prog-id=14 op=LOAD Dec 13 14:30:20.560649 kernel: audit: type=1334 audit(1734100219.975:94): prog-id=4 op=UNLOAD Dec 13 14:30:20.560666 kernel: audit: type=1334 audit(1734100219.975:95): prog-id=5 op=UNLOAD Dec 13 14:30:20.560681 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:30:20.560697 kernel: audit: type=1131 audit(1734100219.979:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.560712 systemd[1]: Stopped iscsid.service. Dec 13 14:30:20.560729 kernel: audit: type=1131 audit(1734100220.016:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.560745 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:30:20.560763 kernel: audit: type=1334 audit(1734100220.016:98): prog-id=12 op=UNLOAD Dec 13 14:30:20.560778 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:30:20.560794 kernel: audit: type=1130 audit(1734100220.043:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.560810 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:30:20.560826 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:30:20.560842 systemd[1]: Created slice system-addon\x2drun.slice. 
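The locksmithd warnings above ("Unit uses CPUShares= ... please use CPUWeight= instead", likewise MemoryLimit= vs MemoryMax=) flag legacy cgroup-v1 directives. One way to quiet them without touching the vendor unit is a standard systemd drop-in, sketched below; the drop-in filename and the chosen weight are assumptions, and an empty assignment is used to reset the legacy setting.

```python
# Sketch: override locksmithd's legacy resource directives with their
# cgroup-v2 equivalents via a drop-in (path is the standard override
# location; filename and values are assumptions).
from pathlib import Path

DROPIN = Path("/etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf")

def write_dropin() -> None:
    DROPIN.parent.mkdir(parents=True, exist_ok=True)
    DROPIN.write_text(
        "[Service]\n"
        "CPUShares=\n"       # empty assignment resets the legacy setting
        "CPUWeight=100\n"    # assumed; choose to match the old shares value
        "MemoryLimit=\n"
        "MemoryMax=infinity\n"
    )
```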
Dec 13 14:30:20.560858 systemd[1]: Created slice system-getty.slice. Dec 13 14:30:20.560876 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:30:20.560895 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:30:20.561898 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:30:20.561936 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:30:20.561950 systemd[1]: Created slice user.slice. Dec 13 14:30:20.561961 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:30:20.561973 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:30:20.561986 systemd[1]: Set up automount boot.automount. Dec 13 14:30:20.561997 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:30:20.562009 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:30:20.562026 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:30:20.562036 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:30:20.562050 systemd[1]: Reached target integritysetup.target. Dec 13 14:30:20.562063 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:30:20.562073 systemd[1]: Reached target remote-fs.target. Dec 13 14:30:20.562085 systemd[1]: Reached target slices.target. Dec 13 14:30:20.562098 systemd[1]: Reached target swap.target. Dec 13 14:30:20.562108 systemd[1]: Reached target torcx.target. Dec 13 14:30:20.562123 systemd[1]: Reached target veritysetup.target. Dec 13 14:30:20.562136 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:30:20.562147 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:30:20.562160 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:30:20.562176 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:30:20.562187 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:30:20.562198 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:30:20.562211 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:30:20.562222 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:30:20.562234 systemd[1]: Mounting media.mount... Dec 13 14:30:20.562249 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:30:20.562260 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:30:20.562273 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:30:20.562287 systemd[1]: Mounting tmp.mount... Dec 13 14:30:20.562300 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:30:20.562312 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:30:20.562323 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:30:20.562335 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:30:20.562347 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:30:20.562359 systemd[1]: Starting modprobe@drm.service... Dec 13 14:30:20.562369 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:30:20.562381 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:30:20.562396 systemd[1]: Starting modprobe@loop.service... Dec 13 14:30:20.562408 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:30:20.562421 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:30:20.562432 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:30:20.562444 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Dec 13 14:30:20.562455 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:30:20.562468 systemd[1]: Stopped systemd-journald.service. Dec 13 14:30:20.562478 systemd[1]: Starting systemd-journald.service... Dec 13 14:30:20.562490 kernel: loop: module loaded Dec 13 14:30:20.562505 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:30:20.562516 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:30:20.562528 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:30:20.562541 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:30:20.562551 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:30:20.562563 systemd[1]: Stopped verity-setup.service. Dec 13 14:30:20.562577 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:30:20.562587 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:30:20.562599 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:30:20.562614 systemd[1]: Mounted media.mount. Dec 13 14:30:20.562623 kernel: fuse: init (API version 7.34) Dec 13 14:30:20.562635 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:30:20.562648 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:30:20.562658 systemd[1]: Mounted tmp.mount. Dec 13 14:30:20.562671 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:30:20.562691 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:30:20.562701 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:30:20.562714 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:30:20.562732 systemd-journald[1141]: Journal started Dec 13 14:30:20.562786 systemd-journald[1141]: Runtime Journal (/run/log/journal/e69bb42628874e84bdd68e94261f41c0) is 8.0M, max 159.0M, 151.0M free. 
Dec 13 14:30:10.717000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:30:11.148000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:30:11.163000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:30:11.163000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:30:11.174000 audit: BPF prog-id=10 op=LOAD Dec 13 14:30:11.174000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:30:11.188000 audit: BPF prog-id=11 op=LOAD Dec 13 14:30:11.188000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:30:12.664000 audit[1032]: AVC avc: denied { associate } for pid=1032 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:30:12.664000 audit[1032]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:12.664000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:30:12.672000 audit[1032]: AVC avc: denied { associate } for pid=1032 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:30:12.672000 audit[1032]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:12.672000 audit: CWD cwd="/" Dec 13 14:30:12.672000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:12.672000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:12.672000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:30:19.961000 audit: BPF prog-id=12 op=LOAD Dec 13 14:30:19.961000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:30:19.966000 audit: BPF prog-id=13 op=LOAD Dec 13 14:30:19.975000 audit: BPF prog-id=14 
op=LOAD Dec 13 14:30:19.975000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:30:19.975000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:30:19.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.016000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:30:20.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.381000 audit: BPF prog-id=15 op=LOAD Dec 13 14:30:20.381000 audit: BPF prog-id=16 op=LOAD Dec 13 14:30:20.381000 audit: BPF prog-id=17 op=LOAD Dec 13 14:30:20.381000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:30:20.381000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:30:20.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 13 14:30:20.555000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:30:20.555000 audit[1141]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc45677ec0 a2=4000 a3=7ffc45677f5c items=0 ppid=1 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:20.555000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:30:19.960555 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:30:12.654393 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:30:19.976820 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:30:12.654841 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:30:12.654862 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:30:12.654901 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:30:12.654936 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:30:12.654991 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:30:12.655007 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:30:12.655233 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:30:12.655287 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:30:12.655304 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:30:12.659665 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:30:12.659706 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:30:12.659727 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:30:12.659759 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:30:12.659787 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:30:12.659803 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:30:19.326083 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:30:19.326557 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:30:19.326663 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:19Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:30:19.326826 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:30:19.326874 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:30:19.326943 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T14:30:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:30:20.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.574221 systemd[1]: Started systemd-journald.service.
Dec 13 14:30:20.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.574790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:20.575003 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:20.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.577362 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:30:20.577503 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:30:20.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.579634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:20.579769 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:20.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.582524 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:30:20.582662 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:30:20.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.584938 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:20.585113 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:20.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.587228 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:30:20.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.589552 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:30:20.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.592244 systemd[1]: Reached target network-pre.target.
Dec 13 14:30:20.595270 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:30:20.598976 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:30:20.604962 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:30:20.653048 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:30:20.656489 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:30:20.658459 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:20.659634 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:30:20.662258 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:30:20.668148 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:30:20.673749 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:30:20.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.676355 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:30:20.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.678640 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:30:20.680868 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:30:20.684304 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:30:20.687955 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:30:20.698984 udevadm[1155]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:30:20.706548 systemd-journald[1141]: Time spent on flushing to /var/log/journal/e69bb42628874e84bdd68e94261f41c0 is 21.778ms for 1155 entries.
Dec 13 14:30:20.706548 systemd-journald[1141]: System Journal (/var/log/journal/e69bb42628874e84bdd68e94261f41c0) is 8.0M, max 2.6G, 2.6G free.
Dec 13 14:30:21.104167 systemd-journald[1141]: Received client request to flush runtime journal.
Dec 13 14:30:20.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:20.755996 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:30:20.758649 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:30:20.802793 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:30:21.105479 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:30:21.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:21.455468 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:30:21.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:21.935744 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:30:21.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:21.938000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:30:21.938000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:30:21.938000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:30:21.938000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:30:21.939893 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:30:21.957712 systemd-udevd[1158]: Using default interface naming scheme 'v252'.
Dec 13 14:30:22.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:22.030000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:30:22.027003 systemd[1]: Started systemd-udevd.service.
Dec 13 14:30:22.032136 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:30:22.061000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:30:22.061000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:30:22.061000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:30:22.063561 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:30:22.097376 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:30:22.114691 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:30:22.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:22.184000 audit[1175]: AVC avc: denied { confidentiality } for pid=1175 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:30:22.196289 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:30:22.213220 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 14:30:22.217934 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 14:30:22.238006 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 14:30:22.238114 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 14:30:22.244803 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 14:30:22.253304 kernel: Console: switching to colour dummy device 80x25
Dec 13 14:30:22.263086 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:30:22.271392 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 14:30:22.271671 kernel: hv_vmbus: registering driver hv_utils
Dec 13 14:30:22.184000 audit[1175]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56432febf750 a1=f884 a2=7f6791b20bc5 a3=5 items=12 ppid=1158 pid=1175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:22.184000 audit: CWD cwd="/"
Dec 13 14:30:22.184000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=1 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=2 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=3 name=(null) inode=15918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=4 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=5 name=(null) inode=15919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=6 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=7 name=(null) inode=15920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=8 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=9 name=(null) inode=15921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=10 name=(null) inode=15917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PATH item=11 name=(null) inode=15922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:22.184000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:30:22.294291 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 14:30:22.294371 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 14:30:22.294403 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 14:30:22.770091 systemd-networkd[1171]: lo: Link UP
Dec 13 14:30:22.770391 systemd-networkd[1171]: lo: Gained carrier
Dec 13 14:30:22.771123 systemd-networkd[1171]: Enumeration completed
Dec 13 14:30:22.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:22.771340 systemd[1]: Started systemd-networkd.service.
Dec 13 14:30:22.775147 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:30:22.785089 systemd-networkd[1171]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:30:22.837698 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1161)
Dec 13 14:30:22.841700 kernel: mlx5_core 348b:00:02.0 enP13451s1: Link up
Dec 13 14:30:22.866738 kernel: hv_netvsc 7c1e5234-1afe-7c1e-5234-1afe7c1e5234 eth0: Data path switched to VF: enP13451s1
Dec 13 14:30:22.867714 systemd-networkd[1171]: enP13451s1: Link UP
Dec 13 14:30:22.867982 systemd-networkd[1171]: eth0: Link UP
Dec 13 14:30:22.868081 systemd-networkd[1171]: eth0: Gained carrier
Dec 13 14:30:22.872987 systemd-networkd[1171]: enP13451s1: Gained carrier
Dec 13 14:30:22.896491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:30:22.912796 systemd-networkd[1171]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 14:30:22.969745 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Dec 13 14:30:23.007021 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:30:23.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:23.010824 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:30:23.093973 lvm[1235]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:30:23.122720 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:30:23.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:23.125201 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:30:23.128561 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:30:23.133136 lvm[1236]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:30:23.158730 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:30:23.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:23.161541 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:30:23.163743 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:30:23.163778 systemd[1]: Reached target local-fs.target.
Dec 13 14:30:23.165936 systemd[1]: Reached target machines.target.
Dec 13 14:30:23.169088 systemd[1]: Starting ldconfig.service...
Dec 13 14:30:23.173193 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:23.173286 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:23.174556 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:30:23.177553 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:30:23.181511 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:30:23.184749 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:30:23.200094 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1238 (bootctl)
Dec 13 14:30:23.201551 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:30:23.213464 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:30:23.216311 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:30:23.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:23.229029 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:30:23.229247 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:30:23.277677 kernel: loop0: detected capacity change from 0 to 210664
Dec 13 14:30:24.088689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:30:24.105681 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 14:30:24.119927 (sd-sysext)[1250]: Using extensions 'kubernetes'.
Dec 13 14:30:24.121838 (sd-sysext)[1250]: Merged extensions into '/usr'.
Dec 13 14:30:24.137936 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.139546 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:30:24.141864 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.145642 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:24.148711 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:24.152169 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:30:24.154230 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.154423 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:24.154601 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.157617 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:30:24.160269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:24.160427 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:24.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.163166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:24.163320 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:24.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.168130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:24.168275 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:24.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.172071 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:30:24.172946 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:30:24.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.175978 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:24.176134 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.177545 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:30:24.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.181302 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:30:24.184594 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:30:24.187563 systemd-networkd[1171]: eth0: Gained IPv6LL
Dec 13 14:30:24.193147 systemd[1]: Reloading.
Dec 13 14:30:24.218792 systemd-fsck[1246]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:30:24.218792 systemd-fsck[1246]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 14:30:24.252651 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:30:24.286690 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:30:24.291291 /usr/lib/systemd/system-generators/torcx-generator[1279]: time="2024-12-13T14:30:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:30:24.293975 /usr/lib/systemd/system-generators/torcx-generator[1279]: time="2024-12-13T14:30:24Z" level=info msg="torcx already run"
Dec 13 14:30:24.317334 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:30:24.376298 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:30:24.376318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:30:24.394462 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:30:24.475000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:30:24.475000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:30:24.476000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:30:24.476000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:30:24.476000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:30:24.476000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:30:24.477000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:30:24.477000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:30:24.478000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:30:24.478000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:30:24.478000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:30:24.478000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:30:24.479000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:30:24.479000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:30:24.479000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:30:24.479000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:30:24.479000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:30:24.479000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:30:24.484000 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:30:24.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.488115 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:30:24.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.499421 systemd[1]: Mounting boot.mount...
Dec 13 14:30:24.506147 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.506863 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.508845 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:24.512783 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:24.516078 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:30:24.518219 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.518553 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:24.519321 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.522537 systemd[1]: Mounted boot.mount.
Dec 13 14:30:24.525598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:24.525798 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:24.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.529221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:24.529369 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:24.532044 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:24.532170 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:24.534812 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:24.534947 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.537486 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.537897 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.540098 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:24.545040 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:24.548271 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:30:24.550274 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.550434 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:24.550592 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.552174 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:30:24.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.555879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:24.556039 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:24.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.558831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:24.558973 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:24.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.561988 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:24.562145 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:24.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.569273 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.569954 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.572549 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:24.577052 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:30:24.580476 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:24.584744 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:30:24.586710 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.586875 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:24.587091 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:24.588818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:24.589009 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:24.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.592264 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:30:24.592428 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:30:24.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.595300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:24.595465 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:24.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.598430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:24.598592 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:24.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.602784 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:30:24.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.606526 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:24.606585 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:30:24.645836 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:30:24.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.649534 systemd[1]: Starting audit-rules.service...
Dec 13 14:30:24.653028 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:30:24.656726 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:30:24.660000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:30:24.663174 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:30:24.666000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:30:24.668222 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:30:24.675369 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:30:24.688000 audit[1362]: SYSTEM_BOOT pid=1362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.694133 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:30:24.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.696678 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:30:24.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.698868 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:30:24.743099 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:30:24.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.780868 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:30:24.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:24.783172 systemd[1]: Reached target time-set.target.
Dec 13 14:30:24.797000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:30:24.797000 audit[1375]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1d874410 a2=420 a3=0 items=0 ppid=1354 pid=1375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:24.797000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:30:24.798478 augenrules[1375]: No rules
Dec 13 14:30:24.798543 systemd[1]: Finished audit-rules.service.
Dec 13 14:30:24.804425 systemd-resolved[1358]: Positive Trust Anchors:
Dec 13 14:30:24.804438 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:30:24.804478 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:30:24.824992 systemd-resolved[1358]: Using system hostname 'ci-3510.3.6-a-34fc77c933'.
Dec 13 14:30:24.826409 systemd[1]: Started systemd-resolved.service.
Dec 13 14:30:24.832054 systemd[1]: Reached target network.target.
Dec 13 14:30:24.834129 systemd[1]: Reached target network-online.target.
Dec 13 14:30:24.836334 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:30:24.846798 systemd-timesyncd[1359]: Contacted time server 193.1.12.167:123 (0.flatcar.pool.ntp.org).
Dec 13 14:30:24.846861 systemd-timesyncd[1359]: Initial clock synchronization to Fri 2024-12-13 14:30:24.847305 UTC.
Dec 13 14:30:25.874596 ldconfig[1237]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:30:25.890851 systemd[1]: Finished ldconfig.service.
Dec 13 14:30:25.894475 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:30:25.901152 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:30:25.903304 systemd[1]: Reached target sysinit.target.
Dec 13 14:30:25.905347 systemd[1]: Started motdgen.path.
Dec 13 14:30:25.907102 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:30:25.909847 systemd[1]: Started logrotate.timer.
Dec 13 14:30:25.911684 systemd[1]: Started mdadm.timer.
Dec 13 14:30:25.913322 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:30:25.915364 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:30:25.915400 systemd[1]: Reached target paths.target.
Dec 13 14:30:25.922505 systemd[1]: Reached target timers.target.
Dec 13 14:30:25.924575 systemd[1]: Listening on dbus.socket.
Dec 13 14:30:25.927232 systemd[1]: Starting docker.socket...
Dec 13 14:30:25.934965 systemd[1]: Listening on sshd.socket.
Dec 13 14:30:25.937289 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:25.937814 systemd[1]: Listening on docker.socket.
Dec 13 14:30:25.939709 systemd[1]: Reached target sockets.target.
Dec 13 14:30:25.941548 systemd[1]: Reached target basic.target.
Dec 13 14:30:25.943376 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:30:25.943406 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:30:25.944328 systemd[1]: Starting containerd.service...
Dec 13 14:30:25.947511 systemd[1]: Starting dbus.service...
Dec 13 14:30:25.950122 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:30:25.953210 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:30:25.955288 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:30:25.956600 systemd[1]: Starting kubelet.service...
Dec 13 14:30:25.960957 systemd[1]: Starting motdgen.service...
Dec 13 14:30:25.964638 systemd[1]: Started nvidia.service.
Dec 13 14:30:25.968851 systemd[1]: Starting prepare-helm.service...
Dec 13 14:30:25.972626 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:30:25.976796 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:30:25.983966 jq[1385]: false
Dec 13 14:30:25.984630 systemd[1]: Starting systemd-logind.service...
Dec 13 14:30:25.986519 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:25.986603 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:30:25.987137 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:30:25.987954 systemd[1]: Starting update-engine.service...
Dec 13 14:30:25.991041 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:30:25.998256 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:30:25.999759 jq[1399]: true
Dec 13 14:30:25.998499 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:30:26.005519 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:30:26.005729 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:30:26.030522 jq[1405]: true
Dec 13 14:30:26.034327 tar[1404]: linux-amd64/helm
Dec 13 14:30:26.069113 dbus-daemon[1384]: [system] SELinux support is enabled
Dec 13 14:30:26.069691 systemd[1]: Started dbus.service.
Dec 13 14:30:26.074292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:30:26.074341 systemd[1]: Reached target system-config.target.
Dec 13 14:30:26.076210 extend-filesystems[1386]: Found loop1
Dec 13 14:30:26.076513 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:30:26.076535 systemd[1]: Reached target user-config.target.
Dec 13 14:30:26.081721 extend-filesystems[1386]: Found sda
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda1
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda2
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda3
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found usr
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda4
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda6
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda7
Dec 13 14:30:26.083378 extend-filesystems[1386]: Found sda9
Dec 13 14:30:26.083378 extend-filesystems[1386]: Checking size of /dev/sda9
Dec 13 14:30:26.092938 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:30:26.148991 extend-filesystems[1386]: Old size kept for /dev/sda9
Dec 13 14:30:26.148991 extend-filesystems[1386]: Found sr0
Dec 13 14:30:26.093126 systemd[1]: Finished motdgen.service.
Dec 13 14:30:26.127350 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:30:26.127533 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:30:26.194386 env[1408]: time="2024-12-13T14:30:26.194326870Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:30:26.319275 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:30:26.331042 systemd-logind[1397]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:30:26.331288 systemd-logind[1397]: New seat seat0.
Dec 13 14:30:26.334580 systemd[1]: Started systemd-logind.service.
Dec 13 14:30:26.357917 env[1408]: time="2024-12-13T14:30:26.357862572Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:30:26.358052 env[1408]: time="2024-12-13T14:30:26.358035478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:26.360420 env[1408]: time="2024-12-13T14:30:26.360374059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:30:26.360420 env[1408]: time="2024-12-13T14:30:26.360417461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:26.360769 env[1408]: time="2024-12-13T14:30:26.360735672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:30:26.360849 env[1408]: time="2024-12-13T14:30:26.360768573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:26.360849 env[1408]: time="2024-12-13T14:30:26.360786674Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:30:26.360849 env[1408]: time="2024-12-13T14:30:26.360799874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:26.360986 env[1408]: time="2024-12-13T14:30:26.360903178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:26.361234 env[1408]: time="2024-12-13T14:30:26.361205288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:26.361409 env[1408]: time="2024-12-13T14:30:26.361379594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:30:26.361463 env[1408]: time="2024-12-13T14:30:26.361410595Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:30:26.361519 env[1408]: time="2024-12-13T14:30:26.361477998Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:30:26.361519 env[1408]: time="2024-12-13T14:30:26.361493798Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580222125Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580296627Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580315728Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580369630Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580389131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580408731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580475434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580494634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580512435Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580530036Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580547436Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.580628 env[1408]: time="2024-12-13T14:30:26.580567837Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:30:26.585154 env[1408]: time="2024-12-13T14:30:26.584672880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:30:26.585154 env[1408]: time="2024-12-13T14:30:26.584802385Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.585842321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.585886922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.585908023Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.585972025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.585989026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586005426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586075329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586092630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586108630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586124031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586140731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.586210 env[1408]: time="2024-12-13T14:30:26.586159732Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590672789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590706290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590729291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590746792Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590771093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590786993Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590816594Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:30:26.593150 env[1408]: time="2024-12-13T14:30:26.590857296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:30:26.592621 systemd[1]: Started containerd.service.
Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.591121905Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.591202008Z" level=info msg="Connect containerd service" Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.591244809Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.592109939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.592423950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.592472752Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:30:26.593467 env[1408]: time="2024-12-13T14:30:26.592527054Z" level=info msg="containerd successfully booted in 0.406985s" Dec 13 14:30:26.680507 env[1408]: time="2024-12-13T14:30:26.593585491Z" level=info msg="Start subscribing containerd event" Dec 13 14:30:26.680507 env[1408]: time="2024-12-13T14:30:26.593629392Z" level=info msg="Start recovering state" Dec 13 14:30:26.680507 env[1408]: time="2024-12-13T14:30:26.593695295Z" level=info msg="Start event monitor" Dec 13 14:30:26.680507 env[1408]: time="2024-12-13T14:30:26.593718595Z" level=info msg="Start snapshots syncer" Dec 13 14:30:26.680507 env[1408]: time="2024-12-13T14:30:26.593728896Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:30:26.680507 env[1408]: time="2024-12-13T14:30:26.593738796Z" level=info msg="Start streaming server" Dec 13 14:30:26.628420 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:30:26.680849 bash[1440]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:30:26.840443 update_engine[1398]: I1213 14:30:26.839788 1398 main.cc:92] Flatcar Update Engine starting Dec 13 14:30:26.892867 systemd[1]: Started update-engine.service. Dec 13 14:30:26.893303 update_engine[1398]: I1213 14:30:26.892925 1398 update_check_scheduler.cc:74] Next update check in 7m14s Dec 13 14:30:26.900725 systemd[1]: Started locksmithd.service. Dec 13 14:30:27.237860 tar[1404]: linux-amd64/LICENSE Dec 13 14:30:27.237860 tar[1404]: linux-amd64/README.md Dec 13 14:30:27.244209 systemd[1]: Finished prepare-helm.service. Dec 13 14:30:27.446815 systemd[1]: Started kubelet.service. Dec 13 14:30:27.544049 sshd_keygen[1414]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:30:27.575205 systemd[1]: Finished sshd-keygen.service. Dec 13 14:30:27.579526 systemd[1]: Starting issuegen.service... Dec 13 14:30:27.583208 systemd[1]: Started waagent.service. Dec 13 14:30:27.596185 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:30:27.596377 systemd[1]: Finished issuegen.service. Dec 13 14:30:27.600355 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:30:27.633686 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:30:27.638233 systemd[1]: Started getty@tty1.service. Dec 13 14:30:27.642197 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:30:27.646081 systemd[1]: Reached target getty.target. Dec 13 14:30:27.648156 systemd[1]: Reached target multi-user.target. Dec 13 14:30:27.652587 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:30:27.668704 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:30:27.668894 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:30:27.671544 systemd[1]: Startup finished in 600ms (firmware) + 7.773s (loader) + 897ms (kernel) + 9.647s (initrd) + 16.844s (userspace) = 35.762s. Dec 13 14:30:28.004983 kubelet[1500]: E1213 14:30:28.004926 1500 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:28.006729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:28.006898 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:30:28.007176 systemd[1]: kubelet.service: Consumed 1.057s CPU time. Dec 13 14:30:28.234693 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:30:28.236219 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:30:28.326259 systemd[1]: Created slice user-500.slice. Dec 13 14:30:28.327891 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:30:28.330685 systemd-logind[1397]: New session 1 of user core. Dec 13 14:30:28.333807 systemd-logind[1397]: New session 2 of user core. Dec 13 14:30:28.395979 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:30:28.398233 systemd[1]: Starting user@500.service... Dec 13 14:30:28.401580 (systemd)[1526]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:28.684554 systemd[1526]: Queued start job for default target default.target. Dec 13 14:30:28.685299 systemd[1526]: Reached target paths.target. Dec 13 14:30:28.685336 systemd[1526]: Reached target sockets.target. Dec 13 14:30:28.685358 systemd[1526]: Reached target timers.target. Dec 13 14:30:28.685379 systemd[1526]: Reached target basic.target. Dec 13 14:30:28.685522 systemd[1]: Started user@500.service. Dec 13 14:30:28.687092 systemd[1]: Started session-1.scope. Dec 13 14:30:28.688068 systemd[1]: Started session-2.scope. Dec 13 14:30:28.689100 systemd[1526]: Reached target default.target. Dec 13 14:30:28.689324 systemd[1526]: Startup finished in 281ms. Dec 13 14:30:28.693196 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:30:30.940158 waagent[1517]: 2024-12-13T14:30:30.940039Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:30:30.954122 waagent[1517]: 2024-12-13T14:30:30.943559Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:30:30.954122 waagent[1517]: 2024-12-13T14:30:30.944475Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:30:30.954122 waagent[1517]: 2024-12-13T14:30:30.945716Z INFO Daemon Daemon Run daemon Dec 13 14:30:30.954122 waagent[1517]: 2024-12-13T14:30:30.946939Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:30:30.961843 waagent[1517]: 2024-12-13T14:30:30.961716Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Dec 13 14:30:30.969717 waagent[1517]: 2024-12-13T14:30:30.969590Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:30:30.974434 waagent[1517]: 2024-12-13T14:30:30.974366Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:30:30.976882 waagent[1517]: 2024-12-13T14:30:30.976819Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:30:30.979916 waagent[1517]: 2024-12-13T14:30:30.979855Z INFO Daemon Daemon Activate resource disk Dec 13 14:30:30.982264 waagent[1517]: 2024-12-13T14:30:30.982205Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:30:30.992026 waagent[1517]: 2024-12-13T14:30:30.991964Z INFO Daemon Daemon Found device: None Dec 13 14:30:30.994316 waagent[1517]: 2024-12-13T14:30:30.994254Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:30:30.998319 waagent[1517]: 2024-12-13T14:30:30.998258Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 14:30:31.004343 waagent[1517]: 2024-12-13T14:30:31.004280Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:30:31.007298 waagent[1517]: 2024-12-13T14:30:31.007238Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:30:31.016378 waagent[1517]: 2024-12-13T14:30:31.016236Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:30:31.025253 waagent[1517]: 2024-12-13T14:30:31.025132Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:30:31.030215 waagent[1517]: 2024-12-13T14:30:31.030139Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:30:31.032847 waagent[1517]: 2024-12-13T14:30:31.032780Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:30:31.075537 waagent[1517]: 2024-12-13T14:30:31.071008Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:30:31.104485 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:30:31.120964 waagent[1517]: 2024-12-13T14:30:31.120828Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:30:31.123737 waagent[1517]: 2024-12-13T14:30:31.123648Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:30:31.126717 waagent[1517]: 2024-12-13T14:30:31.126636Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 14:30:31.130138 waagent[1517]: 2024-12-13T14:30:31.130077Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:30:31.132917 waagent[1517]: 2024-12-13T14:30:31.132857Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:30:31.135427 waagent[1517]: 2024-12-13T14:30:31.135369Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:30:31.178049 waagent[1517]: 2024-12-13T14:30:31.177970Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:30:31.185306 waagent[1517]: 2024-12-13T14:30:31.179669Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:30:31.185306 waagent[1517]: 2024-12-13T14:30:31.180381Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:30:33.087881 waagent[1517]: 2024-12-13T14:30:33.087722Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:30:33.099888 waagent[1517]: 2024-12-13T14:30:33.099804Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:30:33.103374 waagent[1517]: 2024-12-13T14:30:33.103298Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:30:36.892703 waagent[1517]: 2024-12-13T14:30:36.892539Z INFO Daemon Daemon Found private key matching thumbprint 5B869954F94D8046DD7D0CDD2014398BC5E7E1B8 Dec 13 14:30:36.902475 waagent[1517]: 2024-12-13T14:30:36.894148Z INFO Daemon Daemon Certificate with thumbprint 2A17FDC532C7B0640FABBD8DA78A5E6A314C0A76 has no matching private key. Dec 13 14:30:36.902475 waagent[1517]: 2024-12-13T14:30:36.895105Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:30:36.916738 waagent[1517]: 2024-12-13T14:30:36.916651Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 4602743b-0dc2-4670-af2b-c7d75295e575 New eTag: 13534712704681181081] Dec 13 14:30:36.924215 waagent[1517]: 2024-12-13T14:30:36.918330Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:30:36.930720 waagent[1517]: 2024-12-13T14:30:36.930637Z INFO Daemon Daemon Starting provisioning Dec 13 14:30:36.932089 waagent[1517]: 2024-12-13T14:30:36.932029Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:30:36.933009 waagent[1517]: 2024-12-13T14:30:36.932959Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-34fc77c933] Dec 13 14:30:36.988401 waagent[1517]: 2024-12-13T14:30:36.988263Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-34fc77c933] Dec 13 14:30:36.991822 waagent[1517]: 2024-12-13T14:30:36.991745Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:30:36.994994 waagent[1517]: 2024-12-13T14:30:36.994926Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:30:37.009066 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:30:37.009321 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:30:37.009401 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:30:37.009793 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:30:37.013700 systemd-networkd[1171]: eth0: DHCPv6 lease lost Dec 13 14:30:37.015517 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:30:37.015687 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:30:37.018048 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:30:37.050181 systemd-networkd[1571]: enP13451s1: Link UP Dec 13 14:30:37.050192 systemd-networkd[1571]: enP13451s1: Gained carrier Dec 13 14:30:37.051537 systemd-networkd[1571]: eth0: Link UP Dec 13 14:30:37.051545 systemd-networkd[1571]: eth0: Gained carrier Dec 13 14:30:37.052093 systemd-networkd[1571]: lo: Link UP Dec 13 14:30:37.052103 systemd-networkd[1571]: lo: Gained carrier Dec 13 14:30:37.052425 systemd-networkd[1571]: eth0: Gained IPv6LL Dec 13 14:30:37.052707 systemd-networkd[1571]: Enumeration completed Dec 13 14:30:37.052810 systemd[1]: Started systemd-networkd.service. Dec 13 14:30:37.055174 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:30:37.056237 systemd-networkd[1571]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:30:37.061109 waagent[1517]: 2024-12-13T14:30:37.060815Z INFO Daemon Daemon Create user account if not exists Dec 13 14:30:37.065419 waagent[1517]: 2024-12-13T14:30:37.065322Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:30:37.068642 waagent[1517]: 2024-12-13T14:30:37.068573Z INFO Daemon Daemon Configure sudoer Dec 13 14:30:37.071229 waagent[1517]: 2024-12-13T14:30:37.071166Z INFO Daemon Daemon Configure sshd Dec 13 14:30:37.073256 waagent[1517]: 2024-12-13T14:30:37.073192Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:30:37.092760 systemd-networkd[1571]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:30:37.096030 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:30:38.189552 waagent[1517]: 2024-12-13T14:30:38.189435Z INFO Daemon Daemon Provisioning complete Dec 13 14:30:38.204978 waagent[1517]: 2024-12-13T14:30:38.204905Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:30:38.211626 waagent[1517]: 2024-12-13T14:30:38.206228Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:30:38.211626 waagent[1517]: 2024-12-13T14:30:38.207884Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:30:38.221868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:30:38.222139 systemd[1]: Stopped kubelet.service. Dec 13 14:30:38.222191 systemd[1]: kubelet.service: Consumed 1.057s CPU time. Dec 13 14:30:38.224008 systemd[1]: Starting kubelet.service... Dec 13 14:30:38.495177 waagent[1580]: 2024-12-13T14:30:38.495067Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:30:38.495970 waagent[1580]: 2024-12-13T14:30:38.495905Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:30:38.496116 waagent[1580]: 2024-12-13T14:30:38.496062Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:30:38.507347 waagent[1580]: 2024-12-13T14:30:38.507273Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 14:30:38.507507 waagent[1580]: 2024-12-13T14:30:38.507454Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:30:38.669055 waagent[1580]: 2024-12-13T14:30:38.668189Z INFO ExtHandler ExtHandler Found private key matching thumbprint 5B869954F94D8046DD7D0CDD2014398BC5E7E1B8 Dec 13 14:30:38.669055 waagent[1580]: 2024-12-13T14:30:38.668526Z INFO ExtHandler ExtHandler Certificate with thumbprint 2A17FDC532C7B0640FABBD8DA78A5E6A314C0A76 has no matching private key. 
Dec 13 14:30:38.669055 waagent[1580]: 2024-12-13T14:30:38.668845Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:30:38.674573 systemd[1]: Started kubelet.service. Dec 13 14:30:38.686803 waagent[1580]: 2024-12-13T14:30:38.686735Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: da9fa1e0-c7a6-4ad0-bcbb-bf73865305f3 New eTag: 13534712704681181081] Dec 13 14:30:38.687500 waagent[1580]: 2024-12-13T14:30:38.687425Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:30:38.906461 kubelet[1594]: E1213 14:30:38.906345 1594 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:38.909467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:38.909625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:41.488312 waagent[1580]: 2024-12-13T14:30:41.488146Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:30:41.497770 waagent[1580]: 2024-12-13T14:30:41.497688Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1580 Dec 13 14:30:41.501165 waagent[1580]: 2024-12-13T14:30:41.501097Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:30:41.502385 waagent[1580]: 2024-12-13T14:30:41.502328Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:30:41.676209 waagent[1580]: 2024-12-13T14:30:41.676144Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:30:41.676610 waagent[1580]: 2024-12-13T14:30:41.676549Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:30:41.684644 waagent[1580]: 2024-12-13T14:30:41.684588Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:30:41.685126 waagent[1580]: 2024-12-13T14:30:41.685067Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:30:41.686212 waagent[1580]: 2024-12-13T14:30:41.686148Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:30:41.687457 waagent[1580]: 2024-12-13T14:30:41.687399Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:30:41.687921 waagent[1580]: 2024-12-13T14:30:41.687867Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:30:41.688075 waagent[1580]: 2024-12-13T14:30:41.688027Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:30:41.688587 waagent[1580]: 2024-12-13T14:30:41.688531Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 14:30:41.688888 waagent[1580]: 2024-12-13T14:30:41.688832Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:30:41.688888 waagent[1580]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:30:41.688888 waagent[1580]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:30:41.688888 waagent[1580]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:30:41.688888 waagent[1580]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:30:41.688888 waagent[1580]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:30:41.688888 waagent[1580]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:30:41.691671 waagent[1580]: 2024-12-13T14:30:41.691506Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:30:41.691984 waagent[1580]: 2024-12-13T14:30:41.691928Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:30:41.692624 waagent[1580]: 2024-12-13T14:30:41.692565Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:30:41.692793 waagent[1580]: 2024-12-13T14:30:41.692744Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:30:41.692923 waagent[1580]: 2024-12-13T14:30:41.692878Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:30:41.693961 waagent[1580]: 2024-12-13T14:30:41.693887Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:30:41.694438 waagent[1580]: 2024-12-13T14:30:41.694376Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:30:41.694525 waagent[1580]: 2024-12-13T14:30:41.694471Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:30:41.695323 waagent[1580]: 2024-12-13T14:30:41.695260Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:30:41.695428 waagent[1580]: 2024-12-13T14:30:41.695368Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:30:41.695837 waagent[1580]: 2024-12-13T14:30:41.695785Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:30:41.711401 waagent[1580]: 2024-12-13T14:30:41.711333Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:30:41.712118 waagent[1580]: 2024-12-13T14:30:41.712066Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:30:41.713044 waagent[1580]: 2024-12-13T14:30:41.712985Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:30:41.753282 waagent[1580]: 2024-12-13T14:30:41.753118Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Dec 13 14:30:41.889923 waagent[1580]: 2024-12-13T14:30:41.889812Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1571' Dec 13 14:30:42.169467 waagent[1580]: 2024-12-13T14:30:42.169337Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:30:42.215642 waagent[1517]: 2024-12-13T14:30:42.215493Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:30:42.220612 waagent[1517]: 2024-12-13T14:30:42.220554Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:30:43.671348 waagent[1617]: 2024-12-13T14:30:43.671240Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:30:43.672085 waagent[1617]: 2024-12-13T14:30:43.672016Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:30:43.672231 waagent[1617]: 2024-12-13T14:30:43.672176Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:30:43.672374 waagent[1617]: 2024-12-13T14:30:43.672327Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 14:30:43.682075 waagent[1617]: 2024-12-13T14:30:43.681976Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:30:43.682450 waagent[1617]: 2024-12-13T14:30:43.682393Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:30:43.682611 waagent[1617]: 2024-12-13T14:30:43.682563Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:30:43.694454 waagent[1617]: 2024-12-13T14:30:43.694381Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:30:43.703429 waagent[1617]: 2024-12-13T14:30:43.703368Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:30:43.704332 waagent[1617]: 2024-12-13T14:30:43.704272Z INFO ExtHandler Dec 13 14:30:43.704475 waagent[1617]: 2024-12-13T14:30:43.704424Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 27cd2c24-b2dc-4728-b272-c8f7434436d0 eTag: 13534712704681181081 source: Fabric] Dec 13 14:30:43.705172 waagent[1617]: 2024-12-13T14:30:43.705115Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:30:43.768059 waagent[1617]: 2024-12-13T14:30:43.767900Z INFO ExtHandler Dec 13 14:30:43.768339 waagent[1617]: 2024-12-13T14:30:43.768263Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:30:43.776750 waagent[1617]: 2024-12-13T14:30:43.776692Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:30:43.777229 waagent[1617]: 2024-12-13T14:30:43.777177Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:30:43.799157 waagent[1617]: 2024-12-13T14:30:43.799091Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Dec 13 14:30:43.954380 waagent[1617]: 2024-12-13T14:30:43.954194Z INFO ExtHandler Downloaded certificate {'thumbprint': '5B869954F94D8046DD7D0CDD2014398BC5E7E1B8', 'hasPrivateKey': True} Dec 13 14:30:43.955310 waagent[1617]: 2024-12-13T14:30:43.955241Z INFO ExtHandler Downloaded certificate {'thumbprint': '2A17FDC532C7B0640FABBD8DA78A5E6A314C0A76', 'hasPrivateKey': False} Dec 13 14:30:43.956278 waagent[1617]: 2024-12-13T14:30:43.956210Z INFO ExtHandler Fetch goal state completed Dec 13 14:30:43.980936 waagent[1617]: 2024-12-13T14:30:43.980830Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:30:43.992396 waagent[1617]: 2024-12-13T14:30:43.992309Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1617 Dec 13 14:30:43.995441 waagent[1617]: 2024-12-13T14:30:43.995377Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:30:43.996394 waagent[1617]: 2024-12-13T14:30:43.996335Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:30:43.996671 waagent[1617]: 2024-12-13T14:30:43.996612Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:30:43.998647 waagent[1617]: 2024-12-13T14:30:43.998590Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:30:44.003306 waagent[1617]: 2024-12-13T14:30:44.003252Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:30:44.003675 waagent[1617]: 2024-12-13T14:30:44.003606Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:30:44.011506 waagent[1617]: 2024-12-13T14:30:44.011452Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:30:44.011977 waagent[1617]: 2024-12-13T14:30:44.011918Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:30:44.017896 waagent[1617]: 2024-12-13T14:30:44.017803Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 14:30:44.018921 waagent[1617]: 2024-12-13T14:30:44.018856Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:30:44.020339 waagent[1617]: 2024-12-13T14:30:44.020275Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:30:44.020971 waagent[1617]: 2024-12-13T14:30:44.020917Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:30:44.021134 waagent[1617]: 2024-12-13T14:30:44.021085Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:30:44.021701 waagent[1617]: 2024-12-13T14:30:44.021613Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 14:30:44.021981 waagent[1617]: 2024-12-13T14:30:44.021926Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:30:44.021981 waagent[1617]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:30:44.021981 waagent[1617]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:30:44.021981 waagent[1617]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:30:44.021981 waagent[1617]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:30:44.021981 waagent[1617]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:30:44.021981 waagent[1617]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:30:44.024365 waagent[1617]: 2024-12-13T14:30:44.024278Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:30:44.025368 waagent[1617]: 2024-12-13T14:30:44.025311Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:30:44.025525 waagent[1617]: 2024-12-13T14:30:44.025475Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:30:44.025949 waagent[1617]: 2024-12-13T14:30:44.025895Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:30:44.026105 waagent[1617]: 2024-12-13T14:30:44.026059Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:30:44.026245 waagent[1617]: 2024-12-13T14:30:44.026200Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:30:44.026728 waagent[1617]: 2024-12-13T14:30:44.026640Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:30:44.029310 waagent[1617]: 2024-12-13T14:30:44.029186Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:30:44.033625 waagent[1617]: 2024-12-13T14:30:44.033533Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 14:30:44.037141 waagent[1617]: 2024-12-13T14:30:44.033230Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:30:44.037141 waagent[1617]: 2024-12-13T14:30:44.035994Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:30:44.048084 waagent[1617]: 2024-12-13T14:30:44.048021Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:30:44.059847 waagent[1617]: 2024-12-13T14:30:44.059779Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:30:44.059847 waagent[1617]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:30:44.059847 waagent[1617]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:30:44.059847 waagent[1617]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:1a:fe brd ff:ff:ff:ff:ff:ff Dec 13 14:30:44.059847 waagent[1617]: 3: enP13451s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:1a:fe brd ff:ff:ff:ff:ff:ff\ altname enP13451p0s2 Dec 13 14:30:44.059847 waagent[1617]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:30:44.059847 waagent[1617]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:30:44.059847 waagent[1617]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:30:44.059847 waagent[1617]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:30:44.059847 waagent[1617]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:30:44.059847 waagent[1617]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:1afe/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:30:44.086735 waagent[1617]: 2024-12-13T14:30:44.086645Z INFO ExtHandler ExtHandler Dec 13 14:30:44.087568 waagent[1617]: 2024-12-13T14:30:44.087506Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 45aadc28-edc7-4f01-8dd7-1371cdee7977 correlation b877ecba-2573-4726-8a0a-cae12922e8f8 created: 2024-12-13T14:29:41.650162Z] Dec 13 14:30:44.093300 waagent[1617]: 2024-12-13T14:30:44.092117Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:30:44.098387 waagent[1617]: 2024-12-13T14:30:44.098326Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 11 ms] Dec 13 14:30:44.127892 waagent[1617]: 2024-12-13T14:30:44.127823Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 14:30:44.154141 waagent[1617]: 2024-12-13T14:30:44.154076Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0E566634-B241-4826-B6D6-9C6A44811DAB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:30:44.173267 waagent[1617]: 2024-12-13T14:30:44.173161Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 14:30:44.173267 waagent[1617]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:30:44.173267 waagent[1617]: pkts bytes target prot opt in out source destination Dec 13 14:30:44.173267 waagent[1617]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:30:44.173267 waagent[1617]: pkts bytes target prot opt in out source destination Dec 13 14:30:44.173267 waagent[1617]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:30:44.173267 waagent[1617]: pkts bytes target prot opt in out source destination Dec 13 14:30:44.173267 waagent[1617]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:30:44.173267 waagent[1617]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:30:44.173267 waagent[1617]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:30:44.180323 waagent[1617]: 2024-12-13T14:30:44.180222Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:30:44.180323 waagent[1617]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:30:44.180323 waagent[1617]: pkts bytes target prot opt in out source destination Dec 13 14:30:44.180323 waagent[1617]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:30:44.180323 waagent[1617]: pkts bytes target prot opt in out source destination Dec 13 14:30:44.180323 waagent[1617]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:30:44.180323 waagent[1617]: pkts bytes target prot opt in out source destination Dec 13 14:30:44.180323 waagent[1617]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:30:44.180323 waagent[1617]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:30:44.180323 waagent[1617]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:30:44.180910 waagent[1617]: 2024-12-13T14:30:44.180856Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:30:48.971899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:30:48.972212 systemd[1]: Stopped kubelet.service. Dec 13 14:30:48.974248 systemd[1]: Starting kubelet.service... Dec 13 14:30:49.199000 systemd[1]: Started kubelet.service. Dec 13 14:30:49.696323 kubelet[1674]: E1213 14:30:49.696271 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:49.698038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:49.698151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:59.721937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:30:59.722256 systemd[1]: Stopped kubelet.service. Dec 13 14:30:59.724217 systemd[1]: Starting kubelet.service... Dec 13 14:31:00.043732 systemd[1]: Started kubelet.service. 
Dec 13 14:31:00.377208 kubelet[1684]: E1213 14:31:00.377090 1684 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:00.378649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:00.378824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:10.471938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:31:10.472239 systemd[1]: Stopped kubelet.service. Dec 13 14:31:10.474266 systemd[1]: Starting kubelet.service... Dec 13 14:31:10.798745 systemd[1]: Started kubelet.service. Dec 13 14:31:10.821159 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 14:31:11.146831 kubelet[1694]: E1213 14:31:11.146716 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:11.148518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:11.148691 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:12.285904 update_engine[1398]: I1213 14:31:12.285818 1398 update_attempter.cc:509] Updating boot flags... Dec 13 14:31:19.172150 systemd[1]: Created slice system-sshd.slice. Dec 13 14:31:19.174468 systemd[1]: Started sshd@0-10.200.8.20:22-10.200.16.10:60016.service. Dec 13 14:31:19.983263 sshd[1740]: Accepted publickey for core from 10.200.16.10 port 60016 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:19.985003 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:19.990698 systemd[1]: Started session-3.scope. Dec 13 14:31:19.991440 systemd-logind[1397]: New session 3 of user core. Dec 13 14:31:20.598441 systemd[1]: Started sshd@1-10.200.8.20:22-10.200.16.10:60026.service. Dec 13 14:31:21.187346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:31:21.187760 systemd[1]: Stopped kubelet.service. Dec 13 14:31:21.189855 systemd[1]: Starting kubelet.service... Dec 13 14:31:21.310903 sshd[1745]: Accepted publickey for core from 10.200.16.10 port 60026 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:21.312309 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:21.316531 systemd-logind[1397]: New session 4 of user core. Dec 13 14:31:21.317146 systemd[1]: Started session-4.scope. Dec 13 14:31:21.520848 systemd[1]: Started kubelet.service. Dec 13 14:31:21.559537 kubelet[1752]: E1213 14:31:21.559490 1752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:21.561167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:21.561328 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 14:31:21.811250 sshd[1745]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:21.814718 systemd[1]: sshd@1-10.200.8.20:22-10.200.16.10:60026.service: Deactivated successfully. Dec 13 14:31:21.815861 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:31:21.816621 systemd-logind[1397]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:31:21.817567 systemd-logind[1397]: Removed session 4. Dec 13 14:31:21.930072 systemd[1]: Started sshd@2-10.200.8.20:22-10.200.16.10:60036.service. Dec 13 14:31:22.639391 sshd[1761]: Accepted publickey for core from 10.200.16.10 port 60036 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:22.641008 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:22.645611 systemd[1]: Started session-5.scope. Dec 13 14:31:22.646225 systemd-logind[1397]: New session 5 of user core. Dec 13 14:31:23.137373 sshd[1761]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:23.140593 systemd[1]: sshd@2-10.200.8.20:22-10.200.16.10:60036.service: Deactivated successfully. Dec 13 14:31:23.141448 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:31:23.142080 systemd-logind[1397]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:31:23.142843 systemd-logind[1397]: Removed session 5. Dec 13 14:31:23.256793 systemd[1]: Started sshd@3-10.200.8.20:22-10.200.16.10:60038.service. Dec 13 14:31:23.967851 sshd[1770]: Accepted publickey for core from 10.200.16.10 port 60038 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:23.969536 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:23.974903 systemd[1]: Started session-6.scope. Dec 13 14:31:23.975481 systemd-logind[1397]: New session 6 of user core. Dec 13 14:31:24.472958 sshd[1770]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:24.475450 systemd[1]: sshd@3-10.200.8.20:22-10.200.16.10:60038.service: Deactivated successfully. Dec 13 14:31:24.476321 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:31:24.477020 systemd-logind[1397]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:31:24.477821 systemd-logind[1397]: Removed session 6. Dec 13 14:31:24.590158 systemd[1]: Started sshd@4-10.200.8.20:22-10.200.16.10:60046.service. Dec 13 14:31:25.301150 sshd[1776]: Accepted publickey for core from 10.200.16.10 port 60046 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:25.302886 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:25.308389 systemd-logind[1397]: New session 7 of user core. Dec 13 14:31:25.308893 systemd[1]: Started session-7.scope. Dec 13 14:31:25.767629 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:31:25.767950 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:31:25.790509 systemd[1]: Starting docker.service... 
Dec 13 14:31:25.829817 env[1789]: time="2024-12-13T14:31:25.829766076Z" level=info msg="Starting up" Dec 13 14:31:25.831008 env[1789]: time="2024-12-13T14:31:25.830986277Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:31:25.831117 env[1789]: time="2024-12-13T14:31:25.831106677Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:31:25.831169 env[1789]: time="2024-12-13T14:31:25.831159577Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:31:25.831210 env[1789]: time="2024-12-13T14:31:25.831202977Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:31:25.833196 env[1789]: time="2024-12-13T14:31:25.833177678Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:31:25.833289 env[1789]: time="2024-12-13T14:31:25.833278778Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:31:25.833347 env[1789]: time="2024-12-13T14:31:25.833336478Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:31:25.833388 env[1789]: time="2024-12-13T14:31:25.833380178Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:31:25.839584 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1209248030-merged.mount: Deactivated successfully. Dec 13 14:31:25.921115 env[1789]: time="2024-12-13T14:31:25.921069446Z" level=info msg="Loading containers: start." Dec 13 14:31:26.017679 kernel: Initializing XFRM netlink socket Dec 13 14:31:26.030084 env[1789]: time="2024-12-13T14:31:26.030042229Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:31:26.089711 systemd-networkd[1571]: docker0: Link UP Dec 13 14:31:26.111864 env[1789]: time="2024-12-13T14:31:26.111820489Z" level=info msg="Loading containers: done." Dec 13 14:31:26.123461 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1868325680-merged.mount: Deactivated successfully. Dec 13 14:31:26.135284 env[1789]: time="2024-12-13T14:31:26.135242606Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:31:26.135476 env[1789]: time="2024-12-13T14:31:26.135459406Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:31:26.135586 env[1789]: time="2024-12-13T14:31:26.135563006Z" level=info msg="Daemon has completed initialization" Dec 13 14:31:26.169648 systemd[1]: Started docker.service. Dec 13 14:31:26.179553 env[1789]: time="2024-12-13T14:31:26.179487238Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:31:30.900828 env[1408]: time="2024-12-13T14:31:30.900746685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:31:31.692476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:31:31.692707 systemd[1]: Stopped kubelet.service. Dec 13 14:31:31.694598 systemd[1]: Starting kubelet.service... Dec 13 14:31:31.732721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203483401.mount: Deactivated successfully. Dec 13 14:31:32.238018 systemd[1]: Started kubelet.service. 
Dec 13 14:31:32.277401 kubelet[1915]: E1213 14:31:32.277359 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:32.279072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:32.279232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:34.264959 env[1408]: time="2024-12-13T14:31:34.264902434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:34.270693 env[1408]: time="2024-12-13T14:31:34.270641730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:34.274353 env[1408]: time="2024-12-13T14:31:34.274320591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:34.279058 env[1408]: time="2024-12-13T14:31:34.279023269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:34.279687 env[1408]: time="2024-12-13T14:31:34.279640079Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 14:31:34.289110 env[1408]: time="2024-12-13T14:31:34.289080836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 14:31:36.339503 env[1408]: time="2024-12-13T14:31:36.339438737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:36.345340 env[1408]: time="2024-12-13T14:31:36.345259529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:36.349390 env[1408]: time="2024-12-13T14:31:36.349355793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:36.353259 env[1408]: time="2024-12-13T14:31:36.353228754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:36.353911 env[1408]: time="2024-12-13T14:31:36.353878364Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 14:31:36.364075 env[1408]: time="2024-12-13T14:31:36.364042324Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 14:31:37.842666 env[1408]: time="2024-12-13T14:31:37.842599007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:37.848965 env[1408]: time="2024-12-13T14:31:37.848922703Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:37.853909 env[1408]: time="2024-12-13T14:31:37.853872279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:37.857718 env[1408]: time="2024-12-13T14:31:37.857686337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:37.858373 env[1408]: time="2024-12-13T14:31:37.858341247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 14:31:37.868068 env[1408]: time="2024-12-13T14:31:37.868041796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:31:38.997182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675561274.mount: Deactivated successfully. Dec 13 14:31:39.587393 env[1408]: time="2024-12-13T14:31:39.587335403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:39.592342 env[1408]: time="2024-12-13T14:31:39.592290174Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:39.598530 env[1408]: time="2024-12-13T14:31:39.598482364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:39.602082 env[1408]: time="2024-12-13T14:31:39.602026315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:39.602564 env[1408]: time="2024-12-13T14:31:39.602530223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:31:39.612814 env[1408]: time="2024-12-13T14:31:39.612775971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:31:40.224578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568344996.mount: Deactivated successfully.
Dec 13 14:31:41.491290 env[1408]: time="2024-12-13T14:31:41.491232118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:41.497796 env[1408]: time="2024-12-13T14:31:41.497753107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:41.513872 env[1408]: time="2024-12-13T14:31:41.513826028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:41.517981 env[1408]: time="2024-12-13T14:31:41.517930184Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:41.518731 env[1408]: time="2024-12-13T14:31:41.518697695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:31:41.529006 env[1408]: time="2024-12-13T14:31:41.528972636Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:31:42.064853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577938160.mount: Deactivated successfully. Dec 13 14:31:42.092649 env[1408]: time="2024-12-13T14:31:42.092600141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:42.100702 env[1408]: time="2024-12-13T14:31:42.100639448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:42.104612 env[1408]: time="2024-12-13T14:31:42.104572901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:42.109191 env[1408]: time="2024-12-13T14:31:42.109156662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:42.109677 env[1408]: time="2024-12-13T14:31:42.109632268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:31:42.119415 env[1408]: time="2024-12-13T14:31:42.119381798Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 14:31:42.471924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 14:31:42.472172 systemd[1]: Stopped kubelet.service. Dec 13 14:31:42.473898 systemd[1]: Starting kubelet.service... Dec 13 14:31:42.557555 systemd[1]: Started kubelet.service. 
Dec 13 14:31:42.594225 kubelet[1955]: E1213 14:31:42.594181 1955 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:42.595983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:42.596141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:43.309959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117632380.mount: Deactivated successfully. Dec 13 14:31:45.958499 env[1408]: time="2024-12-13T14:31:45.958420269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:45.964859 env[1408]: time="2024-12-13T14:31:45.964803148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:45.969438 env[1408]: time="2024-12-13T14:31:45.969395704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:45.974260 env[1408]: time="2024-12-13T14:31:45.974221664Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:45.975012 env[1408]: time="2024-12-13T14:31:45.974980773Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 14:31:49.282965 systemd[1]: Stopped kubelet.service. Dec 13 14:31:49.286043 systemd[1]: Starting kubelet.service... Dec 13 14:31:49.303977 systemd[1]: Reloading. Dec 13 14:31:49.417355 /usr/lib/systemd/system-generators/torcx-generator[2049]: time="2024-12-13T14:31:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:49.417889 /usr/lib/systemd/system-generators/torcx-generator[2049]: time="2024-12-13T14:31:49Z" level=info msg="torcx already run" Dec 13 14:31:49.527326 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:49.527346 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:49.543472 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:49.642711 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:31:49.642913 systemd[1]: Stopped kubelet.service. Dec 13 14:31:49.644810 systemd[1]: Starting kubelet.service... Dec 13 14:31:57.278608 systemd[1]: Started kubelet.service. 
Dec 13 14:31:57.323568 kubelet[2117]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:57.323946 kubelet[2117]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:31:57.323990 kubelet[2117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:57.324159 kubelet[2117]: I1213 14:31:57.324111 2117 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:31:57.525819 kubelet[2117]: I1213 14:31:57.525780 2117 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:31:57.525819 kubelet[2117]: I1213 14:31:57.525806 2117 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:31:57.526094 kubelet[2117]: I1213 14:31:57.526074 2117 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:31:57.538896 kubelet[2117]: I1213 14:31:57.538639 2117 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:31:57.539506 kubelet[2117]: E1213 14:31:57.539451 2117 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.553802 kubelet[2117]: I1213 14:31:57.553773 2117 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:31:57.556006 kubelet[2117]: I1213 14:31:57.555969 2117 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:31:57.556198 kubelet[2117]: I1213 14:31:57.556004 2117 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-34fc77c933","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:31:57.556702 kubelet[2117]: I1213 14:31:57.556684 2117 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:31:57.556778 kubelet[2117]: I1213 14:31:57.556708 2117 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:31:57.556850 kubelet[2117]: I1213 14:31:57.556834 2117 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:57.557873 kubelet[2117]: I1213 14:31:57.557856 2117 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:31:57.557873 kubelet[2117]: I1213 14:31:57.557875 2117 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:31:57.558004 kubelet[2117]: I1213 14:31:57.557899 2117 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:31:57.558004 kubelet[2117]: I1213 14:31:57.557919 2117 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:31:57.571235 kubelet[2117]: W1213 14:31:57.570915 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.571235 kubelet[2117]: E1213 14:31:57.571025 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.572013 kubelet[2117]: W1213 14:31:57.571951 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.572151 kubelet[2117]: E1213 14:31:57.572137 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.572311 kubelet[2117]: I1213 14:31:57.572297 2117 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:31:57.574112 kubelet[2117]: I1213 14:31:57.574088 2117 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:31:57.574265 kubelet[2117]: W1213 14:31:57.574242 2117 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:31:57.574752 kubelet[2117]: I1213 14:31:57.574731 2117 server.go:1264] "Started kubelet" Dec 13 14:31:57.586621 kubelet[2117]: I1213 14:31:57.586571 2117 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:31:57.587755 kubelet[2117]: I1213 14:31:57.587732 2117 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:31:57.588005 kubelet[2117]: I1213 14:31:57.587958 2117 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:31:57.588429 kubelet[2117]: I1213 14:31:57.588410 2117 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:31:57.588765 kubelet[2117]: E1213 14:31:57.588634 2117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-34fc77c933.1810c30d19eeca85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-34fc77c933,UID:ci-3510.3.6-a-34fc77c933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-34fc77c933,},FirstTimestamp:2024-12-13 14:31:57.574711941 +0000 UTC m=+0.290146742,LastTimestamp:2024-12-13 14:31:57.574711941 +0000 UTC m=+0.290146742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-34fc77c933,}" Dec 13 14:31:57.594183 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:31:57.594378 kubelet[2117]: I1213 14:31:57.594362 2117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:31:57.601494 kubelet[2117]: E1213 14:31:57.601472 2117 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:31:57.601686 kubelet[2117]: I1213 14:31:57.601674 2117 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:31:57.601878 kubelet[2117]: I1213 14:31:57.601863 2117 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:31:57.602042 kubelet[2117]: I1213 14:31:57.602030 2117 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:31:57.602528 kubelet[2117]: W1213 14:31:57.602481 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.602613 kubelet[2117]: E1213 14:31:57.602532 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.603150 kubelet[2117]: E1213 14:31:57.603114 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="200ms" Dec 13 14:31:57.603873 kubelet[2117]: I1213 14:31:57.603850 2117 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:31:57.603967 kubelet[2117]: I1213 14:31:57.603932 2117 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:31:57.604584 kubelet[2117]: E1213 14:31:57.604562 2117 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:31:57.605615 kubelet[2117]: I1213 14:31:57.605596 2117 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:31:57.666899 kubelet[2117]: I1213 14:31:57.666866 2117 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:31:57.667083 kubelet[2117]: I1213 14:31:57.667071 2117 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:31:57.667160 kubelet[2117]: I1213 14:31:57.667151 2117 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:57.704631 kubelet[2117]: I1213 14:31:57.704588 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:57.705200 kubelet[2117]: E1213 14:31:57.705163 2117 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:57.804164 kubelet[2117]: E1213 14:31:57.804029 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="400ms" Dec 13 14:31:57.907586 kubelet[2117]: I1213 14:31:57.907555 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:57.920171 kubelet[2117]: E1213 14:31:57.907908 2117 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:57.922387 kubelet[2117]: I1213 14:31:57.922361 2117 policy_none.go:49] "None policy: Start" Dec 13 14:31:57.923548 kubelet[2117]: I1213 14:31:57.923531 2117 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:31:57.923746 kubelet[2117]: I1213 14:31:57.923735 2117 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:31:57.929536 kubelet[2117]: I1213 14:31:57.929498 2117 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:31:57.931196 kubelet[2117]: I1213 14:31:57.931170 2117 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:31:57.931196 kubelet[2117]: I1213 14:31:57.931200 2117 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:31:57.931321 kubelet[2117]: I1213 14:31:57.931221 2117 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:31:57.931321 kubelet[2117]: E1213 14:31:57.931267 2117 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:31:57.932596 kubelet[2117]: W1213 14:31:57.932568 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:57.932772 kubelet[2117]: E1213 14:31:57.932757 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.032385 kubelet[2117]: E1213 14:31:58.032323 2117 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:31:58.080721 systemd[1]: Created slice kubepods.slice. Dec 13 14:31:58.085801 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:31:58.093731 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:31:58.095100 kubelet[2117]: I1213 14:31:58.095078 2117 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:31:58.095380 kubelet[2117]: I1213 14:31:58.095337 2117 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:31:58.095492 kubelet[2117]: I1213 14:31:58.095476 2117 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:31:58.096996 kubelet[2117]: E1213 14:31:58.096972 2117 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:31:58.205430 kubelet[2117]: E1213 14:31:58.205375 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="800ms" Dec 13 14:31:58.232552 kubelet[2117]: I1213 14:31:58.232480 2117 topology_manager.go:215] "Topology Admit Handler" podUID="aba4bf60e8bd81b12ce19fe5ab0c676e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.234172 kubelet[2117]: I1213 14:31:58.234145 2117 topology_manager.go:215] "Topology Admit Handler" podUID="b0624d75f859b17792bf683562f35245" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.235401 kubelet[2117]: I1213 14:31:58.235375 2117 topology_manager.go:215] "Topology Admit Handler" podUID="37e40a3eebe79d65e061bc6d0f3d5794" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.286981 systemd[1]: Created slice kubepods-burstable-podaba4bf60e8bd81b12ce19fe5ab0c676e.slice. Dec 13 14:31:58.301620 systemd[1]: Created slice kubepods-burstable-podb0624d75f859b17792bf683562f35245.slice. 
Dec 13 14:31:58.305269 kubelet[2117]: I1213 14:31:58.305244 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37e40a3eebe79d65e061bc6d0f3d5794-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-34fc77c933\" (UID: \"37e40a3eebe79d65e061bc6d0f3d5794\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.305491 kubelet[2117]: I1213 14:31:58.305472 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.305638 kubelet[2117]: I1213 14:31:58.305621 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.305791 kubelet[2117]: I1213 14:31:58.305775 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.305905 kubelet[2117]: I1213 14:31:58.305891 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37e40a3eebe79d65e061bc6d0f3d5794-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-34fc77c933\" (UID: \"37e40a3eebe79d65e061bc6d0f3d5794\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.306419 kubelet[2117]: I1213 14:31:58.306399 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.306595 kubelet[2117]: I1213 14:31:58.306576 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.306767 kubelet[2117]: I1213 14:31:58.306750 2117 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0624d75f859b17792bf683562f35245-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-34fc77c933\" (UID: \"b0624d75f859b17792bf683562f35245\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.306907 kubelet[2117]: I1213 14:31:58.306892 2117 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37e40a3eebe79d65e061bc6d0f3d5794-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-34fc77c933\" (UID: \"37e40a3eebe79d65e061bc6d0f3d5794\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.306980 systemd[1]: Created slice kubepods-burstable-pod37e40a3eebe79d65e061bc6d0f3d5794.slice. Dec 13 14:31:58.309727 kubelet[2117]: I1213 14:31:58.309707 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.310057 kubelet[2117]: E1213 14:31:58.310034 2117 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:58.490433 kubelet[2117]: W1213 14:31:58.490360 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.490877 kubelet[2117]: E1213 14:31:58.490443 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.540673 kubelet[2117]: W1213 14:31:58.540585 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.540673 kubelet[2117]: E1213 14:31:58.540683 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.565162 kubelet[2117]: W1213 14:31:58.565093 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.565162 kubelet[2117]: E1213 14:31:58.565168 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:58.600936 env[1408]: time="2024-12-13T14:31:58.600880051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-34fc77c933,Uid:aba4bf60e8bd81b12ce19fe5ab0c676e,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:58.606578 env[1408]: time="2024-12-13T14:31:58.606533101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-34fc77c933,Uid:b0624d75f859b17792bf683562f35245,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:58.610138 env[1408]: time="2024-12-13T14:31:58.610102633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-34fc77c933,Uid:37e40a3eebe79d65e061bc6d0f3d5794,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:59.006254 
kubelet[2117]: E1213 14:31:59.006166 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="1.6s" Dec 13 14:31:59.112248 kubelet[2117]: I1213 14:31:59.112213 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:59.112637 kubelet[2117]: E1213 14:31:59.112598 2117 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:31:59.350578 kubelet[2117]: W1213 14:31:59.350452 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:59.350578 kubelet[2117]: E1213 14:31:59.350504 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:31:59.644324 kubelet[2117]: E1213 14:31:59.644210 2117 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:00.607249 kubelet[2117]: E1213 14:32:00.607157 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="3.2s" Dec 13 14:32:00.714618 kubelet[2117]: I1213 14:32:00.714582 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:00.715100 kubelet[2117]: E1213 14:32:00.714987 2117 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:00.854436 kubelet[2117]: W1213 14:32:00.854389 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:00.854436 kubelet[2117]: E1213 14:32:00.854440 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:01.264365 kubelet[2117]: W1213 14:32:01.264317 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:01.264365 kubelet[2117]: E1213 14:32:01.264372 2117 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:01.567916 kubelet[2117]: W1213 14:32:01.567791 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:01.567916 kubelet[2117]: E1213 14:32:01.567841 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:01.951045 kubelet[2117]: W1213 14:32:01.950930 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:01.951045 kubelet[2117]: E1213 14:32:01.950974 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:02.860842 kubelet[2117]: E1213 14:32:02.860720 2117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-34fc77c933.1810c30d19eeca85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-34fc77c933,UID:ci-3510.3.6-a-34fc77c933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-34fc77c933,},FirstTimestamp:2024-12-13 14:31:57.574711941 +0000 UTC m=+0.290146742,LastTimestamp:2024-12-13 14:31:57.574711941 +0000 UTC m=+0.290146742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-34fc77c933,}" Dec 13 14:32:04.227402 kubelet[2117]: E1213 14:32:03.806737 2117 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:04.227402 kubelet[2117]: E1213 14:32:03.807983 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="6.4s" Dec 13 14:32:04.227402 kubelet[2117]: I1213 14:32:03.917285 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:04.227402 kubelet[2117]: E1213 14:32:03.917606 2117 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:05.022512 kubelet[2117]: W1213 14:32:05.022464 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:05.022512 kubelet[2117]: E1213 14:32:05.022516 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:05.126445 kubelet[2117]: W1213 14:32:05.126340 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:05.126445 kubelet[2117]: E1213 14:32:05.126396 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:06.529930 kubelet[2117]: W1213 14:32:06.529890 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:06.529930 kubelet[2117]: E1213 14:32:06.529934 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-34fc77c933&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:07.133102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221833328.mount: Deactivated successfully. 
Dec 13 14:32:07.242725 kubelet[2117]: W1213 14:32:07.242678 2117 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:07.242725 kubelet[2117]: E1213 14:32:07.242729 2117 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Dec 13 14:32:07.474269 env[1408]: time="2024-12-13T14:32:07.474219413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:07.518889 env[1408]: time="2024-12-13T14:32:07.518824734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:08.097300 kubelet[2117]: E1213 14:32:08.097257 2117 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:32:08.575308 env[1408]: time="2024-12-13T14:32:08.575244353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.130434 env[1408]: time="2024-12-13T14:32:09.130375844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.231266 env[1408]: time="2024-12-13T14:32:09.231199339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.271930 env[1408]: time="2024-12-13T14:32:09.271868720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.276231 env[1408]: time="2024-12-13T14:32:09.276125549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.324598 env[1408]: time="2024-12-13T14:32:09.324542283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.372407 env[1408]: time="2024-12-13T14:32:09.372345112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.432362 env[1408]: time="2024-12-13T14:32:09.431740121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.526954 env[1408]: time="2024-12-13T14:32:09.526893977Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:09.573932 env[1408]: time="2024-12-13T14:32:09.573867901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:10.184100 env[1408]: time="2024-12-13T14:32:10.183848977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:10.184100 env[1408]: time="2024-12-13T14:32:10.183903577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:10.184100 env[1408]: time="2024-12-13T14:32:10.183921077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:10.184642 env[1408]: time="2024-12-13T14:32:10.184233780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b68e2831f2913a60127c85af68e12306edd3e45648e5f519e21c66d320bde76 pid=2157 runtime=io.containerd.runc.v2 Dec 13 14:32:10.207400 systemd[1]: run-containerd-runc-k8s.io-7b68e2831f2913a60127c85af68e12306edd3e45648e5f519e21c66d320bde76-runc.JwjKBH.mount: Deactivated successfully. Dec 13 14:32:10.209179 kubelet[2117]: E1213 14:32:10.208825 2117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-34fc77c933?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="7s" Dec 13 14:32:10.212681 systemd[1]: Started cri-containerd-7b68e2831f2913a60127c85af68e12306edd3e45648e5f519e21c66d320bde76.scope. Dec 13 14:32:10.260689 env[1408]: time="2024-12-13T14:32:10.260632395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-34fc77c933,Uid:aba4bf60e8bd81b12ce19fe5ab0c676e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b68e2831f2913a60127c85af68e12306edd3e45648e5f519e21c66d320bde76\"" Dec 13 14:32:10.264690 env[1408]: time="2024-12-13T14:32:10.264640822Z" level=info msg="CreateContainer within sandbox \"7b68e2831f2913a60127c85af68e12306edd3e45648e5f519e21c66d320bde76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:32:10.317182 env[1408]: time="2024-12-13T14:32:10.317106775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:10.317423 env[1408]: time="2024-12-13T14:32:10.317153676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:10.317423 env[1408]: time="2024-12-13T14:32:10.317167176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:10.317423 env[1408]: time="2024-12-13T14:32:10.317346177Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d60346ec878857025b1547e5adc13796b21cffaab3b58791c100e5cb98efac5 pid=2198 runtime=io.containerd.runc.v2 Dec 13 14:32:10.319571 kubelet[2117]: I1213 14:32:10.319539 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:10.319980 kubelet[2117]: E1213 14:32:10.319953 2117 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:10.333027 systemd[1]: Started cri-containerd-7d60346ec878857025b1547e5adc13796b21cffaab3b58791c100e5cb98efac5.scope. Dec 13 14:32:10.356061 env[1408]: time="2024-12-13T14:32:10.355985038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:10.356237 env[1408]: time="2024-12-13T14:32:10.356072838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:10.356237 env[1408]: time="2024-12-13T14:32:10.356100938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:10.356348 env[1408]: time="2024-12-13T14:32:10.356246139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3acb4e540f8e888d8f7af6ce3cab7946b6be03d8d89e1a87386386e89dedcdc9 pid=2231 runtime=io.containerd.runc.v2 Dec 13 14:32:10.383355 systemd[1]: Started cri-containerd-3acb4e540f8e888d8f7af6ce3cab7946b6be03d8d89e1a87386386e89dedcdc9.scope. 
Dec 13 14:32:10.407971 env[1408]: time="2024-12-13T14:32:10.407930588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-34fc77c933,Uid:37e40a3eebe79d65e061bc6d0f3d5794,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d60346ec878857025b1547e5adc13796b21cffaab3b58791c100e5cb98efac5\"" Dec 13 14:32:10.410849 env[1408]: time="2024-12-13T14:32:10.410813407Z" level=info msg="CreateContainer within sandbox \"7d60346ec878857025b1547e5adc13796b21cffaab3b58791c100e5cb98efac5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:32:10.470741 env[1408]: time="2024-12-13T14:32:10.436281879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-34fc77c933,Uid:b0624d75f859b17792bf683562f35245,Namespace:kube-system,Attempt:0,} returns sandbox id \"3acb4e540f8e888d8f7af6ce3cab7946b6be03d8d89e1a87386386e89dedcdc9\"" Dec 13 14:32:10.472911 env[1408]: time="2024-12-13T14:32:10.472864926Z" level=info msg="CreateContainer within sandbox \"3acb4e540f8e888d8f7af6ce3cab7946b6be03d8d89e1a87386386e89dedcdc9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:32:11.024176 env[1408]: time="2024-12-13T14:32:11.024113839Z" level=info msg="CreateContainer within sandbox \"7b68e2831f2913a60127c85af68e12306edd3e45648e5f519e21c66d320bde76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1dce624d4e544853f2d0b70d67a5ceff7dcb220bfcaa811ae54910cc51c1d607\"" Dec 13 14:32:11.025397 env[1408]: time="2024-12-13T14:32:11.025360547Z" level=info msg="StartContainer for \"1dce624d4e544853f2d0b70d67a5ceff7dcb220bfcaa811ae54910cc51c1d607\"" Dec 13 14:32:11.071821 env[1408]: time="2024-12-13T14:32:11.071763153Z" level=info msg="CreateContainer within sandbox \"7d60346ec878857025b1547e5adc13796b21cffaab3b58791c100e5cb98efac5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a00ec01e1a76e99268602a20f552bd6a278fd89f5f19daf38bad0927a9db05b\"" Dec 13 14:32:11.073220 env[1408]: time="2024-12-13T14:32:11.073185563Z" level=info msg="StartContainer for \"4a00ec01e1a76e99268602a20f552bd6a278fd89f5f19daf38bad0927a9db05b\"" Dec 13 14:32:11.094489 systemd[1]: Started cri-containerd-1dce624d4e544853f2d0b70d67a5ceff7dcb220bfcaa811ae54910cc51c1d607.scope. Dec 13 14:32:11.105179 systemd[1]: Started cri-containerd-4a00ec01e1a76e99268602a20f552bd6a278fd89f5f19daf38bad0927a9db05b.scope. Dec 13 14:32:11.233263 env[1408]: time="2024-12-13T14:32:11.233197918Z" level=info msg="StartContainer for \"4a00ec01e1a76e99268602a20f552bd6a278fd89f5f19daf38bad0927a9db05b\" returns successfully" Dec 13 14:32:11.235140 env[1408]: time="2024-12-13T14:32:11.235099331Z" level=info msg="StartContainer for \"1dce624d4e544853f2d0b70d67a5ceff7dcb220bfcaa811ae54910cc51c1d607\" returns successfully" Dec 13 14:32:11.327523 env[1408]: time="2024-12-13T14:32:11.327393840Z" level=info msg="CreateContainer within sandbox \"3acb4e540f8e888d8f7af6ce3cab7946b6be03d8d89e1a87386386e89dedcdc9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"864bdec3522eb8ac3f6875a8070cb96070cda520fbc1caeb8b6ba85c3fdbbeac\"" Dec 13 14:32:11.328448 env[1408]: time="2024-12-13T14:32:11.328259746Z" level=info msg="StartContainer for \"864bdec3522eb8ac3f6875a8070cb96070cda520fbc1caeb8b6ba85c3fdbbeac\"" Dec 13 14:32:11.388322 systemd[1]: Started cri-containerd-864bdec3522eb8ac3f6875a8070cb96070cda520fbc1caeb8b6ba85c3fdbbeac.scope. 
Dec 13 14:32:11.592284 env[1408]: time="2024-12-13T14:32:11.592175087Z" level=info msg="StartContainer for \"864bdec3522eb8ac3f6875a8070cb96070cda520fbc1caeb8b6ba85c3fdbbeac\" returns successfully" Dec 13 14:32:13.280976 kubelet[2117]: E1213 14:32:13.280847 2117 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.6-a-34fc77c933.1810c30d19eeca85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-34fc77c933,UID:ci-3510.3.6-a-34fc77c933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-34fc77c933,},FirstTimestamp:2024-12-13 14:31:57.574711941 +0000 UTC m=+0.290146742,LastTimestamp:2024-12-13 14:31:57.574711941 +0000 UTC m=+0.290146742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-34fc77c933,}" Dec 13 14:32:13.444584 kubelet[2117]: E1213 14:32:13.443903 2117 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.6-a-34fc77c933.1810c30d1bb61965 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-34fc77c933,UID:ci-3510.3.6-a-34fc77c933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-34fc77c933,},FirstTimestamp:2024-12-13 14:31:57.604551013 +0000 UTC m=+0.319985814,LastTimestamp:2024-12-13 14:31:57.604551013 +0000 UTC m=+0.319985814,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-34fc77c933,}" Dec 13 14:32:13.526677 kubelet[2117]: E1213 14:32:13.526558 2117 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.6-a-34fc77c933.1810c30d1f62b2fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-34fc77c933,UID:ci-3510.3.6-a-34fc77c933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-3510.3.6-a-34fc77c933 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-34fc77c933,},FirstTimestamp:2024-12-13 14:31:57.666194174 +0000 UTC m=+0.381629075,LastTimestamp:2024-12-13 14:31:57.666194174 +0000 UTC m=+0.381629075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-34fc77c933,}" Dec 13 14:32:13.615536 kubelet[2117]: E1213 14:32:13.615388 2117 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.6-a-34fc77c933" not found Dec 13 14:32:14.133058 kubelet[2117]: E1213 14:32:14.133013 2117 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.6-a-34fc77c933" not found Dec 13 14:32:14.595801 kubelet[2117]: E1213 14:32:14.595770 2117 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"ci-3510.3.6-a-34fc77c933" not found Dec 13 14:32:15.493948 kubelet[2117]: E1213 14:32:15.493903 2117 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.6-a-34fc77c933" not found Dec 13 14:32:17.213152 kubelet[2117]: E1213 14:32:17.213115 2117 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-34fc77c933\" not found" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:17.322800 kubelet[2117]: I1213 14:32:17.322765 2117 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:17.346717 kubelet[2117]: I1213 14:32:17.346682 2117 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:17.354569 kubelet[2117]: E1213 14:32:17.354532 2117 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:32:17.460065 kubelet[2117]: E1213 14:32:17.454943 2117 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:32:17.472184 systemd[1]: Reloading. Dec 13 14:32:17.555852 kubelet[2117]: E1213 14:32:17.555811 2117 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:32:17.569565 /usr/lib/systemd/system-generators/torcx-generator[2406]: time="2024-12-13T14:32:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:32:17.570036 /usr/lib/systemd/system-generators/torcx-generator[2406]: time="2024-12-13T14:32:17Z" level=info msg="torcx already run" Dec 13 14:32:17.656989 kubelet[2117]: E1213 14:32:17.656955 2117 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:32:17.660008 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:32:17.660027 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:32:17.676352 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:32:17.760088 kubelet[2117]: E1213 14:32:17.759974 2117 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-34fc77c933\" not found" Dec 13 14:32:17.785024 systemd[1]: Stopping kubelet.service... Dec 13 14:32:17.798138 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:32:17.798339 systemd[1]: Stopped kubelet.service. Dec 13 14:32:17.800355 systemd[1]: Starting kubelet.service... Dec 13 14:32:17.963231 systemd[1]: Started kubelet.service. Dec 13 14:32:18.007086 kubelet[2473]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:32:18.007086 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:32:18.007086 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:32:18.007579 kubelet[2473]: I1213 14:32:18.007150 2473 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:32:18.012595 kubelet[2473]: I1213 14:32:18.012466 2473 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:32:18.012595 kubelet[2473]: I1213 14:32:18.012488 2473 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:32:18.013379 kubelet[2473]: I1213 14:32:18.013349 2473 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:32:18.014676 kubelet[2473]: I1213 14:32:18.014639 2473 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:32:18.439937 kubelet[2473]: I1213 14:32:18.438763 2473 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:32:18.446149 kubelet[2473]: I1213 14:32:18.446125 2473 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:32:18.446589 kubelet[2473]: I1213 14:32:18.446550 2473 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:32:18.446934 kubelet[2473]: I1213 14:32:18.446709 2473 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-34fc77c933","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:32:18.447142 kubelet[2473]: I1213 14:32:18.447127 2473 topology_manager.go:138] "Creating topology manager with none 
policy" Dec 13 14:32:18.447222 kubelet[2473]: I1213 14:32:18.447213 2473 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:32:18.447379 kubelet[2473]: I1213 14:32:18.447366 2473 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:32:18.447570 kubelet[2473]: I1213 14:32:18.447548 2473 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:32:18.447570 kubelet[2473]: I1213 14:32:18.447568 2473 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:32:18.447708 kubelet[2473]: I1213 14:32:18.447592 2473 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:32:18.447708 kubelet[2473]: I1213 14:32:18.447622 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:32:18.449012 kubelet[2473]: I1213 14:32:18.448987 2473 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:32:18.449979 kubelet[2473]: I1213 14:32:18.449963 2473 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:32:18.451636 kubelet[2473]: I1213 14:32:18.451621 2473 server.go:1264] "Started kubelet" Dec 13 14:32:18.461675 kubelet[2473]: I1213 14:32:18.461640 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:32:18.467763 kubelet[2473]: I1213 14:32:18.467731 2473 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:32:18.468991 kubelet[2473]: I1213 14:32:18.468970 2473 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:32:18.470155 kubelet[2473]: I1213 14:32:18.470105 2473 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:32:18.470440 kubelet[2473]: I1213 14:32:18.470425 2473 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:32:18.474256 kubelet[2473]: I1213 14:32:18.474163 2473 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:32:18.474969 kubelet[2473]: I1213 14:32:18.474944 2473 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:32:18.475131 kubelet[2473]: I1213 14:32:18.475109 2473 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:32:18.480750 kubelet[2473]: I1213 14:32:18.480717 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:32:18.484287 kubelet[2473]: I1213 14:32:18.484260 2473 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:32:18.484431 kubelet[2473]: I1213 14:32:18.484420 2473 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:32:18.484509 kubelet[2473]: I1213 14:32:18.484501 2473 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:32:18.484630 kubelet[2473]: E1213 14:32:18.484612 2473 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:32:18.503100 kubelet[2473]: I1213 14:32:18.498638 2473 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:32:18.503100 kubelet[2473]: I1213 14:32:18.498788 2473 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:32:18.503100 kubelet[2473]: E1213 14:32:18.501805 2473 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:32:18.503100 kubelet[2473]: I1213 14:32:18.502755 2473 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:32:18.545918 kubelet[2473]: I1213 14:32:18.545883 2473 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:32:18.545918 kubelet[2473]: I1213 14:32:18.545919 2473 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:32:18.546215 kubelet[2473]: I1213 14:32:18.545941 2473 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:32:18.546215 kubelet[2473]: I1213 14:32:18.546113 2473 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:32:18.546215 kubelet[2473]: I1213 14:32:18.546128 2473 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:32:18.546215 kubelet[2473]: I1213 14:32:18.546147 2473 policy_none.go:49] "None policy: Start" Dec 13 14:32:18.546982 kubelet[2473]: I1213 14:32:18.546961 2473 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:32:18.546982 kubelet[2473]: I1213 14:32:18.546985 2473 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:32:18.547161 kubelet[2473]: I1213 14:32:18.547139 2473 state_mem.go:75] "Updated machine memory state" Dec 13 14:32:18.550899 kubelet[2473]: I1213 14:32:18.550884 2473 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:32:18.551239 kubelet[2473]: I1213 14:32:18.551205 2473 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:32:18.552704 kubelet[2473]: I1213 14:32:18.552690 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:32:18.578137 kubelet[2473]: I1213 14:32:18.578103 2473 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.585834 kubelet[2473]: I1213 14:32:18.585792 2473 topology_manager.go:215] "Topology Admit Handler" podUID="37e40a3eebe79d65e061bc6d0f3d5794" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.586095 kubelet[2473]: I1213 14:32:18.586071 2473 topology_manager.go:215] "Topology Admit Handler" podUID="aba4bf60e8bd81b12ce19fe5ab0c676e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.586296 kubelet[2473]: I1213 14:32:18.586275 2473 topology_manager.go:215] "Topology Admit Handler" 
podUID="b0624d75f859b17792bf683562f35245" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.593985 kubelet[2473]: I1213 14:32:18.593962 2473 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.594152 kubelet[2473]: I1213 14:32:18.594144 2473 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.602176 kubelet[2473]: W1213 14:32:18.601239 2473 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:32:18.602176 kubelet[2473]: W1213 14:32:18.601788 2473 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:32:18.602833 kubelet[2473]: W1213 14:32:18.602813 2473 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:32:18.675947 kubelet[2473]: I1213 14:32:18.675911 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37e40a3eebe79d65e061bc6d0f3d5794-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-34fc77c933\" (UID: \"37e40a3eebe79d65e061bc6d0f3d5794\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.675947 kubelet[2473]: I1213 14:32:18.675946 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37e40a3eebe79d65e061bc6d0f3d5794-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-34fc77c933\" (UID: \"37e40a3eebe79d65e061bc6d0f3d5794\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676200 kubelet[2473]: I1213 14:32:18.675974 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37e40a3eebe79d65e061bc6d0f3d5794-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-34fc77c933\" (UID: \"37e40a3eebe79d65e061bc6d0f3d5794\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676200 kubelet[2473]: I1213 14:32:18.675997 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676200 kubelet[2473]: I1213 14:32:18.676018 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676200 kubelet[2473]: I1213 14:32:18.676037 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0624d75f859b17792bf683562f35245-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-34fc77c933\" (UID: 
\"b0624d75f859b17792bf683562f35245\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676200 kubelet[2473]: I1213 14:32:18.676070 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676365 kubelet[2473]: I1213 14:32:18.676098 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:18.676365 kubelet[2473]: I1213 14:32:18.676146 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aba4bf60e8bd81b12ce19fe5ab0c676e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-34fc77c933\" (UID: \"aba4bf60e8bd81b12ce19fe5ab0c676e\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" Dec 13 14:32:19.455499 kubelet[2473]: I1213 14:32:19.455453 2473 apiserver.go:52] "Watching apiserver" Dec 13 14:32:19.475257 kubelet[2473]: I1213 14:32:19.475227 2473 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:32:19.560923 kubelet[2473]: I1213 14:32:19.560850 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-34fc77c933" podStartSLOduration=1.56082852 podStartE2EDuration="1.56082852s" podCreationTimestamp="2024-12-13 14:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:19.55194257 +0000 UTC m=+1.581443866" watchObservedRunningTime="2024-12-13 14:32:19.56082852 +0000 UTC m=+1.590329816" Dec 13 14:32:19.561284 kubelet[2473]: I1213 14:32:19.561237 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-34fc77c933" podStartSLOduration=1.5612235220000001 podStartE2EDuration="1.561223522s" podCreationTimestamp="2024-12-13 14:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:19.560664319 +0000 UTC m=+1.590165615" watchObservedRunningTime="2024-12-13 14:32:19.561223522 +0000 UTC m=+1.590724818" Dec 13 14:32:19.584850 kubelet[2473]: I1213 14:32:19.584781 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-34fc77c933" podStartSLOduration=1.584762454 podStartE2EDuration="1.584762454s" podCreationTimestamp="2024-12-13 14:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:19.574176795 +0000 UTC m=+1.603678091" watchObservedRunningTime="2024-12-13 14:32:19.584762454 +0000 UTC m=+1.614263750" Dec 13 14:32:21.079480 sudo[2504]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 
14:32:21.079894 sudo[2504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:32:21.593048 sudo[2504]: pam_unix(sudo:session): session closed for user root Dec 13 14:32:22.951949 sudo[1779]: pam_unix(sudo:session): session closed for user root Dec 13 14:32:23.071794 sshd[1776]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:23.075189 systemd[1]: sshd@4-10.200.8.20:22-10.200.16.10:60046.service: Deactivated successfully. Dec 13 14:32:23.076299 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:32:23.076509 systemd[1]: session-7.scope: Consumed 4.386s CPU time. Dec 13 14:32:23.077381 systemd-logind[1397]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:32:23.078422 systemd-logind[1397]: Removed session 7. Dec 13 14:32:31.051257 kubelet[2473]: I1213 14:32:31.051212 2473 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:32:31.051969 env[1408]: time="2024-12-13T14:32:31.051927681Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:32:31.052304 kubelet[2473]: I1213 14:32:31.052171 2473 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:32:31.525623 kubelet[2473]: I1213 14:32:31.525569 2473 topology_manager.go:215] "Topology Admit Handler" podUID="228866ac-2b56-4212-ad1e-e56a43d2a836" podNamespace="kube-system" podName="kube-proxy-htcm7" Dec 13 14:32:31.532475 systemd[1]: Created slice kubepods-besteffort-pod228866ac_2b56_4212_ad1e_e56a43d2a836.slice. Dec 13 14:32:31.540135 kubelet[2473]: I1213 14:32:31.540104 2473 topology_manager.go:215] "Topology Admit Handler" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" podNamespace="kube-system" podName="cilium-nzwkh" Dec 13 14:32:31.546008 systemd[1]: Created slice kubepods-burstable-podc003f275_d71c_4136_ad85_62cf1aee22f9.slice. 
Dec 13 14:32:31.560159 kubelet[2473]: I1213 14:32:31.560126 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-net\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.560366 kubelet[2473]: I1213 14:32:31.560349 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/228866ac-2b56-4212-ad1e-e56a43d2a836-kube-proxy\") pod \"kube-proxy-htcm7\" (UID: \"228866ac-2b56-4212-ad1e-e56a43d2a836\") " pod="kube-system/kube-proxy-htcm7" Dec 13 14:32:31.560458 kubelet[2473]: I1213 14:32:31.560445 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-hostproc\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.560547 kubelet[2473]: I1213 14:32:31.560536 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-etc-cni-netd\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.560626 kubelet[2473]: I1213 14:32:31.560615 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cni-path\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.560743 kubelet[2473]: I1213 14:32:31.560728 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/228866ac-2b56-4212-ad1e-e56a43d2a836-lib-modules\") pod \"kube-proxy-htcm7\" (UID: \"228866ac-2b56-4212-ad1e-e56a43d2a836\") " pod="kube-system/kube-proxy-htcm7" Dec 13 14:32:31.560845 kubelet[2473]: I1213 14:32:31.560833 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/228866ac-2b56-4212-ad1e-e56a43d2a836-xtables-lock\") pod \"kube-proxy-htcm7\" (UID: \"228866ac-2b56-4212-ad1e-e56a43d2a836\") " pod="kube-system/kube-proxy-htcm7" Dec 13 14:32:31.560934 kubelet[2473]: I1213 14:32:31.560920 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf599\" (UniqueName: \"kubernetes.io/projected/228866ac-2b56-4212-ad1e-e56a43d2a836-kube-api-access-hf599\") pod \"kube-proxy-htcm7\" (UID: \"228866ac-2b56-4212-ad1e-e56a43d2a836\") " pod="kube-system/kube-proxy-htcm7" Dec 13 14:32:31.561023 kubelet[2473]: I1213 14:32:31.561011 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-bpf-maps\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561127 kubelet[2473]: I1213 14:32:31.561097 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-cgroup\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561219 kubelet[2473]: I1213 14:32:31.561205 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-lib-modules\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561298 kubelet[2473]: I1213 14:32:31.561286 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-xtables-lock\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561385 kubelet[2473]: I1213 14:32:31.561374 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-config-path\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561467 kubelet[2473]: I1213 14:32:31.561456 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-kernel\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561545 kubelet[2473]: I1213 14:32:31.561533 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-hubble-tls\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561624 kubelet[2473]: I1213 14:32:31.561612 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfbkz\" (UniqueName: \"kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-kube-api-access-wfbkz\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561726 kubelet[2473]: I1213 14:32:31.561711 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c003f275-d71c-4136-ad85-62cf1aee22f9-clustermesh-secrets\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.561829 kubelet[2473]: I1213 14:32:31.561812 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-run\") pod \"cilium-nzwkh\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " pod="kube-system/cilium-nzwkh" Dec 13 14:32:31.688021 kubelet[2473]: E1213 14:32:31.686707 2473 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 14:32:31.688021 kubelet[2473]: E1213 14:32:31.686736 2473 projected.go:200] Error preparing data for projected volume kube-api-access-wfbkz for pod 
kube-system/cilium-nzwkh: configmap "kube-root-ca.crt" not found Dec 13 14:32:31.688021 kubelet[2473]: E1213 14:32:31.686797 2473 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-kube-api-access-wfbkz podName:c003f275-d71c-4136-ad85-62cf1aee22f9 nodeName:}" failed. No retries permitted until 2024-12-13 14:32:32.186776535 +0000 UTC m=+14.216277831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wfbkz" (UniqueName: "kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-kube-api-access-wfbkz") pod "cilium-nzwkh" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9") : configmap "kube-root-ca.crt" not found Dec 13 14:32:31.689172 kubelet[2473]: E1213 14:32:31.689144 2473 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 14:32:31.689172 kubelet[2473]: E1213 14:32:31.689166 2473 projected.go:200] Error preparing data for projected volume kube-api-access-hf599 for pod kube-system/kube-proxy-htcm7: configmap "kube-root-ca.crt" not found Dec 13 14:32:31.689325 kubelet[2473]: E1213 14:32:31.689211 2473 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/228866ac-2b56-4212-ad1e-e56a43d2a836-kube-api-access-hf599 podName:228866ac-2b56-4212-ad1e-e56a43d2a836 nodeName:}" failed. No retries permitted until 2024-12-13 14:32:32.189194045 +0000 UTC m=+14.218695341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hf599" (UniqueName: "kubernetes.io/projected/228866ac-2b56-4212-ad1e-e56a43d2a836-kube-api-access-hf599") pod "kube-proxy-htcm7" (UID: "228866ac-2b56-4212-ad1e-e56a43d2a836") : configmap "kube-root-ca.crt" not found Dec 13 14:32:32.089024 kubelet[2473]: I1213 14:32:32.088978 2473 topology_manager.go:215] "Topology Admit Handler" podUID="1d76133e-6973-48a2-a28f-d827290a64f6" podNamespace="kube-system" podName="cilium-operator-599987898-4n62q" Dec 13 14:32:32.096445 systemd[1]: Created slice kubepods-besteffort-pod1d76133e_6973_48a2_a28f_d827290a64f6.slice. 
Dec 13 14:32:32.166845 kubelet[2473]: I1213 14:32:32.166807 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d76133e-6973-48a2-a28f-d827290a64f6-cilium-config-path\") pod \"cilium-operator-599987898-4n62q\" (UID: \"1d76133e-6973-48a2-a28f-d827290a64f6\") " pod="kube-system/cilium-operator-599987898-4n62q" Dec 13 14:32:32.167105 kubelet[2473]: I1213 14:32:32.167066 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwk8\" (UniqueName: \"kubernetes.io/projected/1d76133e-6973-48a2-a28f-d827290a64f6-kube-api-access-mbwk8\") pod \"cilium-operator-599987898-4n62q\" (UID: \"1d76133e-6973-48a2-a28f-d827290a64f6\") " pod="kube-system/cilium-operator-599987898-4n62q" Dec 13 14:32:32.404989 env[1408]: time="2024-12-13T14:32:32.404825332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-4n62q,Uid:1d76133e-6973-48a2-a28f-d827290a64f6,Namespace:kube-system,Attempt:0,}" Dec 13 14:32:32.438184 env[1408]: time="2024-12-13T14:32:32.438143079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htcm7,Uid:228866ac-2b56-4212-ad1e-e56a43d2a836,Namespace:kube-system,Attempt:0,}" Dec 13 14:32:32.449192 env[1408]: time="2024-12-13T14:32:32.449150428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nzwkh,Uid:c003f275-d71c-4136-ad85-62cf1aee22f9,Namespace:kube-system,Attempt:0,}" Dec 13 14:32:32.464682 env[1408]: time="2024-12-13T14:32:32.460625678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:32.464682 env[1408]: time="2024-12-13T14:32:32.460681179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:32.464682 env[1408]: time="2024-12-13T14:32:32.460699079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:32.464682 env[1408]: time="2024-12-13T14:32:32.460872279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4 pid=2563 runtime=io.containerd.runc.v2 Dec 13 14:32:32.478244 systemd[1]: Started cri-containerd-db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4.scope. Dec 13 14:32:32.523838 env[1408]: time="2024-12-13T14:32:32.523766357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:32.524007 env[1408]: time="2024-12-13T14:32:32.523850458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:32.524007 env[1408]: time="2024-12-13T14:32:32.523877858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:32.524172 env[1408]: time="2024-12-13T14:32:32.524025859Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079 pid=2597 runtime=io.containerd.runc.v2 Dec 13 14:32:32.534805 env[1408]: time="2024-12-13T14:32:32.534397004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:32.534805 env[1408]: time="2024-12-13T14:32:32.534472705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:32.534805 env[1408]: time="2024-12-13T14:32:32.534500705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:32.534805 env[1408]: time="2024-12-13T14:32:32.534639805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f635af2767e2df7b9d1bb19370f09271d1e6d371129b4de031943b8625e4624 pid=2614 runtime=io.containerd.runc.v2 Dec 13 14:32:32.536371 env[1408]: time="2024-12-13T14:32:32.536325113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-4n62q,Uid:1d76133e-6973-48a2-a28f-d827290a64f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4\"" Dec 13 14:32:32.539525 env[1408]: time="2024-12-13T14:32:32.538490222Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:32:32.549891 systemd[1]: Started cri-containerd-362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079.scope. Dec 13 14:32:32.558482 systemd[1]: Started cri-containerd-4f635af2767e2df7b9d1bb19370f09271d1e6d371129b4de031943b8625e4624.scope. 
Dec 13 14:32:32.594619 env[1408]: time="2024-12-13T14:32:32.594572070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nzwkh,Uid:c003f275-d71c-4136-ad85-62cf1aee22f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\"" Dec 13 14:32:32.615973 env[1408]: time="2024-12-13T14:32:32.615927065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htcm7,Uid:228866ac-2b56-4212-ad1e-e56a43d2a836,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f635af2767e2df7b9d1bb19370f09271d1e6d371129b4de031943b8625e4624\"" Dec 13 14:32:32.619231 env[1408]: time="2024-12-13T14:32:32.619182579Z" level=info msg="CreateContainer within sandbox \"4f635af2767e2df7b9d1bb19370f09271d1e6d371129b4de031943b8625e4624\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:32:32.661219 env[1408]: time="2024-12-13T14:32:32.661108664Z" level=info msg="CreateContainer within sandbox \"4f635af2767e2df7b9d1bb19370f09271d1e6d371129b4de031943b8625e4624\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a0d51adc9ad0ba77f994a1c613166fbce30387ebcbda58f086ba9aec140da22\"" Dec 13 14:32:32.662370 env[1408]: time="2024-12-13T14:32:32.662330870Z" level=info msg="StartContainer for \"4a0d51adc9ad0ba77f994a1c613166fbce30387ebcbda58f086ba9aec140da22\"" Dec 13 14:32:32.693929 systemd[1]: run-containerd-runc-k8s.io-4a0d51adc9ad0ba77f994a1c613166fbce30387ebcbda58f086ba9aec140da22-runc.A2s9Ht.mount: Deactivated successfully. Dec 13 14:32:32.700431 systemd[1]: Started cri-containerd-4a0d51adc9ad0ba77f994a1c613166fbce30387ebcbda58f086ba9aec140da22.scope. Dec 13 14:32:32.734007 env[1408]: time="2024-12-13T14:32:32.733953386Z" level=info msg="StartContainer for \"4a0d51adc9ad0ba77f994a1c613166fbce30387ebcbda58f086ba9aec140da22\" returns successfully" Dec 13 14:32:33.585061 kubelet[2473]: I1213 14:32:33.584991 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-htcm7" podStartSLOduration=2.5849692060000002 podStartE2EDuration="2.584969206s" podCreationTimestamp="2024-12-13 14:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:33.584968906 +0000 UTC m=+15.614470202" watchObservedRunningTime="2024-12-13 14:32:33.584969206 +0000 UTC m=+15.614470502" Dec 13 14:32:34.904687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533513345.mount: Deactivated successfully. 
Dec 13 14:32:35.637903 env[1408]: time="2024-12-13T14:32:35.637853876Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:35.645721 env[1408]: time="2024-12-13T14:32:35.645683009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:35.650371 env[1408]: time="2024-12-13T14:32:35.650336228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:35.650905 env[1408]: time="2024-12-13T14:32:35.650872931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:32:35.653660 env[1408]: time="2024-12-13T14:32:35.652967140Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:32:35.654403 env[1408]: time="2024-12-13T14:32:35.654366545Z" level=info msg="CreateContainer within sandbox \"db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:32:35.686396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797220352.mount: Deactivated successfully. Dec 13 14:32:35.701596 env[1408]: time="2024-12-13T14:32:35.701555044Z" level=info msg="CreateContainer within sandbox \"db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\"" Dec 13 14:32:35.702128 env[1408]: time="2024-12-13T14:32:35.702040546Z" level=info msg="StartContainer for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\"" Dec 13 14:32:35.719981 systemd[1]: Started cri-containerd-2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78.scope. Dec 13 14:32:35.756172 env[1408]: time="2024-12-13T14:32:35.756122374Z" level=info msg="StartContainer for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" returns successfully" Dec 13 14:32:38.506693 kubelet[2473]: I1213 14:32:38.506182 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-4n62q" podStartSLOduration=3.391959249 podStartE2EDuration="6.506162866s" podCreationTimestamp="2024-12-13 14:32:32 +0000 UTC" firstStartedPulling="2024-12-13 14:32:32.537806019 +0000 UTC m=+14.567307415" lastFinishedPulling="2024-12-13 14:32:35.652009636 +0000 UTC m=+17.681511032" observedRunningTime="2024-12-13 14:32:36.590474649 +0000 UTC m=+18.619976045" watchObservedRunningTime="2024-12-13 14:32:38.506162866 +0000 UTC m=+20.535664162" Dec 13 14:32:41.823706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604195829.mount: Deactivated successfully. 
Dec 13 14:32:44.556498 env[1408]: time="2024-12-13T14:32:44.556446960Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:44.562745 env[1408]: time="2024-12-13T14:32:44.562708284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:44.567135 env[1408]: time="2024-12-13T14:32:44.567096600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:44.567749 env[1408]: time="2024-12-13T14:32:44.567699702Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:32:44.570546 env[1408]: time="2024-12-13T14:32:44.570498512Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:32:44.599294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638521437.mount: Deactivated successfully. Dec 13 14:32:44.608827 env[1408]: time="2024-12-13T14:32:44.608780154Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\"" Dec 13 14:32:44.609478 env[1408]: time="2024-12-13T14:32:44.609444856Z" level=info msg="StartContainer for \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\"" Dec 13 14:32:44.637911 systemd[1]: Started cri-containerd-fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80.scope. Dec 13 14:32:44.670736 env[1408]: time="2024-12-13T14:32:44.670684182Z" level=info msg="StartContainer for \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\" returns successfully" Dec 13 14:32:44.676650 systemd[1]: cri-containerd-fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80.scope: Deactivated successfully. Dec 13 14:32:45.595443 systemd[1]: run-containerd-runc-k8s.io-fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80-runc.Del2BR.mount: Deactivated successfully. Dec 13 14:32:45.595600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80-rootfs.mount: Deactivated successfully. 
Dec 13 14:32:54.679285 env[1408]: time="2024-12-13T14:32:54.679227913Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80,ID:fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80,Pid:2888,ExitStatus:0,ExitedAt:2024-12-13 14:32:44.678418111 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Dec 13 14:32:56.595117 env[1408]: time="2024-12-13T14:32:56.594955891Z" level=info msg="TaskExit event &TaskExit{ContainerID:fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80,ID:fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80,Pid:2888,ExitStatus:0,ExitedAt:2024-12-13 14:32:44.678418111 +0000 UTC,XXX_unrecognized:[],}" Dec 13 14:33:00.285679 env[1408]: time="2024-12-13T14:32:58.595817110Z" level=error msg="get state for fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80" error="context deadline exceeded: unknown" Dec 13 14:33:00.285679 env[1408]: time="2024-12-13T14:32:58.595843510Z" level=warning msg="unknown status" status=0 Dec 13 14:33:00.632624 env[1408]: time="2024-12-13T14:33:00.632523912Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:33:00.876633 env[1408]: time="2024-12-13T14:33:00.876569361Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\"" Dec 13 14:33:00.877334 env[1408]: time="2024-12-13T14:33:00.877277163Z" level=info msg="StartContainer for \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\"" Dec 13 14:33:00.904032 systemd[1]: Started cri-containerd-ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0.scope. Dec 13 14:33:00.938352 env[1408]: time="2024-12-13T14:33:00.937568548Z" level=info msg="StartContainer for \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\" returns successfully" Dec 13 14:33:00.947411 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:33:00.948488 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:33:00.949783 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:33:00.952448 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:33:00.952914 systemd[1]: cri-containerd-ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0.scope: Deactivated successfully. Dec 13 14:33:00.968260 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:33:01.737881 systemd[1]: run-containerd-runc-k8s.io-ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0-runc.9Cq9dP.mount: Deactivated successfully. Dec 13 14:33:01.738197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0-rootfs.mount: Deactivated successfully. 
Dec 13 14:33:01.981632 env[1408]: time="2024-12-13T14:33:01.981564221Z" level=info msg="shim disconnected" id=ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0 Dec 13 14:33:01.981632 env[1408]: time="2024-12-13T14:33:01.981625321Z" level=warning msg="cleaning up after shim disconnected" id=ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0 namespace=k8s.io Dec 13 14:33:01.982247 env[1408]: time="2024-12-13T14:33:01.981640421Z" level=info msg="cleaning up dead shim" Dec 13 14:33:01.990789 env[1408]: time="2024-12-13T14:33:01.990392548Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2969 runtime=io.containerd.runc.v2\n" Dec 13 14:33:02.638437 env[1408]: time="2024-12-13T14:33:02.638385198Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:33:02.929410 env[1408]: time="2024-12-13T14:33:02.929176873Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\"" Dec 13 14:33:02.930073 env[1408]: time="2024-12-13T14:33:02.930007275Z" level=info msg="StartContainer for \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\"" Dec 13 14:33:02.957349 systemd[1]: run-containerd-runc-k8s.io-ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d-runc.u9d7Df.mount: Deactivated successfully. Dec 13 14:33:02.962500 systemd[1]: Started cri-containerd-ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d.scope. Dec 13 14:33:02.992825 systemd[1]: cri-containerd-ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d.scope: Deactivated successfully. Dec 13 14:33:02.997892 env[1408]: time="2024-12-13T14:33:02.997850279Z" level=info msg="StartContainer for \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\" returns successfully" Dec 13 14:33:03.773934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d-rootfs.mount: Deactivated successfully. 
Dec 13 14:33:03.879136 env[1408]: time="2024-12-13T14:33:03.879059807Z" level=info msg="shim disconnected" id=ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d Dec 13 14:33:03.879136 env[1408]: time="2024-12-13T14:33:03.879127807Z" level=warning msg="cleaning up after shim disconnected" id=ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d namespace=k8s.io Dec 13 14:33:03.879136 env[1408]: time="2024-12-13T14:33:03.879143107Z" level=info msg="cleaning up dead shim" Dec 13 14:33:03.887228 env[1408]: time="2024-12-13T14:33:03.887184931Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3028 runtime=io.containerd.runc.v2\n" Dec 13 14:33:04.646271 env[1408]: time="2024-12-13T14:33:04.646224976Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:33:04.759980 env[1408]: time="2024-12-13T14:33:04.759939512Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\"" Dec 13 14:33:04.760534 env[1408]: time="2024-12-13T14:33:04.760498714Z" level=info msg="StartContainer for \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\"" Dec 13 14:33:04.784968 systemd[1]: Started cri-containerd-365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde.scope. Dec 13 14:33:04.791843 systemd[1]: run-containerd-runc-k8s.io-365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde-runc.Ux8BfF.mount: Deactivated successfully. Dec 13 14:33:04.824023 systemd[1]: cri-containerd-365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde.scope: Deactivated successfully. Dec 13 14:33:04.830946 env[1408]: time="2024-12-13T14:33:04.830900322Z" level=info msg="StartContainer for \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\" returns successfully" Dec 13 14:33:04.849619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde-rootfs.mount: Deactivated successfully. 
Dec 13 14:33:04.863745 env[1408]: time="2024-12-13T14:33:04.863688819Z" level=info msg="shim disconnected" id=365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde Dec 13 14:33:04.863745 env[1408]: time="2024-12-13T14:33:04.863737719Z" level=warning msg="cleaning up after shim disconnected" id=365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde namespace=k8s.io Dec 13 14:33:04.863745 env[1408]: time="2024-12-13T14:33:04.863749619Z" level=info msg="cleaning up dead shim" Dec 13 14:33:04.871321 env[1408]: time="2024-12-13T14:33:04.871284341Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3085 runtime=io.containerd.runc.v2\n" Dec 13 14:33:05.652504 env[1408]: time="2024-12-13T14:33:05.652419832Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:33:05.723346 env[1408]: time="2024-12-13T14:33:05.723300040Z" level=info msg="CreateContainer within sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\"" Dec 13 14:33:05.725648 env[1408]: time="2024-12-13T14:33:05.723963942Z" level=info msg="StartContainer for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\"" Dec 13 14:33:05.741508 systemd[1]: Started cri-containerd-5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593.scope. Dec 13 14:33:05.783438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214929731.mount: Deactivated successfully. Dec 13 14:33:05.793424 env[1408]: time="2024-12-13T14:33:05.792728043Z" level=info msg="StartContainer for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" returns successfully" Dec 13 14:33:05.818782 systemd[1]: run-containerd-runc-k8s.io-5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593-runc.lIz3ZL.mount: Deactivated successfully. Dec 13 14:33:05.888988 kubelet[2473]: I1213 14:33:05.888944 2473 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:33:05.930359 kubelet[2473]: I1213 14:33:05.930241 2473 topology_manager.go:215] "Topology Admit Handler" podUID="5ba009bb-a2dc-44f4-a417-7362ad693523" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mcm5k" Dec 13 14:33:05.936489 systemd[1]: Created slice kubepods-burstable-pod5ba009bb_a2dc_44f4_a417_7362ad693523.slice. Dec 13 14:33:05.941651 kubelet[2473]: I1213 14:33:05.941615 2473 topology_manager.go:215] "Topology Admit Handler" podUID="38032238-3deb-4406-8d69-4d28cb8dbed4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qh29s" Dec 13 14:33:05.949740 systemd[1]: Created slice kubepods-burstable-pod38032238_3deb_4406_8d69_4d28cb8dbed4.slice. 
Dec 13 14:33:06.096764 kubelet[2473]: I1213 14:33:06.096713 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38032238-3deb-4406-8d69-4d28cb8dbed4-config-volume\") pod \"coredns-7db6d8ff4d-qh29s\" (UID: \"38032238-3deb-4406-8d69-4d28cb8dbed4\") " pod="kube-system/coredns-7db6d8ff4d-qh29s" Dec 13 14:33:06.097076 kubelet[2473]: I1213 14:33:06.097038 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ba009bb-a2dc-44f4-a417-7362ad693523-config-volume\") pod \"coredns-7db6d8ff4d-mcm5k\" (UID: \"5ba009bb-a2dc-44f4-a417-7362ad693523\") " pod="kube-system/coredns-7db6d8ff4d-mcm5k" Dec 13 14:33:06.097222 kubelet[2473]: I1213 14:33:06.097202 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxswg\" (UniqueName: \"kubernetes.io/projected/38032238-3deb-4406-8d69-4d28cb8dbed4-kube-api-access-hxswg\") pod \"coredns-7db6d8ff4d-qh29s\" (UID: \"38032238-3deb-4406-8d69-4d28cb8dbed4\") " pod="kube-system/coredns-7db6d8ff4d-qh29s" Dec 13 14:33:06.097364 kubelet[2473]: I1213 14:33:06.097345 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9tr\" (UniqueName: \"kubernetes.io/projected/5ba009bb-a2dc-44f4-a417-7362ad693523-kube-api-access-md9tr\") pod \"coredns-7db6d8ff4d-mcm5k\" (UID: \"5ba009bb-a2dc-44f4-a417-7362ad693523\") " pod="kube-system/coredns-7db6d8ff4d-mcm5k" Dec 13 14:33:06.240802 env[1408]: time="2024-12-13T14:33:06.240758749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mcm5k,Uid:5ba009bb-a2dc-44f4-a417-7362ad693523,Namespace:kube-system,Attempt:0,}" Dec 13 14:33:06.253855 env[1408]: time="2024-12-13T14:33:06.253815687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qh29s,Uid:38032238-3deb-4406-8d69-4d28cb8dbed4,Namespace:kube-system,Attempt:0,}" Dec 13 14:33:06.669620 kubelet[2473]: I1213 14:33:06.669468 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nzwkh" podStartSLOduration=23.697886469 podStartE2EDuration="35.669449993s" podCreationTimestamp="2024-12-13 14:32:31 +0000 UTC" firstStartedPulling="2024-12-13 14:32:32.597119982 +0000 UTC m=+14.626621278" lastFinishedPulling="2024-12-13 14:32:44.568683506 +0000 UTC m=+26.598184802" observedRunningTime="2024-12-13 14:33:06.669294293 +0000 UTC m=+48.698795589" watchObservedRunningTime="2024-12-13 14:33:06.669449993 +0000 UTC m=+48.698951289" Dec 13 14:33:07.940058 systemd-networkd[1571]: cilium_host: Link UP Dec 13 14:33:07.940221 systemd-networkd[1571]: cilium_net: Link UP Dec 13 14:33:07.940226 systemd-networkd[1571]: cilium_net: Gained carrier Dec 13 14:33:07.945829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:33:07.945471 systemd-networkd[1571]: cilium_host: Gained carrier Dec 13 14:33:08.000783 systemd-networkd[1571]: cilium_net: Gained IPv6LL Dec 13 14:33:08.070037 systemd-networkd[1571]: cilium_vxlan: Link UP Dec 13 14:33:08.070051 systemd-networkd[1571]: cilium_vxlan: Gained carrier Dec 13 14:33:08.291693 kernel: NET: Registered PF_ALG protocol family Dec 13 14:33:08.666850 systemd-networkd[1571]: cilium_host: Gained IPv6LL Dec 13 14:33:08.957006 systemd-networkd[1571]: lxc_health: Link UP Dec 13 14:33:08.967707 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:33:08.968011 systemd-networkd[1571]: lxc_health: Gained carrier Dec 13 14:33:09.242815 systemd-networkd[1571]: cilium_vxlan: Gained IPv6LL Dec 13 14:33:09.314543 systemd-networkd[1571]: lxc9b91fd0618f7: Link UP Dec 13 14:33:09.323686 kernel: eth0: renamed from tmp50d8f Dec 13 14:33:09.339132 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b91fd0618f7: link becomes ready Dec 13 14:33:09.334934 systemd-networkd[1571]: lxc9b91fd0618f7: Gained carrier Dec 13 14:33:09.371022 systemd-networkd[1571]: lxc292522d029b4: Link UP Dec 13 14:33:09.381755 kernel: eth0: renamed from tmpbd9fd Dec 13 14:33:09.391028 systemd-networkd[1571]: lxc292522d029b4: Gained carrier Dec 13 14:33:09.391678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc292522d029b4: link becomes ready Dec 13 14:33:10.138856 systemd-networkd[1571]: lxc_health: Gained IPv6LL Dec 13 14:33:10.906858 systemd-networkd[1571]: lxc9b91fd0618f7: Gained IPv6LL Dec 13 14:33:11.354848 systemd-networkd[1571]: lxc292522d029b4: Gained IPv6LL Dec 13 14:33:13.101725 env[1408]: time="2024-12-13T14:33:13.101618165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:13.102259 env[1408]: time="2024-12-13T14:33:13.102216267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:13.102401 env[1408]: time="2024-12-13T14:33:13.102369367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:13.102703 env[1408]: time="2024-12-13T14:33:13.102638568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd9fd109cd5e791d15ca2e235252c7f1a749654edf7d47044edaeb99b0581eca pid=3636 runtime=io.containerd.runc.v2 Dec 13 14:33:13.119567 env[1408]: time="2024-12-13T14:33:13.119489114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:13.119567 env[1408]: time="2024-12-13T14:33:13.119527814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:13.119884 env[1408]: time="2024-12-13T14:33:13.119542814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:13.119884 env[1408]: time="2024-12-13T14:33:13.119737315Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50d8f95f92c22af0b0648cff98ac0c9ad6864aaa86e7d7fc4f96c2a1fa6bfb4e pid=3652 runtime=io.containerd.runc.v2 Dec 13 14:33:13.139768 systemd[1]: Started cri-containerd-bd9fd109cd5e791d15ca2e235252c7f1a749654edf7d47044edaeb99b0581eca.scope. Dec 13 14:33:13.180193 systemd[1]: Started cri-containerd-50d8f95f92c22af0b0648cff98ac0c9ad6864aaa86e7d7fc4f96c2a1fa6bfb4e.scope. 
Dec 13 14:33:13.245942 env[1408]: time="2024-12-13T14:33:13.245889761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qh29s,Uid:38032238-3deb-4406-8d69-4d28cb8dbed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd9fd109cd5e791d15ca2e235252c7f1a749654edf7d47044edaeb99b0581eca\"" Dec 13 14:33:13.250540 env[1408]: time="2024-12-13T14:33:13.250494374Z" level=info msg="CreateContainer within sandbox \"bd9fd109cd5e791d15ca2e235252c7f1a749654edf7d47044edaeb99b0581eca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:33:13.280160 env[1408]: time="2024-12-13T14:33:13.280097755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mcm5k,Uid:5ba009bb-a2dc-44f4-a417-7362ad693523,Namespace:kube-system,Attempt:0,} returns sandbox id \"50d8f95f92c22af0b0648cff98ac0c9ad6864aaa86e7d7fc4f96c2a1fa6bfb4e\"" Dec 13 14:33:13.284794 env[1408]: time="2024-12-13T14:33:13.283984966Z" level=info msg="CreateContainer within sandbox \"50d8f95f92c22af0b0648cff98ac0c9ad6864aaa86e7d7fc4f96c2a1fa6bfb4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:33:13.289617 env[1408]: time="2024-12-13T14:33:13.289575081Z" level=info msg="CreateContainer within sandbox \"bd9fd109cd5e791d15ca2e235252c7f1a749654edf7d47044edaeb99b0581eca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a49bb6089a61dccd7d809aab33548db4cf635e1dcbb70fdcdff403fb353d05a8\"" Dec 13 14:33:13.291694 env[1408]: time="2024-12-13T14:33:13.291613687Z" level=info msg="StartContainer for \"a49bb6089a61dccd7d809aab33548db4cf635e1dcbb70fdcdff403fb353d05a8\"" Dec 13 14:33:13.321067 systemd[1]: Started cri-containerd-a49bb6089a61dccd7d809aab33548db4cf635e1dcbb70fdcdff403fb353d05a8.scope. Dec 13 14:33:13.337469 env[1408]: time="2024-12-13T14:33:13.337406913Z" level=info msg="CreateContainer within sandbox \"50d8f95f92c22af0b0648cff98ac0c9ad6864aaa86e7d7fc4f96c2a1fa6bfb4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70628a9d7d79e1d93a1723e51d5be56f6cd63538eeeb193c16fad036bdb31585\"" Dec 13 14:33:13.346084 env[1408]: time="2024-12-13T14:33:13.344963834Z" level=info msg="StartContainer for \"70628a9d7d79e1d93a1723e51d5be56f6cd63538eeeb193c16fad036bdb31585\"" Dec 13 14:33:13.380179 systemd[1]: Started cri-containerd-70628a9d7d79e1d93a1723e51d5be56f6cd63538eeeb193c16fad036bdb31585.scope. 
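The env[1408] entries in this stretch trace containerd's CRI sequence for each CoreDNS pod: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer launches it. A rough Python sketch that recovers the sandbox-to-container pairing from journal text shaped like the messages above; the regex is tuned to this containerd version's exact wording, including the backslash-escaped quotes, and is an assumption for any other version:

import re

# Matches: CreateContainer within sandbox \"<64-hex>\" ... returns container id \"<64-hex>\"
PAIR = re.compile(
    r'CreateContainer within sandbox \\"(?P<sandbox>[0-9a-f]{64})\\"'
    r'.*?returns container id \\"(?P<ctr>[0-9a-f]{64})\\"'
)

def sandbox_to_container(journal_text: str) -> dict:
    """Map each pod sandbox ID to the container created inside it."""
    return {m["sandbox"]: m["ctr"] for m in PAIR.finditer(journal_text)}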
Dec 13 14:33:13.414468 env[1408]: time="2024-12-13T14:33:13.414411824Z" level=info msg="StartContainer for \"a49bb6089a61dccd7d809aab33548db4cf635e1dcbb70fdcdff403fb353d05a8\" returns successfully" Dec 13 14:33:13.435898 env[1408]: time="2024-12-13T14:33:13.435844383Z" level=info msg="StartContainer for \"70628a9d7d79e1d93a1723e51d5be56f6cd63538eeeb193c16fad036bdb31585\" returns successfully" Dec 13 14:33:13.711523 kubelet[2473]: I1213 14:33:13.711363 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qh29s" podStartSLOduration=41.71134054 podStartE2EDuration="41.71134054s" podCreationTimestamp="2024-12-13 14:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:13.710535138 +0000 UTC m=+55.740036534" watchObservedRunningTime="2024-12-13 14:33:13.71134054 +0000 UTC m=+55.740841836" Dec 13 14:33:13.757796 kubelet[2473]: I1213 14:33:13.757714 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mcm5k" podStartSLOduration=41.757689268 podStartE2EDuration="41.757689268s" podCreationTimestamp="2024-12-13 14:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:13.736252709 +0000 UTC m=+55.765754105" watchObservedRunningTime="2024-12-13 14:33:13.757689268 +0000 UTC m=+55.787190564" Dec 13 14:33:14.110841 systemd[1]: run-containerd-runc-k8s.io-50d8f95f92c22af0b0648cff98ac0c9ad6864aaa86e7d7fc4f96c2a1fa6bfb4e-runc.pXnZeF.mount: Deactivated successfully. Dec 13 14:35:20.972003 systemd[1]: Started sshd@5-10.200.8.20:22-10.200.16.10:52588.service. Dec 13 14:35:21.684126 sshd[3813]: Accepted publickey for core from 10.200.16.10 port 52588 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:21.685716 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:21.690760 systemd-logind[1397]: New session 8 of user core. Dec 13 14:35:21.690772 systemd[1]: Started session-8.scope. Dec 13 14:35:22.245898 sshd[3813]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:22.249297 systemd[1]: sshd@5-10.200.8.20:22-10.200.16.10:52588.service: Deactivated successfully. Dec 13 14:35:22.250419 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:35:22.251373 systemd-logind[1397]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:35:22.252373 systemd-logind[1397]: Removed session 8. Dec 13 14:35:27.380191 systemd[1]: Started sshd@6-10.200.8.20:22-10.200.16.10:52590.service. Dec 13 14:35:28.090507 sshd[3826]: Accepted publickey for core from 10.200.16.10 port 52590 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:28.092221 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:28.099397 systemd[1]: Started session-9.scope. Dec 13 14:35:28.100395 systemd-logind[1397]: New session 9 of user core. Dec 13 14:35:28.660242 sshd[3826]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:28.663414 systemd[1]: sshd@6-10.200.8.20:22-10.200.16.10:52590.service: Deactivated successfully. Dec 13 14:35:28.664257 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:35:28.664766 systemd-logind[1397]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:35:28.665532 systemd-logind[1397]: Removed session 9. 
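For both coredns pods the tracker logs firstStartedPulling and lastFinishedPulling as 0001-01-01 00:00:00 +0000 UTC, Go's zero time, because no image pull was needed; that is why podStartSLOduration equals podStartE2EDuration here (41.71s and 41.76s). Tooling that consumes these stamps has to treat the zero value as "no pull" rather than parse it. A small illustrative helper, with format handling assumed from the stamps visible in this log only:

from datetime import datetime

GO_ZERO = "0001-01-01 00:00:00 +0000 UTC"

def parse_kubelet_ts(ts: str):
    """Return None for Go's zero time; otherwise parse stamps like
    '2024-12-13 14:32:32.597119982 +0000 UTC m=+14.62...' to a datetime."""
    if ts.startswith(GO_ZERO):
        return None                      # kubelet logs this when no pull happened
    date, clock = ts.split()[0], ts.split()[1]
    secs, frac = clock.split(".")
    return datetime.strptime(f"{date} {secs}.{frac[:6]} +0000",
                             "%Y-%m-%d %H:%M:%S.%f %z")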
Dec 13 14:35:33.780124 systemd[1]: Started sshd@7-10.200.8.20:22-10.200.16.10:58468.service. Dec 13 14:35:34.534392 sshd[3845]: Accepted publickey for core from 10.200.16.10 port 58468 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:34.536065 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:34.541364 systemd[1]: Started session-10.scope. Dec 13 14:35:34.542008 systemd-logind[1397]: New session 10 of user core. Dec 13 14:35:35.095451 sshd[3845]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:35.098456 systemd[1]: sshd@7-10.200.8.20:22-10.200.16.10:58468.service: Deactivated successfully. Dec 13 14:35:35.099476 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:35:35.100192 systemd-logind[1397]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:35:35.101084 systemd-logind[1397]: Removed session 10. Dec 13 14:35:40.226443 systemd[1]: Started sshd@8-10.200.8.20:22-10.200.16.10:43658.service. Dec 13 14:35:41.010424 sshd[3857]: Accepted publickey for core from 10.200.16.10 port 43658 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:41.012125 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:41.016762 systemd-logind[1397]: New session 11 of user core. Dec 13 14:35:41.017835 systemd[1]: Started session-11.scope. Dec 13 14:35:41.560584 sshd[3857]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:41.563504 systemd[1]: sshd@8-10.200.8.20:22-10.200.16.10:43658.service: Deactivated successfully. Dec 13 14:35:41.564441 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:35:41.565191 systemd-logind[1397]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:35:41.566050 systemd-logind[1397]: Removed session 11. Dec 13 14:35:46.681280 systemd[1]: Started sshd@9-10.200.8.20:22-10.200.16.10:43670.service. Dec 13 14:35:47.390897 sshd[3873]: Accepted publickey for core from 10.200.16.10 port 43670 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:47.392543 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:47.398266 systemd-logind[1397]: New session 12 of user core. Dec 13 14:35:47.398937 systemd[1]: Started session-12.scope. Dec 13 14:35:47.948538 sshd[3873]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:47.952534 systemd[1]: sshd@9-10.200.8.20:22-10.200.16.10:43670.service: Deactivated successfully. Dec 13 14:35:47.953695 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:35:47.954572 systemd-logind[1397]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:35:47.955541 systemd-logind[1397]: Removed session 12. Dec 13 14:35:48.068167 systemd[1]: Started sshd@10-10.200.8.20:22-10.200.16.10:43682.service. Dec 13 14:35:48.779727 sshd[3885]: Accepted publickey for core from 10.200.16.10 port 43682 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:48.781468 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:48.786423 systemd[1]: Started session-13.scope. Dec 13 14:35:48.787085 systemd-logind[1397]: New session 13 of user core. Dec 13 14:35:49.365382 sshd[3885]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:49.368358 systemd[1]: sshd@10-10.200.8.20:22-10.200.16.10:43682.service: Deactivated successfully. 
Dec 13 14:35:49.369263 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:35:49.370006 systemd-logind[1397]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:35:49.370882 systemd-logind[1397]: Removed session 13. Dec 13 14:35:49.485073 systemd[1]: Started sshd@11-10.200.8.20:22-10.200.16.10:34180.service. Dec 13 14:35:50.195433 sshd[3894]: Accepted publickey for core from 10.200.16.10 port 34180 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:50.198081 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:50.202734 systemd-logind[1397]: New session 14 of user core. Dec 13 14:35:50.203564 systemd[1]: Started session-14.scope. Dec 13 14:35:50.760205 sshd[3894]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:50.763677 systemd[1]: sshd@11-10.200.8.20:22-10.200.16.10:34180.service: Deactivated successfully. Dec 13 14:35:50.764808 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:35:50.765776 systemd-logind[1397]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:35:50.766837 systemd-logind[1397]: Removed session 14. Dec 13 14:35:55.881853 systemd[1]: Started sshd@12-10.200.8.20:22-10.200.16.10:34188.service. Dec 13 14:35:56.592276 sshd[3906]: Accepted publickey for core from 10.200.16.10 port 34188 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:56.594123 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:56.601341 systemd[1]: Started session-15.scope. Dec 13 14:35:56.601961 systemd-logind[1397]: New session 15 of user core. Dec 13 14:35:57.157138 sshd[3906]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:57.160574 systemd[1]: sshd@12-10.200.8.20:22-10.200.16.10:34188.service: Deactivated successfully. Dec 13 14:35:57.161575 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:35:57.162362 systemd-logind[1397]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:35:57.163246 systemd-logind[1397]: Removed session 15. Dec 13 14:35:57.276632 systemd[1]: Started sshd@13-10.200.8.20:22-10.200.16.10:34204.service. Dec 13 14:35:57.986611 sshd[3918]: Accepted publickey for core from 10.200.16.10 port 34204 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:57.988159 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:57.993396 systemd[1]: Started session-16.scope. Dec 13 14:35:57.994177 systemd-logind[1397]: New session 16 of user core. Dec 13 14:35:58.605013 sshd[3918]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:58.607985 systemd[1]: sshd@13-10.200.8.20:22-10.200.16.10:34204.service: Deactivated successfully. Dec 13 14:35:58.608971 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:35:58.609613 systemd-logind[1397]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:35:58.610481 systemd-logind[1397]: Removed session 16. Dec 13 14:35:58.725327 systemd[1]: Started sshd@14-10.200.8.20:22-10.200.16.10:44250.service. Dec 13 14:35:59.442251 sshd[3928]: Accepted publickey for core from 10.200.16.10 port 44250 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:35:59.443694 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:59.448721 systemd[1]: Started session-17.scope. Dec 13 14:35:59.449092 systemd-logind[1397]: New session 17 of user core. 
Dec 13 14:36:01.383637 sshd[3928]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:01.387574 systemd[1]: sshd@14-10.200.8.20:22-10.200.16.10:44250.service: Deactivated successfully. Dec 13 14:36:01.388512 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:36:01.389644 systemd-logind[1397]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:36:01.390543 systemd-logind[1397]: Removed session 17. Dec 13 14:36:01.501474 systemd[1]: Started sshd@15-10.200.8.20:22-10.200.16.10:44264.service. Dec 13 14:36:02.210238 sshd[3945]: Accepted publickey for core from 10.200.16.10 port 44264 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:02.211833 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:02.215626 systemd-logind[1397]: New session 18 of user core. Dec 13 14:36:02.217549 systemd[1]: Started session-18.scope. Dec 13 14:36:02.872080 sshd[3945]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:02.875034 systemd[1]: sshd@15-10.200.8.20:22-10.200.16.10:44264.service: Deactivated successfully. Dec 13 14:36:02.876229 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:36:02.876254 systemd-logind[1397]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:36:02.877342 systemd-logind[1397]: Removed session 18. Dec 13 14:36:02.992903 systemd[1]: Started sshd@16-10.200.8.20:22-10.200.16.10:44268.service. Dec 13 14:36:03.702493 sshd[3956]: Accepted publickey for core from 10.200.16.10 port 44268 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:03.704563 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:03.710402 systemd-logind[1397]: New session 19 of user core. Dec 13 14:36:03.710898 systemd[1]: Started session-19.scope. Dec 13 14:36:04.253983 sshd[3956]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:04.257391 systemd[1]: sshd@16-10.200.8.20:22-10.200.16.10:44268.service: Deactivated successfully. Dec 13 14:36:04.258447 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:36:04.259490 systemd-logind[1397]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:36:04.260568 systemd-logind[1397]: Removed session 19. Dec 13 14:36:09.382579 systemd[1]: Started sshd@17-10.200.8.20:22-10.200.16.10:46552.service. Dec 13 14:36:10.092353 sshd[3970]: Accepted publickey for core from 10.200.16.10 port 46552 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:10.094163 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:10.099801 systemd-logind[1397]: New session 20 of user core. Dec 13 14:36:10.100310 systemd[1]: Started session-20.scope. Dec 13 14:36:10.643036 sshd[3970]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:10.645898 systemd[1]: sshd@17-10.200.8.20:22-10.200.16.10:46552.service: Deactivated successfully. Dec 13 14:36:10.646799 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:36:10.647496 systemd-logind[1397]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:36:10.648372 systemd-logind[1397]: Removed session 20. Dec 13 14:36:15.763388 systemd[1]: Started sshd@18-10.200.8.20:22-10.200.16.10:46564.service. 
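From 14:35:20 onward the log settles into one pattern per SSH connection: systemd starts a per-connection sshd@... service, sshd accepts the publickey, pam_unix opens the session, systemd-logind registers session-N.scope, and the same units deactivate in reverse on logout. Pairing the pam_unix open/close entries by sshd PID yields per-session durations; a sketch that assumes the journal is read back one entry per line in exactly the format above:

import re
from datetime import datetime

TS    = "%b %d %H:%M:%S.%f"
OPEN  = re.compile(r"^(\w{3} +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened")
CLOSE = re.compile(r"^(\w{3} +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed")

def session_durations(lines):
    """Pair session opened/closed entries by sshd PID -> seconds per session."""
    opened, durations = {}, {}
    for line in lines:
        if m := OPEN.match(line):
            opened[m[2]] = datetime.strptime(m[1], TS)
        elif (m := CLOSE.match(line)) and m[2] in opened:
            durations[m[2]] = (datetime.strptime(m[1], TS) - opened.pop(m[2])).total_seconds()
    return durations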
Dec 13 14:36:16.475267 sshd[3982]: Accepted publickey for core from 10.200.16.10 port 46564 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:16.476677 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:16.481641 systemd[1]: Started session-21.scope. Dec 13 14:36:16.482256 systemd-logind[1397]: New session 21 of user core. Dec 13 14:36:17.026567 sshd[3982]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:17.028909 systemd[1]: sshd@18-10.200.8.20:22-10.200.16.10:46564.service: Deactivated successfully. Dec 13 14:36:17.030026 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:36:17.030519 systemd-logind[1397]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:36:17.031383 systemd-logind[1397]: Removed session 21. Dec 13 14:36:22.145043 systemd[1]: Started sshd@19-10.200.8.20:22-10.200.16.10:52964.service. Dec 13 14:36:22.855936 sshd[3996]: Accepted publickey for core from 10.200.16.10 port 52964 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:22.857769 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:22.862275 systemd-logind[1397]: New session 22 of user core. Dec 13 14:36:22.863063 systemd[1]: Started session-22.scope. Dec 13 14:36:23.407676 sshd[3996]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:23.410452 systemd[1]: sshd@19-10.200.8.20:22-10.200.16.10:52964.service: Deactivated successfully. Dec 13 14:36:23.411422 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:36:23.412978 systemd-logind[1397]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:36:23.413793 systemd-logind[1397]: Removed session 22. Dec 13 14:36:23.526814 systemd[1]: Started sshd@20-10.200.8.20:22-10.200.16.10:52970.service. Dec 13 14:36:24.237864 sshd[4008]: Accepted publickey for core from 10.200.16.10 port 52970 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:24.239374 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:24.244460 systemd-logind[1397]: New session 23 of user core. Dec 13 14:36:24.244840 systemd[1]: Started session-23.scope. Dec 13 14:36:25.933648 env[1408]: time="2024-12-13T14:36:25.933597812Z" level=info msg="StopContainer for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" with timeout 30 (s)" Dec 13 14:36:25.934553 env[1408]: time="2024-12-13T14:36:25.934458815Z" level=info msg="Stop container \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" with signal terminated" Dec 13 14:36:25.945293 systemd[1]: cri-containerd-2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78.scope: Deactivated successfully. 
Dec 13 14:36:25.961133 env[1408]: time="2024-12-13T14:36:25.961066308Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:36:25.969850 env[1408]: time="2024-12-13T14:36:25.969811839Z" level=info msg="StopContainer for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" with timeout 2 (s)" Dec 13 14:36:25.971231 env[1408]: time="2024-12-13T14:36:25.971202144Z" level=info msg="Stop container \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" with signal terminated" Dec 13 14:36:25.980951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78-rootfs.mount: Deactivated successfully. Dec 13 14:36:25.987950 systemd-networkd[1571]: lxc_health: Link DOWN Dec 13 14:36:25.987957 systemd-networkd[1571]: lxc_health: Lost carrier Dec 13 14:36:26.011178 systemd[1]: cri-containerd-5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593.scope: Deactivated successfully. Dec 13 14:36:26.011505 systemd[1]: cri-containerd-5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593.scope: Consumed 7.236s CPU time. Dec 13 14:36:26.031514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593-rootfs.mount: Deactivated successfully. Dec 13 14:36:26.052239 env[1408]: time="2024-12-13T14:36:26.052200327Z" level=info msg="shim disconnected" id=2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78 Dec 13 14:36:26.052414 env[1408]: time="2024-12-13T14:36:26.052334727Z" level=warning msg="cleaning up after shim disconnected" id=2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78 namespace=k8s.io Dec 13 14:36:26.052414 env[1408]: time="2024-12-13T14:36:26.052354427Z" level=info msg="cleaning up dead shim" Dec 13 14:36:26.052603 env[1408]: time="2024-12-13T14:36:26.052194527Z" level=info msg="shim disconnected" id=5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593 Dec 13 14:36:26.052680 env[1408]: time="2024-12-13T14:36:26.052613228Z" level=warning msg="cleaning up after shim disconnected" id=5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593 namespace=k8s.io Dec 13 14:36:26.052680 env[1408]: time="2024-12-13T14:36:26.052628128Z" level=info msg="cleaning up dead shim" Dec 13 14:36:26.068105 env[1408]: time="2024-12-13T14:36:26.068057482Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4076 runtime=io.containerd.runc.v2\n" Dec 13 14:36:26.069244 env[1408]: time="2024-12-13T14:36:26.069216486Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4077 runtime=io.containerd.runc.v2\n" Dec 13 14:36:26.074378 env[1408]: time="2024-12-13T14:36:26.074244703Z" level=info msg="StopContainer for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" returns successfully" Dec 13 14:36:26.074922 env[1408]: time="2024-12-13T14:36:26.074891306Z" level=info msg="StopPodSandbox for \"db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4\"" Dec 13 14:36:26.075024 env[1408]: time="2024-12-13T14:36:26.074961506Z" level=info msg="Container to stop 
\"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:26.078414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4-shm.mount: Deactivated successfully. Dec 13 14:36:26.079956 env[1408]: time="2024-12-13T14:36:26.079929423Z" level=info msg="StopContainer for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" returns successfully" Dec 13 14:36:26.080468 env[1408]: time="2024-12-13T14:36:26.080442425Z" level=info msg="StopPodSandbox for \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\"" Dec 13 14:36:26.080630 env[1408]: time="2024-12-13T14:36:26.080604826Z" level=info msg="Container to stop \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:26.080886 env[1408]: time="2024-12-13T14:36:26.080849926Z" level=info msg="Container to stop \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:26.081026 env[1408]: time="2024-12-13T14:36:26.081003127Z" level=info msg="Container to stop \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:26.081153 env[1408]: time="2024-12-13T14:36:26.081131227Z" level=info msg="Container to stop \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:26.081254 env[1408]: time="2024-12-13T14:36:26.081232328Z" level=info msg="Container to stop \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:26.083771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079-shm.mount: Deactivated successfully. Dec 13 14:36:26.088439 systemd[1]: cri-containerd-362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079.scope: Deactivated successfully. Dec 13 14:36:26.098139 systemd[1]: cri-containerd-db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4.scope: Deactivated successfully. 
Dec 13 14:36:26.128785 env[1408]: time="2024-12-13T14:36:26.128736593Z" level=info msg="shim disconnected" id=db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4 Dec 13 14:36:26.129035 env[1408]: time="2024-12-13T14:36:26.129010694Z" level=warning msg="cleaning up after shim disconnected" id=db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4 namespace=k8s.io Dec 13 14:36:26.129135 env[1408]: time="2024-12-13T14:36:26.129121195Z" level=info msg="cleaning up dead shim" Dec 13 14:36:26.129439 env[1408]: time="2024-12-13T14:36:26.129385096Z" level=info msg="shim disconnected" id=362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079 Dec 13 14:36:26.129533 env[1408]: time="2024-12-13T14:36:26.129438896Z" level=warning msg="cleaning up after shim disconnected" id=362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079 namespace=k8s.io Dec 13 14:36:26.129533 env[1408]: time="2024-12-13T14:36:26.129450296Z" level=info msg="cleaning up dead shim" Dec 13 14:36:26.129626 env[1408]: time="2024-12-13T14:36:26.129551296Z" level=info msg="shim disconnected" id=fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80 Dec 13 14:36:26.129626 env[1408]: time="2024-12-13T14:36:26.129579596Z" level=warning msg="cleaning up after shim disconnected" id=fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80 namespace=k8s.io Dec 13 14:36:26.129626 env[1408]: time="2024-12-13T14:36:26.129589396Z" level=info msg="cleaning up dead shim" Dec 13 14:36:26.146491 env[1408]: time="2024-12-13T14:36:26.146457855Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4144 runtime=io.containerd.runc.v2\n" Dec 13 14:36:26.147610 env[1408]: time="2024-12-13T14:36:26.147577559Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4142 runtime=io.containerd.runc.v2\n" Dec 13 14:36:26.148518 env[1408]: time="2024-12-13T14:36:26.148473262Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n" Dec 13 14:36:26.148773 env[1408]: time="2024-12-13T14:36:26.148739963Z" level=info msg="TearDown network for sandbox \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" successfully" Dec 13 14:36:26.148773 env[1408]: time="2024-12-13T14:36:26.148764463Z" level=info msg="StopPodSandbox for \"362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079\" returns successfully" Dec 13 14:36:26.149276 env[1408]: time="2024-12-13T14:36:26.148052161Z" level=info msg="TearDown network for sandbox \"db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4\" successfully" Dec 13 14:36:26.149276 env[1408]: time="2024-12-13T14:36:26.148965864Z" level=info msg="StopPodSandbox for \"db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4\" returns successfully" Dec 13 14:36:26.334292 kubelet[2473]: I1213 14:36:26.334232 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cni-path\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334292 kubelet[2473]: I1213 14:36:26.334301 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-lib-modules\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334964 kubelet[2473]: I1213 14:36:26.334328 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-kernel\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334964 kubelet[2473]: I1213 14:36:26.334352 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-run\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334964 kubelet[2473]: I1213 14:36:26.334376 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-cgroup\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334964 kubelet[2473]: I1213 14:36:26.334411 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-config-path\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334964 kubelet[2473]: I1213 14:36:26.334441 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfbkz\" (UniqueName: \"kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-kube-api-access-wfbkz\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.334964 kubelet[2473]: I1213 14:36:26.334475 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d76133e-6973-48a2-a28f-d827290a64f6-cilium-config-path\") pod \"1d76133e-6973-48a2-a28f-d827290a64f6\" (UID: \"1d76133e-6973-48a2-a28f-d827290a64f6\") " Dec 13 14:36:26.335324 kubelet[2473]: I1213 14:36:26.334508 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-net\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335324 kubelet[2473]: I1213 14:36:26.334532 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-etc-cni-netd\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335324 kubelet[2473]: I1213 14:36:26.334563 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c003f275-d71c-4136-ad85-62cf1aee22f9-clustermesh-secrets\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335324 kubelet[2473]: I1213 14:36:26.334589 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-xtables-lock\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335324 kubelet[2473]: I1213 14:36:26.334620 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-hubble-tls\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335324 kubelet[2473]: I1213 14:36:26.334651 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbwk8\" (UniqueName: \"kubernetes.io/projected/1d76133e-6973-48a2-a28f-d827290a64f6-kube-api-access-mbwk8\") pod \"1d76133e-6973-48a2-a28f-d827290a64f6\" (UID: \"1d76133e-6973-48a2-a28f-d827290a64f6\") " Dec 13 14:36:26.335923 kubelet[2473]: I1213 14:36:26.334708 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-bpf-maps\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335923 kubelet[2473]: I1213 14:36:26.334735 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-hostproc\") pod \"c003f275-d71c-4136-ad85-62cf1aee22f9\" (UID: \"c003f275-d71c-4136-ad85-62cf1aee22f9\") " Dec 13 14:36:26.335923 kubelet[2473]: I1213 14:36:26.334823 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-hostproc" (OuterVolumeSpecName: "hostproc") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.335923 kubelet[2473]: I1213 14:36:26.334875 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cni-path" (OuterVolumeSpecName: "cni-path") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.335923 kubelet[2473]: I1213 14:36:26.334899 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.336198 kubelet[2473]: I1213 14:36:26.334926 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.336198 kubelet[2473]: I1213 14:36:26.334952 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.336198 kubelet[2473]: I1213 14:36:26.334977 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.338318 kubelet[2473]: I1213 14:36:26.338264 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:36:26.338443 kubelet[2473]: I1213 14:36:26.338353 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.340931 kubelet[2473]: I1213 14:36:26.340902 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c003f275-d71c-4136-ad85-62cf1aee22f9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:36:26.342084 kubelet[2473]: I1213 14:36:26.342052 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:26.344306 kubelet[2473]: I1213 14:36:26.344277 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-kube-api-access-wfbkz" (OuterVolumeSpecName: "kube-api-access-wfbkz") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "kube-api-access-wfbkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:26.345019 kubelet[2473]: I1213 14:36:26.344989 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d76133e-6973-48a2-a28f-d827290a64f6-kube-api-access-mbwk8" (OuterVolumeSpecName: "kube-api-access-mbwk8") pod "1d76133e-6973-48a2-a28f-d827290a64f6" (UID: "1d76133e-6973-48a2-a28f-d827290a64f6"). InnerVolumeSpecName "kube-api-access-mbwk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:26.345109 kubelet[2473]: I1213 14:36:26.345035 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.345109 kubelet[2473]: I1213 14:36:26.345058 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.345109 kubelet[2473]: I1213 14:36:26.345080 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c003f275-d71c-4136-ad85-62cf1aee22f9" (UID: "c003f275-d71c-4136-ad85-62cf1aee22f9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:26.346511 kubelet[2473]: I1213 14:36:26.346486 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d76133e-6973-48a2-a28f-d827290a64f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d76133e-6973-48a2-a28f-d827290a64f6" (UID: "1d76133e-6973-48a2-a28f-d827290a64f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:36:26.435455 kubelet[2473]: I1213 14:36:26.435404 2473 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-bpf-maps\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435455 kubelet[2473]: I1213 14:36:26.435456 2473 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-hostproc\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435473 2473 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cni-path\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435486 2473 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-lib-modules\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435500 2473 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435515 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-run\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435528 2473 
reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-cgroup\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435540 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c003f275-d71c-4136-ad85-62cf1aee22f9-cilium-config-path\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435553 2473 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wfbkz\" (UniqueName: \"kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-kube-api-access-wfbkz\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.435757 kubelet[2473]: I1213 14:36:26.435567 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d76133e-6973-48a2-a28f-d827290a64f6-cilium-config-path\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.436019 kubelet[2473]: I1213 14:36:26.435581 2473 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-host-proc-sys-net\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.436019 kubelet[2473]: I1213 14:36:26.435597 2473 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-etc-cni-netd\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.436019 kubelet[2473]: I1213 14:36:26.435612 2473 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c003f275-d71c-4136-ad85-62cf1aee22f9-clustermesh-secrets\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.436019 kubelet[2473]: I1213 14:36:26.435626 2473 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c003f275-d71c-4136-ad85-62cf1aee22f9-xtables-lock\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.436019 kubelet[2473]: I1213 14:36:26.435638 2473 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c003f275-d71c-4136-ad85-62cf1aee22f9-hubble-tls\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.436019 kubelet[2473]: I1213 14:36:26.435695 2473 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mbwk8\" (UniqueName: \"kubernetes.io/projected/1d76133e-6973-48a2-a28f-d827290a64f6-kube-api-access-mbwk8\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:26.493472 systemd[1]: Removed slice kubepods-besteffort-pod1d76133e_6973_48a2_a28f_d827290a64f6.slice. Dec 13 14:36:26.495227 systemd[1]: Removed slice kubepods-burstable-podc003f275_d71c_4136_ad85_62cf1aee22f9.slice. Dec 13 14:36:26.495336 systemd[1]: kubepods-burstable-podc003f275_d71c_4136_ad85_62cf1aee22f9.slice: Consumed 7.343s CPU time. Dec 13 14:36:26.929305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-362a56c150b46e074eefdabd04cea2ed9ec488f7909108239b3dc7353d13c079-rootfs.mount: Deactivated successfully. 
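The mount units deactivated above use systemd's unit-name escaping for paths: "/" becomes "-", and bytes that would be ambiguous (a literal "-", or the "~" in kubernetes.io~projected) are hex-escaped as \x2d, \x7e, and so on. systemd-escape --unescape --path is the authoritative decoder; a minimal Python equivalent for these .mount names, checked against the hubble-tls unit above:

import re

def unescape_unit_path(unit: str) -> str:
    """Rough inverse of systemd path escaping for .mount unit names."""
    name = unit.removesuffix(".mount")
    parts = re.split(r"(\\x[0-9a-fA-F]{2})", name)   # keep \xNN escapes intact
    decoded = [chr(int(p[2:], 16)) if p.startswith("\\x") else p.replace("-", "/")
               for p in parts]
    return "/" + "".join(decoded)

print(unescape_unit_path(
    "var-lib-kubelet-pods-c003f275\\x2dd71c\\x2d4136\\x2dad85\\x2d62cf1aee22f9"
    "-volumes-kubernetes.io\\x7eprojected-hubble\\x2dtls.mount"))
# /var/lib/kubelet/pods/c003f275-d71c-4136-ad85-62cf1aee22f9/volumes/kubernetes.io~projected/hubble-tls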
Dec 13 14:36:26.929423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db37ccf6322fcb3b492300042e8dccc6786c314d24501b078c7cf2c852cdd5c4-rootfs.mount: Deactivated successfully. Dec 13 14:36:26.929495 systemd[1]: var-lib-kubelet-pods-1d76133e\x2d6973\x2d48a2\x2da28f\x2dd827290a64f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmbwk8.mount: Deactivated successfully. Dec 13 14:36:26.929573 systemd[1]: var-lib-kubelet-pods-c003f275\x2dd71c\x2d4136\x2dad85\x2d62cf1aee22f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfbkz.mount: Deactivated successfully. Dec 13 14:36:26.929670 systemd[1]: var-lib-kubelet-pods-c003f275\x2dd71c\x2d4136\x2dad85\x2d62cf1aee22f9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:36:26.929754 systemd[1]: var-lib-kubelet-pods-c003f275\x2dd71c\x2d4136\x2dad85\x2d62cf1aee22f9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:36:27.067011 kubelet[2473]: I1213 14:36:27.066980 2473 scope.go:117] "RemoveContainer" containerID="5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593" Dec 13 14:36:27.070500 env[1408]: time="2024-12-13T14:36:27.069520773Z" level=info msg="RemoveContainer for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\"" Dec 13 14:36:27.090149 env[1408]: time="2024-12-13T14:36:27.090111145Z" level=info msg="RemoveContainer for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" returns successfully" Dec 13 14:36:27.092784 kubelet[2473]: I1213 14:36:27.092756 2473 scope.go:117] "RemoveContainer" containerID="365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde" Dec 13 14:36:27.098889 env[1408]: time="2024-12-13T14:36:27.098777075Z" level=info msg="RemoveContainer for \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\"" Dec 13 14:36:27.107741 env[1408]: time="2024-12-13T14:36:27.107707806Z" level=info msg="RemoveContainer for \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\" returns successfully" Dec 13 14:36:27.107919 kubelet[2473]: I1213 14:36:27.107893 2473 scope.go:117] "RemoveContainer" containerID="ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d" Dec 13 14:36:27.109055 env[1408]: time="2024-12-13T14:36:27.109013611Z" level=info msg="RemoveContainer for \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\"" Dec 13 14:36:27.119468 env[1408]: time="2024-12-13T14:36:27.119425247Z" level=info msg="RemoveContainer for \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\" returns successfully" Dec 13 14:36:27.119636 kubelet[2473]: I1213 14:36:27.119610 2473 scope.go:117] "RemoveContainer" containerID="ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0" Dec 13 14:36:27.120705 env[1408]: time="2024-12-13T14:36:27.120678051Z" level=info msg="RemoveContainer for \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\"" Dec 13 14:36:27.132248 env[1408]: time="2024-12-13T14:36:27.132215291Z" level=info msg="RemoveContainer for \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\" returns successfully" Dec 13 14:36:27.132402 kubelet[2473]: I1213 14:36:27.132372 2473 scope.go:117] "RemoveContainer" containerID="fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80" Dec 13 14:36:27.133451 env[1408]: time="2024-12-13T14:36:27.133423396Z" level=info msg="RemoveContainer for \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\"" Dec 13 
14:36:27.146566 env[1408]: time="2024-12-13T14:36:27.146520541Z" level=info msg="RemoveContainer for \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\" returns successfully" Dec 13 14:36:27.146719 kubelet[2473]: I1213 14:36:27.146700 2473 scope.go:117] "RemoveContainer" containerID="5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593" Dec 13 14:36:27.146969 env[1408]: time="2024-12-13T14:36:27.146903642Z" level=error msg="ContainerStatus for \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\": not found" Dec 13 14:36:27.147153 kubelet[2473]: E1213 14:36:27.147122 2473 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\": not found" containerID="5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593" Dec 13 14:36:27.147303 kubelet[2473]: I1213 14:36:27.147170 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593"} err="failed to get container status \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cd6ea1822a0c3417693ec0c4552ae26d9ebecb71a2e1e0e407ef93fb5219593\": not found" Dec 13 14:36:27.147375 kubelet[2473]: I1213 14:36:27.147303 2473 scope.go:117] "RemoveContainer" containerID="365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde" Dec 13 14:36:27.147597 env[1408]: time="2024-12-13T14:36:27.147540545Z" level=error msg="ContainerStatus for \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\": not found" Dec 13 14:36:27.147762 kubelet[2473]: E1213 14:36:27.147738 2473 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\": not found" containerID="365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde" Dec 13 14:36:27.147832 kubelet[2473]: I1213 14:36:27.147772 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde"} err="failed to get container status \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\": rpc error: code = NotFound desc = an error occurred when try to find container \"365d8766d9e44c1ff85f47649f034a929a25536abf99f91ddd94beed5e6cacde\": not found" Dec 13 14:36:27.147832 kubelet[2473]: I1213 14:36:27.147793 2473 scope.go:117] "RemoveContainer" containerID="ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d" Dec 13 14:36:27.148081 env[1408]: time="2024-12-13T14:36:27.148017646Z" level=error msg="ContainerStatus for \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\": not found" Dec 13 
14:36:27.148205 kubelet[2473]: E1213 14:36:27.148165 2473 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\": not found" containerID="ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d" Dec 13 14:36:27.148289 kubelet[2473]: I1213 14:36:27.148210 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d"} err="failed to get container status \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac621d272e6f0a70a2b397657342071f3a1e948ca9a1b043370cae9195092a6d\": not found" Dec 13 14:36:27.148289 kubelet[2473]: I1213 14:36:27.148232 2473 scope.go:117] "RemoveContainer" containerID="ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0" Dec 13 14:36:27.148500 env[1408]: time="2024-12-13T14:36:27.148461648Z" level=error msg="ContainerStatus for \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\": not found" Dec 13 14:36:27.148617 kubelet[2473]: E1213 14:36:27.148596 2473 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\": not found" containerID="ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0" Dec 13 14:36:27.148728 kubelet[2473]: I1213 14:36:27.148632 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0"} err="failed to get container status \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce9ea2169b4546920e73c693ba72d170b07529b6560dadc625a31e6d99ce62d0\": not found" Dec 13 14:36:27.148728 kubelet[2473]: I1213 14:36:27.148651 2473 scope.go:117] "RemoveContainer" containerID="fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80" Dec 13 14:36:27.148915 env[1408]: time="2024-12-13T14:36:27.148871549Z" level=error msg="ContainerStatus for \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\": not found" Dec 13 14:36:27.149048 kubelet[2473]: E1213 14:36:27.149023 2473 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\": not found" containerID="fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80" Dec 13 14:36:27.149112 kubelet[2473]: I1213 14:36:27.149054 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80"} err="failed to get container status \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"fb6ad6b0d027126d652f3210458af6ec5cd010854a89298eda3314724e87ed80\": not found" Dec 13 14:36:27.149112 kubelet[2473]: I1213 14:36:27.149074 2473 scope.go:117] "RemoveContainer" containerID="2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78" Dec 13 14:36:27.150251 env[1408]: time="2024-12-13T14:36:27.150018053Z" level=info msg="RemoveContainer for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\"" Dec 13 14:36:27.158186 env[1408]: time="2024-12-13T14:36:27.158154282Z" level=info msg="RemoveContainer for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" returns successfully" Dec 13 14:36:27.158339 kubelet[2473]: I1213 14:36:27.158324 2473 scope.go:117] "RemoveContainer" containerID="2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78" Dec 13 14:36:27.158635 env[1408]: time="2024-12-13T14:36:27.158591583Z" level=error msg="ContainerStatus for \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\": not found" Dec 13 14:36:27.158826 kubelet[2473]: E1213 14:36:27.158796 2473 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\": not found" containerID="2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78" Dec 13 14:36:27.158890 kubelet[2473]: I1213 14:36:27.158821 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78"} err="failed to get container status \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a952b06f83de3d515df7d9344935a1835af366048aa74eebf79c6fb18e6cb78\": not found" Dec 13 14:36:27.986324 sshd[4008]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:27.989960 systemd[1]: sshd@20-10.200.8.20:22-10.200.16.10:52970.service: Deactivated successfully. Dec 13 14:36:27.991040 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:36:27.991727 systemd-logind[1397]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:36:27.992582 systemd-logind[1397]: Removed session 23. Dec 13 14:36:28.126337 systemd[1]: Started sshd@21-10.200.8.20:22-10.200.16.10:52986.service. 
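Each RemoveContainer above is followed by a ContainerStatus probe that fails with code = NotFound (the "an error occurred when try to find container" wording is containerd's own message, reproduced verbatim), and kubelet logs "DeleteContainer returned error" but carries on. After a successful delete, NotFound is the expected answer, so none of these E-lines indicate a fault; when scanning journals it is a NotFound without a preceding successful RemoveContainer for the same ID that deserves attention. An illustrative filter:

import re

REMOVED  = re.compile(r'RemoveContainer for \\"([0-9a-f]{64})\\" returns successfully')
NOTFOUND = re.compile(r'ContainerStatus for \\"([0-9a-f]{64})\\" failed')

def unexpected_notfound(entries):
    """NotFound is benign after a successful RemoveContainer; report the rest."""
    removed, suspicious = set(), []
    for e in entries:
        if m := REMOVED.search(e):
            removed.add(m[1])
        elif (m := NOTFOUND.search(e)) and m[1] not in removed:
            suspicious.append(m[1])
    return suspicious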
Dec 13 14:36:28.488111 kubelet[2473]: I1213 14:36:28.488063 2473 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d76133e-6973-48a2-a28f-d827290a64f6" path="/var/lib/kubelet/pods/1d76133e-6973-48a2-a28f-d827290a64f6/volumes" Dec 13 14:36:28.488803 kubelet[2473]: I1213 14:36:28.488772 2473 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" path="/var/lib/kubelet/pods/c003f275-d71c-4136-ad85-62cf1aee22f9/volumes" Dec 13 14:36:28.602409 kubelet[2473]: E1213 14:36:28.602343 2473 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:36:28.844796 sshd[4186]: Accepted publickey for core from 10.200.16.10 port 52986 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:28.846634 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:28.851725 systemd-logind[1397]: New session 24 of user core. Dec 13 14:36:28.851831 systemd[1]: Started session-24.scope. Dec 13 14:36:29.716602 kubelet[2473]: I1213 14:36:29.716549 2473 topology_manager.go:215] "Topology Admit Handler" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" podNamespace="kube-system" podName="cilium-w7fwl" Dec 13 14:36:29.717257 kubelet[2473]: E1213 14:36:29.717233 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d76133e-6973-48a2-a28f-d827290a64f6" containerName="cilium-operator" Dec 13 14:36:29.717399 kubelet[2473]: E1213 14:36:29.717383 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" containerName="mount-cgroup" Dec 13 14:36:29.717507 kubelet[2473]: E1213 14:36:29.717494 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" containerName="apply-sysctl-overwrites" Dec 13 14:36:29.717617 kubelet[2473]: E1213 14:36:29.717605 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" containerName="mount-bpf-fs" Dec 13 14:36:29.717722 kubelet[2473]: E1213 14:36:29.717709 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" containerName="clean-cilium-state" Dec 13 14:36:29.717825 kubelet[2473]: E1213 14:36:29.717814 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" containerName="cilium-agent" Dec 13 14:36:29.717949 kubelet[2473]: I1213 14:36:29.717934 2473 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d76133e-6973-48a2-a28f-d827290a64f6" containerName="cilium-operator" Dec 13 14:36:29.718100 kubelet[2473]: I1213 14:36:29.718079 2473 memory_manager.go:354] "RemoveStaleState removing state" podUID="c003f275-d71c-4136-ad85-62cf1aee22f9" containerName="cilium-agent" Dec 13 14:36:29.725540 systemd[1]: Created slice kubepods-burstable-pod1c888b73_caff_4f5b_9a07_4015fcbc2b87.slice. Dec 13 14:36:29.811560 sshd[4186]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:29.814456 systemd[1]: sshd@21-10.200.8.20:22-10.200.16.10:52986.service: Deactivated successfully. Dec 13 14:36:29.816271 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:36:29.817356 systemd-logind[1397]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:36:29.819063 systemd-logind[1397]: Removed session 24. 
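Note: the "Container runtime network not ready ... cni plugin not initialized" line repeats from here on because the old Cilium pod was just torn down, leaving no usable CNI config on the node; further down this log it flips the node's Ready condition to False. A hedged client-go sketch of reading that condition (node name copied from the "Volume detached" entries later in this log; in-cluster credentials assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"ci-3510.3.6-a-34fc77c933", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// While the CNI plugin is uninitialized this prints Status=False
			// with Reason=KubeletNotReady, matching the condition logged below.
			fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}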
Dec 13 14:36:29.853192 kubelet[2473]: I1213 14:36:29.853155 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-config-path\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.853430 kubelet[2473]: I1213 14:36:29.853408 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-net\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.853562 kubelet[2473]: I1213 14:36:29.853546 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-bpf-maps\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.853689 kubelet[2473]: I1213 14:36:29.853673 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-run\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.853815 kubelet[2473]: I1213 14:36:29.853781 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-kernel\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.853948 kubelet[2473]: I1213 14:36:29.853934 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-cgroup\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854062 kubelet[2473]: I1213 14:36:29.854050 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-etc-cni-netd\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854169 kubelet[2473]: I1213 14:36:29.854156 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7b95\" (UniqueName: \"kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-kube-api-access-j7b95\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854260 kubelet[2473]: I1213 14:36:29.854248 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cni-path\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854350 kubelet[2473]: I1213 14:36:29.854339 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-clustermesh-secrets\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854445 kubelet[2473]: I1213 14:36:29.854431 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-lib-modules\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854538 kubelet[2473]: I1213 14:36:29.854525 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hostproc\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854634 kubelet[2473]: I1213 14:36:29.854620 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-xtables-lock\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854743 kubelet[2473]: I1213 14:36:29.854728 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-ipsec-secrets\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.854838 kubelet[2473]: I1213 14:36:29.854827 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hubble-tls\") pod \"cilium-w7fwl\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " pod="kube-system/cilium-w7fwl" Dec 13 14:36:29.929404 systemd[1]: Started sshd@22-10.200.8.20:22-10.200.16.10:44998.service. Dec 13 14:36:30.030747 env[1408]: time="2024-12-13T14:36:30.030698252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7fwl,Uid:1c888b73-caff-4f5b-9a07-4015fcbc2b87,Namespace:kube-system,Attempt:0,}" Dec 13 14:36:30.069087 env[1408]: time="2024-12-13T14:36:30.069022485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:30.069281 env[1408]: time="2024-12-13T14:36:30.069058885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:30.069281 env[1408]: time="2024-12-13T14:36:30.069077185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:30.069281 env[1408]: time="2024-12-13T14:36:30.069225385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad pid=4210 runtime=io.containerd.runc.v2 Dec 13 14:36:30.086364 systemd[1]: Started cri-containerd-553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad.scope. 
Dec 13 14:36:30.111905 env[1408]: time="2024-12-13T14:36:30.111861833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7fwl,Uid:1c888b73-caff-4f5b-9a07-4015fcbc2b87,Namespace:kube-system,Attempt:0,} returns sandbox id \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\"" Dec 13 14:36:30.115985 env[1408]: time="2024-12-13T14:36:30.115945947Z" level=info msg="CreateContainer within sandbox \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:36:30.155732 env[1408]: time="2024-12-13T14:36:30.155683684Z" level=info msg="CreateContainer within sandbox \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\"" Dec 13 14:36:30.157613 env[1408]: time="2024-12-13T14:36:30.156445187Z" level=info msg="StartContainer for \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\"" Dec 13 14:36:30.175445 systemd[1]: Started cri-containerd-339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840.scope. Dec 13 14:36:30.191015 systemd[1]: cri-containerd-339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840.scope: Deactivated successfully. Dec 13 14:36:30.244976 env[1408]: time="2024-12-13T14:36:30.244916993Z" level=info msg="shim disconnected" id=339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840 Dec 13 14:36:30.244976 env[1408]: time="2024-12-13T14:36:30.244976393Z" level=warning msg="cleaning up after shim disconnected" id=339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840 namespace=k8s.io Dec 13 14:36:30.245295 env[1408]: time="2024-12-13T14:36:30.244986893Z" level=info msg="cleaning up dead shim" Dec 13 14:36:30.253115 env[1408]: time="2024-12-13T14:36:30.253069521Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4273 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:36:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:36:30.253467 env[1408]: time="2024-12-13T14:36:30.253354622Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Dec 13 14:36:30.253791 env[1408]: time="2024-12-13T14:36:30.253742123Z" level=error msg="Failed to pipe stdout of container \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\"" error="reading from a closed fifo" Dec 13 14:36:30.254624 env[1408]: time="2024-12-13T14:36:30.254582426Z" level=error msg="Failed to pipe stderr of container \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\"" error="reading from a closed fifo" Dec 13 14:36:30.261025 env[1408]: time="2024-12-13T14:36:30.260966248Z" level=error msg="StartContainer for \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:36:30.262462 kubelet[2473]: E1213 14:36:30.261253 2473 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: 
code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840" Dec 13 14:36:30.262462 kubelet[2473]: E1213 14:36:30.261576 2473 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:36:30.262462 kubelet[2473]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:36:30.262462 kubelet[2473]: rm /hostbin/cilium-mount Dec 13 14:36:30.262841 kubelet[2473]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7b95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w7fwl_kube-system(1c888b73-caff-4f5b-9a07-4015fcbc2b87): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:36:30.263003 kubelet[2473]: E1213 14:36:30.261617 2473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w7fwl" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" Dec 13 14:36:30.644982 sshd[4196]: Accepted publickey for core from 10.200.16.10 port 44998 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:30.646393 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:30.651154 systemd-logind[1397]: New 
session 25 of user core. Dec 13 14:36:30.651776 systemd[1]: Started session-25.scope. Dec 13 14:36:31.089227 env[1408]: time="2024-12-13T14:36:31.089179210Z" level=info msg="CreateContainer within sandbox \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:36:31.125152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268896759.mount: Deactivated successfully. Dec 13 14:36:31.131407 env[1408]: time="2024-12-13T14:36:31.131366956Z" level=info msg="CreateContainer within sandbox \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\"" Dec 13 14:36:31.132188 env[1408]: time="2024-12-13T14:36:31.132150258Z" level=info msg="StartContainer for \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\"" Dec 13 14:36:31.164409 systemd[1]: Started cri-containerd-7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb.scope. Dec 13 14:36:31.184573 systemd[1]: cri-containerd-7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb.scope: Deactivated successfully. Dec 13 14:36:31.206705 env[1408]: time="2024-12-13T14:36:31.206628315Z" level=info msg="shim disconnected" id=7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb Dec 13 14:36:31.206705 env[1408]: time="2024-12-13T14:36:31.206703916Z" level=warning msg="cleaning up after shim disconnected" id=7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb namespace=k8s.io Dec 13 14:36:31.206981 env[1408]: time="2024-12-13T14:36:31.206715316Z" level=info msg="cleaning up dead shim" Dec 13 14:36:31.215350 env[1408]: time="2024-12-13T14:36:31.215314645Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4317 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:36:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:36:31.215731 env[1408]: time="2024-12-13T14:36:31.215681147Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Dec 13 14:36:31.217785 env[1408]: time="2024-12-13T14:36:31.217735154Z" level=error msg="Failed to pipe stdout of container \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\"" error="reading from a closed fifo" Dec 13 14:36:31.217956 env[1408]: time="2024-12-13T14:36:31.217928654Z" level=error msg="Failed to pipe stderr of container \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\"" error="reading from a closed fifo" Dec 13 14:36:31.222358 env[1408]: time="2024-12-13T14:36:31.222326970Z" level=error msg="StartContainer for \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:36:31.222890 kubelet[2473]: E1213 14:36:31.222641 2473 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime 
create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb" Dec 13 14:36:31.222890 kubelet[2473]: E1213 14:36:31.222804 2473 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:36:31.222890 kubelet[2473]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:36:31.222890 kubelet[2473]: rm /hostbin/cilium-mount Dec 13 14:36:31.223424 kubelet[2473]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7b95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w7fwl_kube-system(1c888b73-caff-4f5b-9a07-4015fcbc2b87): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:36:31.223561 kubelet[2473]: E1213 14:36:31.222840 2473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w7fwl" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" Dec 13 14:36:31.235098 sshd[4196]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:31.238050 systemd[1]: sshd@22-10.200.8.20:22-10.200.16.10:44998.service: Deactivated successfully. Dec 13 14:36:31.238845 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:36:31.239762 systemd-logind[1397]: Session 25 logged out. Waiting for processes to exit. 
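Note: both StartContainer attempts (339c37aff4... and its retry 7324caaa83...) die on the same syscall. Because the init-container spec above carries SELinuxOptions with Type:spc_t, runc writes that label to /proc/self/attr/keycreate before exec'ing so the session keyring is labeled; on a kernel or policy where that attribute is unsupported the write returns EINVAL, which surfaces as the "invalid argument: unknown" RunContainerError and the per-pod sync backoff seen here. A minimal reproduction of just the failing write (run on the affected host; this mirrors the one step, not runc's full init):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The label string is an assumption assembled from the spec's
	// SELinuxOptions (Type:spc_t, Level:s0) logged above.
	err := os.WriteFile("/proc/self/attr/keycreate",
		[]byte("system_u:system_r:spc_t:s0"), 0o644)
	fmt.Println("write /proc/self/attr/keycreate:", err)
}

The shim noise around each attempt ("failed to read init pid file", "reading from a closed fifo") is a consequence of a task that never started, not a separate fault.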
Dec 13 14:36:31.240640 systemd-logind[1397]: Removed session 25. Dec 13 14:36:31.354182 systemd[1]: Started sshd@23-10.200.8.20:22-10.200.16.10:45004.service. Dec 13 14:36:31.966136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb-rootfs.mount: Deactivated successfully. Dec 13 14:36:32.068365 sshd[4331]: Accepted publickey for core from 10.200.16.10 port 45004 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:32.070090 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:32.075557 systemd[1]: Started session-26.scope. Dec 13 14:36:32.076052 systemd-logind[1397]: New session 26 of user core. Dec 13 14:36:32.087262 kubelet[2473]: I1213 14:36:32.087236 2473 scope.go:117] "RemoveContainer" containerID="339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840" Dec 13 14:36:32.088845 env[1408]: time="2024-12-13T14:36:32.088716757Z" level=info msg="StopPodSandbox for \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\"" Dec 13 14:36:32.089210 env[1408]: time="2024-12-13T14:36:32.089173359Z" level=info msg="Container to stop \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.089350 env[1408]: time="2024-12-13T14:36:32.089327860Z" level=info msg="Container to stop \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:36:32.101819 env[1408]: time="2024-12-13T14:36:32.089105359Z" level=info msg="RemoveContainer for \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\"" Dec 13 14:36:32.093399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad-shm.mount: Deactivated successfully. Dec 13 14:36:32.102045 env[1408]: time="2024-12-13T14:36:32.101876903Z" level=info msg="RemoveContainer for \"339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840\" returns successfully" Dec 13 14:36:32.108629 systemd[1]: cri-containerd-553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad.scope: Deactivated successfully. Dec 13 14:36:32.137635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad-rootfs.mount: Deactivated successfully. 
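Note: replacing the broken pod starts with StopPodSandbox on 553a59c4..., which is why the two exited mount-cgroup attempts are first listed as "Container to stop". A sketch of the two CRI calls involved, with ctx, client and imports as in the ContainerStatus sketch above; the kubelet performs only the stop at this point, and the remove is believed to happen later via its sandbox garbage collection, which this log section does not show:

// stopAndRemoveSandbox issues the sandbox teardown RPCs over CRI.
func stopAndRemoveSandbox(ctx context.Context,
	client runtimeapi.RuntimeServiceClient, id string) error {
	// Stop tears down the network and kills remaining containers.
	if _, err := client.StopPodSandbox(ctx,
		&runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return err
	}
	// Remove deletes the sandbox record itself.
	_, err := client.RemovePodSandbox(ctx,
		&runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
	return err
}

Usage, with the sandbox ID from the log: stopAndRemoveSandbox(ctx, client, "553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad").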
Dec 13 14:36:32.161266 env[1408]: time="2024-12-13T14:36:32.161206207Z" level=info msg="shim disconnected" id=553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad Dec 13 14:36:32.161514 env[1408]: time="2024-12-13T14:36:32.161275607Z" level=warning msg="cleaning up after shim disconnected" id=553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad namespace=k8s.io Dec 13 14:36:32.161514 env[1408]: time="2024-12-13T14:36:32.161290507Z" level=info msg="cleaning up dead shim" Dec 13 14:36:32.169791 env[1408]: time="2024-12-13T14:36:32.169748636Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4354 runtime=io.containerd.runc.v2\n" Dec 13 14:36:32.170086 env[1408]: time="2024-12-13T14:36:32.170053037Z" level=info msg="TearDown network for sandbox \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\" successfully" Dec 13 14:36:32.170086 env[1408]: time="2024-12-13T14:36:32.170082138Z" level=info msg="StopPodSandbox for \"553a59c48997f3aa1edab172413fb643615601506ba57e2971185045ce172cad\" returns successfully" Dec 13 14:36:32.369990 kubelet[2473]: I1213 14:36:32.369931 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cni-path\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370565 kubelet[2473]: I1213 14:36:32.370002 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-ipsec-secrets\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370565 kubelet[2473]: I1213 14:36:32.370030 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-cgroup\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370565 kubelet[2473]: I1213 14:36:32.370053 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-run\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370565 kubelet[2473]: I1213 14:36:32.370536 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7b95\" (UniqueName: \"kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-kube-api-access-j7b95\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370860 kubelet[2473]: I1213 14:36:32.370598 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-clustermesh-secrets\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370860 kubelet[2473]: I1213 14:36:32.370677 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-lib-modules\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: 
\"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.370860 kubelet[2473]: I1213 14:36:32.370718 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-net\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371027 kubelet[2473]: I1213 14:36:32.370754 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-kernel\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371027 kubelet[2473]: I1213 14:36:32.370951 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hubble-tls\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371027 kubelet[2473]: I1213 14:36:32.371004 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-etc-cni-netd\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371200 kubelet[2473]: I1213 14:36:32.371039 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-xtables-lock\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371200 kubelet[2473]: I1213 14:36:32.371080 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-config-path\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371200 kubelet[2473]: I1213 14:36:32.371124 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-bpf-maps\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.371359 kubelet[2473]: I1213 14:36:32.371152 2473 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hostproc\") pod \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\" (UID: \"1c888b73-caff-4f5b-9a07-4015fcbc2b87\") " Dec 13 14:36:32.372004 kubelet[2473]: I1213 14:36:32.371951 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.372155 kubelet[2473]: I1213 14:36:32.372032 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.372155 kubelet[2473]: I1213 14:36:32.372059 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.372486 kubelet[2473]: I1213 14:36:32.372342 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.383017 systemd[1]: var-lib-kubelet-pods-1c888b73\x2dcaff\x2d4f5b\x2d9a07\x2d4015fcbc2b87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj7b95.mount: Deactivated successfully. Dec 13 14:36:32.388701 kubelet[2473]: I1213 14:36:32.387795 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.388701 kubelet[2473]: I1213 14:36:32.387877 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.383149 systemd[1]: var-lib-kubelet-pods-1c888b73\x2dcaff\x2d4f5b\x2d9a07\x2d4015fcbc2b87-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:36:32.390641 kubelet[2473]: I1213 14:36:32.390594 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:36:32.390797 kubelet[2473]: I1213 14:36:32.390678 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.393378 systemd[1]: var-lib-kubelet-pods-1c888b73\x2dcaff\x2d4f5b\x2d9a07\x2d4015fcbc2b87-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:36:32.394573 kubelet[2473]: I1213 14:36:32.394541 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.394740 kubelet[2473]: I1213 14:36:32.394605 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.394740 kubelet[2473]: I1213 14:36:32.394634 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:36:32.394846 kubelet[2473]: I1213 14:36:32.394820 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:36:32.394937 kubelet[2473]: I1213 14:36:32.394917 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-kube-api-access-j7b95" (OuterVolumeSpecName: "kube-api-access-j7b95") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "kube-api-access-j7b95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:32.395013 kubelet[2473]: I1213 14:36:32.394989 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:36:32.395424 kubelet[2473]: I1213 14:36:32.395396 2473 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c888b73-caff-4f5b-9a07-4015fcbc2b87" (UID: "1c888b73-caff-4f5b-9a07-4015fcbc2b87"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:36:32.471592 kubelet[2473]: I1213 14:36:32.471551 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-cgroup\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.471851 kubelet[2473]: I1213 14:36:32.471835 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-run\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.471949 kubelet[2473]: I1213 14:36:32.471935 2473 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j7b95\" (UniqueName: \"kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-kube-api-access-j7b95\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472026 kubelet[2473]: I1213 14:36:32.472014 2473 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-clustermesh-secrets\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472099 kubelet[2473]: I1213 14:36:32.472088 2473 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-lib-modules\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472171 kubelet[2473]: I1213 14:36:32.472160 2473 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hubble-tls\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472244 kubelet[2473]: I1213 14:36:32.472233 2473 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-net\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472321 kubelet[2473]: I1213 14:36:32.472310 2473 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472393 kubelet[2473]: I1213 14:36:32.472382 2473 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-etc-cni-netd\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472463 kubelet[2473]: I1213 14:36:32.472452 2473 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-xtables-lock\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472531 kubelet[2473]: I1213 14:36:32.472521 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-config-path\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472605 kubelet[2473]: I1213 14:36:32.472593 2473 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-bpf-maps\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472701 kubelet[2473]: I1213 14:36:32.472689 
2473 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-hostproc\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472781 kubelet[2473]: I1213 14:36:32.472770 2473 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cni-path\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.472867 kubelet[2473]: I1213 14:36:32.472851 2473 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c888b73-caff-4f5b-9a07-4015fcbc2b87-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-34fc77c933\" DevicePath \"\"" Dec 13 14:36:32.493819 systemd[1]: Removed slice kubepods-burstable-pod1c888b73_caff_4f5b_9a07_4015fcbc2b87.slice. Dec 13 14:36:32.966102 systemd[1]: var-lib-kubelet-pods-1c888b73\x2dcaff\x2d4f5b\x2d9a07\x2d4015fcbc2b87-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:36:33.090956 kubelet[2473]: I1213 14:36:33.090925 2473 scope.go:117] "RemoveContainer" containerID="7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb" Dec 13 14:36:33.095791 env[1408]: time="2024-12-13T14:36:33.095740223Z" level=info msg="RemoveContainer for \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\"" Dec 13 14:36:33.105394 env[1408]: time="2024-12-13T14:36:33.105358856Z" level=info msg="RemoveContainer for \"7324caaa835bc166c2e0bcbcc14ff51723c3eab49dff0853c03e33fbe9d1e6bb\" returns successfully" Dec 13 14:36:33.152435 kubelet[2473]: I1213 14:36:33.152353 2473 topology_manager.go:215] "Topology Admit Handler" podUID="318ee9b3-6880-47c8-b59f-5dc7f91bd99f" podNamespace="kube-system" podName="cilium-9kqdm" Dec 13 14:36:33.152646 kubelet[2473]: E1213 14:36:33.152478 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" containerName="mount-cgroup" Dec 13 14:36:33.152646 kubelet[2473]: I1213 14:36:33.152511 2473 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" containerName="mount-cgroup" Dec 13 14:36:33.152646 kubelet[2473]: I1213 14:36:33.152519 2473 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" containerName="mount-cgroup" Dec 13 14:36:33.152646 kubelet[2473]: E1213 14:36:33.152552 2473 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" containerName="mount-cgroup" Dec 13 14:36:33.161620 systemd[1]: Created slice kubepods-burstable-pod318ee9b3_6880_47c8_b59f_5dc7f91bd99f.slice. 
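Note: the \x2d and \x7e runs in the .mount unit names above are systemd unit-name escaping, not log corruption. When a kubelet volume path becomes a transient mount unit, '/' separators turn into '-' and bytes outside [a-zA-Z0-9:_.] are hex-escaped. A trimmed-down sketch of that transform, enough to reproduce the names in this log (real systemd-escape also handles leading dots and empty paths):

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path --suffix=mount`.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			b.WriteByte(c) // allowed in unit names as-is
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e, ...
		}
	}
	return b.String() + ".mount"
}

func main() {
	// Reproduces the clustermesh-secrets unit name deactivated above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/1c888b73-caff-4f5b-9a07-4015fcbc2b87" +
		"/volumes/kubernetes.io~secret/clustermesh-secrets"))
}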
Dec 13 14:36:33.177036 kubelet[2473]: I1213 14:36:33.177011 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-bpf-maps\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.177240 kubelet[2473]: I1213 14:36:33.177221 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-lib-modules\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.177405 kubelet[2473]: I1213 14:36:33.177388 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zr4\" (UniqueName: \"kubernetes.io/projected/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-kube-api-access-26zr4\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.177546 kubelet[2473]: I1213 14:36:33.177530 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-etc-cni-netd\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.177705 kubelet[2473]: I1213 14:36:33.177689 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-clustermesh-secrets\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.177840 kubelet[2473]: I1213 14:36:33.177827 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-cilium-config-path\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.177977 kubelet[2473]: I1213 14:36:33.177963 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-host-proc-sys-kernel\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178098 kubelet[2473]: I1213 14:36:33.178085 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-hostproc\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178230 kubelet[2473]: I1213 14:36:33.178216 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-host-proc-sys-net\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178347 kubelet[2473]: I1213 14:36:33.178333 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-xtables-lock\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178465 kubelet[2473]: I1213 14:36:33.178453 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-cilium-cgroup\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178589 kubelet[2473]: I1213 14:36:33.178573 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-cilium-ipsec-secrets\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178738 kubelet[2473]: I1213 14:36:33.178722 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-hubble-tls\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.178925 kubelet[2473]: I1213 14:36:33.178867 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-cilium-run\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.179043 kubelet[2473]: I1213 14:36:33.179030 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/318ee9b3-6880-47c8-b59f-5dc7f91bd99f-cni-path\") pod \"cilium-9kqdm\" (UID: \"318ee9b3-6880-47c8-b59f-5dc7f91bd99f\") " pod="kube-system/cilium-9kqdm" Dec 13 14:36:33.351440 kubelet[2473]: W1213 14:36:33.351356 2473 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c888b73_caff_4f5b_9a07_4015fcbc2b87.slice/cri-containerd-339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840.scope WatchSource:0}: container "339c37aff48341cdfdba65fef91ac6780da038aa3854b56e5bc4956790dfe840" in namespace "k8s.io": not found Dec 13 14:36:33.468817 env[1408]: time="2024-12-13T14:36:33.468773005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kqdm,Uid:318ee9b3-6880-47c8-b59f-5dc7f91bd99f,Namespace:kube-system,Attempt:0,}" Dec 13 14:36:33.502196 env[1408]: time="2024-12-13T14:36:33.502106719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:33.502437 env[1408]: time="2024-12-13T14:36:33.502395620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:33.502437 env[1408]: time="2024-12-13T14:36:33.502414620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:33.502803 env[1408]: time="2024-12-13T14:36:33.502749221Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95 pid=4390 runtime=io.containerd.runc.v2 Dec 13 14:36:33.514424 systemd[1]: Started cri-containerd-c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95.scope. Dec 13 14:36:33.540847 env[1408]: time="2024-12-13T14:36:33.540802152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kqdm,Uid:318ee9b3-6880-47c8-b59f-5dc7f91bd99f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\"" Dec 13 14:36:33.544139 env[1408]: time="2024-12-13T14:36:33.544099563Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:36:33.574231 env[1408]: time="2024-12-13T14:36:33.574167967Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff\"" Dec 13 14:36:33.574881 env[1408]: time="2024-12-13T14:36:33.574847869Z" level=info msg="StartContainer for \"74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff\"" Dec 13 14:36:33.592207 systemd[1]: Started cri-containerd-74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff.scope. Dec 13 14:36:33.603174 kubelet[2473]: E1213 14:36:33.603023 2473 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:36:33.626391 env[1408]: time="2024-12-13T14:36:33.626346846Z" level=info msg="StartContainer for \"74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff\" returns successfully" Dec 13 14:36:33.630078 systemd[1]: cri-containerd-74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff.scope: Deactivated successfully. Dec 13 14:36:33.674182 env[1408]: time="2024-12-13T14:36:33.674133810Z" level=info msg="shim disconnected" id=74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff Dec 13 14:36:33.674182 env[1408]: time="2024-12-13T14:36:33.674182510Z" level=warning msg="cleaning up after shim disconnected" id=74da8b5a41e8bf4014120e0f6f8cdac9837253f44c61a6c926034a4217933aff namespace=k8s.io Dec 13 14:36:33.674532 env[1408]: time="2024-12-13T14:36:33.674193610Z" level=info msg="cleaning up dead shim" Dec 13 14:36:33.682383 env[1408]: time="2024-12-13T14:36:33.682329438Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4476 runtime=io.containerd.runc.v2\n" Dec 13 14:36:34.097921 env[1408]: time="2024-12-13T14:36:34.097871965Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:36:34.124422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600594618.mount: Deactivated successfully. 
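Note: the replacement pod's init chain is now progressing; mount-cgroup "returns successfully" this time, and apply-sysctl-overwrites is created next. That container exists to rewrite host sysctls the agent depends on. The log does not record which ones, so rp_filter below is an assumed example, but the mechanics are plain procfs writes:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative only: the exact sysctls Cilium's apply-sysctl-overwrites
	// touches are not in this log; rp_filter is an assumed example.
	path := "/proc/sys/net/ipv4/conf/all/rp_filter"
	if err := os.WriteFile(path, []byte("0\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote 0 to", path)
}

The "scope: Deactivated" plus "shim disconnected" lines after each successful step are normal for short-lived init containers under the runc v2 shim; the distinguishing signal is that StartContainer now "returns successfully".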
Dec 13 14:36:34.133269 env[1408]: time="2024-12-13T14:36:34.133227586Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326\""
Dec 13 14:36:34.134693 env[1408]: time="2024-12-13T14:36:34.133891389Z" level=info msg="StartContainer for \"fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326\""
Dec 13 14:36:34.159843 systemd[1]: Started cri-containerd-fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326.scope.
Dec 13 14:36:34.198505 env[1408]: time="2024-12-13T14:36:34.198457310Z" level=info msg="StartContainer for \"fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326\" returns successfully"
Dec 13 14:36:34.201044 systemd[1]: cri-containerd-fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326.scope: Deactivated successfully.
Dec 13 14:36:34.231505 env[1408]: time="2024-12-13T14:36:34.231070922Z" level=info msg="shim disconnected" id=fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326
Dec 13 14:36:34.231505 env[1408]: time="2024-12-13T14:36:34.231189522Z" level=warning msg="cleaning up after shim disconnected" id=fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326 namespace=k8s.io
Dec 13 14:36:34.231505 env[1408]: time="2024-12-13T14:36:34.231203822Z" level=info msg="cleaning up dead shim"
Dec 13 14:36:34.232897 kubelet[2473]: I1213 14:36:34.232100 2473 setters.go:580] "Node became not ready" node="ci-3510.3.6-a-34fc77c933" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:36:34Z","lastTransitionTime":"2024-12-13T14:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:36:34.243958 env[1408]: time="2024-12-13T14:36:34.243899166Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4542 runtime=io.containerd.runc.v2\n"
Dec 13 14:36:34.487949 kubelet[2473]: I1213 14:36:34.487903 2473 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c888b73-caff-4f5b-9a07-4015fcbc2b87" path="/var/lib/kubelet/pods/1c888b73-caff-4f5b-9a07-4015fcbc2b87/volumes"
Dec 13 14:36:34.966378 systemd[1]: run-containerd-runc-k8s.io-fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326-runc.uhXCqC.mount: Deactivated successfully.
Dec 13 14:36:34.966769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa91f87d02cae3da67d871b74546fc74111b67774e88e3ed7fa663187d84d326-rootfs.mount: Deactivated successfully.
Dec 13 14:36:35.102466 env[1408]: time="2024-12-13T14:36:35.102405008Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:36:35.130778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077624664.mount: Deactivated successfully.
Dec 13 14:36:35.142807 env[1408]: time="2024-12-13T14:36:35.142761046Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1\""
Dec 13 14:36:35.144505 env[1408]: time="2024-12-13T14:36:35.143470549Z" level=info msg="StartContainer for \"82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1\""
Dec 13 14:36:35.169558 systemd[1]: Started cri-containerd-82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1.scope.
Dec 13 14:36:35.204895 systemd[1]: cri-containerd-82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1.scope: Deactivated successfully.
Dec 13 14:36:35.206509 env[1408]: time="2024-12-13T14:36:35.206472464Z" level=info msg="StartContainer for \"82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1\" returns successfully"
Dec 13 14:36:35.243494 env[1408]: time="2024-12-13T14:36:35.243425491Z" level=info msg="shim disconnected" id=82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1
Dec 13 14:36:35.243494 env[1408]: time="2024-12-13T14:36:35.243488591Z" level=warning msg="cleaning up after shim disconnected" id=82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1 namespace=k8s.io
Dec 13 14:36:35.243818 env[1408]: time="2024-12-13T14:36:35.243500591Z" level=info msg="cleaning up dead shim"
Dec 13 14:36:35.251105 env[1408]: time="2024-12-13T14:36:35.251065817Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4600 runtime=io.containerd.runc.v2\n"
Dec 13 14:36:35.967103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82181d9c00aec651b962526c0e050b518f99eeff36c42dc8e63d89c3267173f1-rootfs.mount: Deactivated successfully.
Dec 13 14:36:36.107558 env[1408]: time="2024-12-13T14:36:36.107510847Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:36:36.150348 env[1408]: time="2024-12-13T14:36:36.150294993Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a\""
Dec 13 14:36:36.152673 env[1408]: time="2024-12-13T14:36:36.150884495Z" level=info msg="StartContainer for \"bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a\""
Dec 13 14:36:36.177035 systemd[1]: Started cri-containerd-bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a.scope.
Dec 13 14:36:36.200838 systemd[1]: cri-containerd-bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a.scope: Deactivated successfully.
Dec 13 14:36:36.202604 env[1408]: time="2024-12-13T14:36:36.202309170Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod318ee9b3_6880_47c8_b59f_5dc7f91bd99f.slice/cri-containerd-bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a.scope/memory.events\": no such file or directory"
Dec 13 14:36:36.210748 env[1408]: time="2024-12-13T14:36:36.210576399Z" level=info msg="StartContainer for \"bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a\" returns successfully"
Dec 13 14:36:36.238849 env[1408]: time="2024-12-13T14:36:36.238802595Z" level=info msg="shim disconnected" id=bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a
Dec 13 14:36:36.239081 env[1408]: time="2024-12-13T14:36:36.238852795Z" level=warning msg="cleaning up after shim disconnected" id=bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a namespace=k8s.io
Dec 13 14:36:36.239081 env[1408]: time="2024-12-13T14:36:36.238865295Z" level=info msg="cleaning up dead shim"
Dec 13 14:36:36.246794 env[1408]: time="2024-12-13T14:36:36.246759422Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4655 runtime=io.containerd.runc.v2\n"
Dec 13 14:36:36.967202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd77c7bbfb6ceb9b0487a5b7d83d0b4a3ec9fcf12b4bb594aa0494f667cb2c4a-rootfs.mount: Deactivated successfully.
Dec 13 14:36:37.112014 env[1408]: time="2024-12-13T14:36:37.111948076Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:36:37.159353 env[1408]: time="2024-12-13T14:36:37.159303537Z" level=info msg="CreateContainer within sandbox \"c128fc534b647697a50727909e0badd913d691289941a9842ecc11d3ef522e95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32\""
Dec 13 14:36:37.161110 env[1408]: time="2024-12-13T14:36:37.160046740Z" level=info msg="StartContainer for \"72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32\""
Dec 13 14:36:37.186794 systemd[1]: Started cri-containerd-72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32.scope.
Dec 13 14:36:37.231440 env[1408]: time="2024-12-13T14:36:37.231396583Z" level=info msg="StartContainer for \"72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32\" returns successfully"
Dec 13 14:36:37.562694 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:36:37.967353 systemd[1]: run-containerd-runc-k8s.io-72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32-runc.4KZsMf.mount: Deactivated successfully.
Dec 13 14:36:38.692604 systemd[1]: run-containerd-runc-k8s.io-72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32-runc.eKrCK3.mount: Deactivated successfully.
Dec 13 14:36:40.220761 systemd-networkd[1571]: lxc_health: Link UP
Dec 13 14:36:40.229724 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:36:40.229488 systemd-networkd[1571]: lxc_health: Gained carrier
Dec 13 14:36:40.883086 systemd[1]: run-containerd-runc-k8s.io-72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32-runc.1sKU2i.mount: Deactivated successfully.
Dec 13 14:36:41.500519 kubelet[2473]: I1213 14:36:41.500446 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9kqdm" podStartSLOduration=8.500423482 podStartE2EDuration="8.500423482s" podCreationTimestamp="2024-12-13 14:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:36:38.155843433 +0000 UTC m=+260.185344729" watchObservedRunningTime="2024-12-13 14:36:41.500423482 +0000 UTC m=+263.529924778"
Dec 13 14:36:41.722917 systemd-networkd[1571]: lxc_health: Gained IPv6LL
Dec 13 14:36:43.074522 systemd[1]: run-containerd-runc-k8s.io-72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32-runc.FNlLy8.mount: Deactivated successfully.
Dec 13 14:36:45.217526 systemd[1]: run-containerd-runc-k8s.io-72ccd0f39bb91221de9d0bee867d5ef0328fad907899a7b726ba9832b45d8d32-runc.XHZdtR.mount: Deactivated successfully.
Dec 13 14:36:45.403903 sshd[4331]: pam_unix(sshd:session): session closed for user core
Dec 13 14:36:45.407517 systemd[1]: sshd@23-10.200.8.20:22-10.200.16.10:45004.service: Deactivated successfully.
Dec 13 14:36:45.408512 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:36:45.409375 systemd-logind[1397]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:36:45.410416 systemd-logind[1397]: Removed session 26.