Dec 13 01:48:22.026051 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 01:48:22.026094 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:48:22.026107 kernel: BIOS-provided physical RAM map: Dec 13 01:48:22.026116 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:48:22.026125 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 01:48:22.026135 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 01:48:22.026149 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 01:48:22.026159 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 01:48:22.026170 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 01:48:22.026180 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 01:48:22.026191 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 01:48:22.026202 kernel: printk: bootconsole [earlyser0] enabled Dec 13 01:48:22.026212 kernel: NX (Execute Disable) protection: active Dec 13 01:48:22.026221 kernel: efi: EFI v2.70 by Microsoft Dec 13 01:48:22.026236 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Dec 13 01:48:22.026247 kernel: random: crng init done Dec 13 01:48:22.026258 kernel: SMBIOS 3.1.0 present. 
Dec 13 01:48:22.026270 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 01:48:22.026281 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 01:48:22.026292 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 01:48:22.026301 kernel: Hyper-V Host Build:20348-10.0-1-0.1633 Dec 13 01:48:22.026311 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 01:48:22.026325 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 01:48:22.026336 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 01:48:22.026348 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 01:48:22.026358 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 01:48:22.026370 kernel: tsc: Detected 2593.908 MHz processor Dec 13 01:48:22.026381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:48:22.026392 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:48:22.026403 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 01:48:22.026415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:48:22.026426 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 01:48:22.026440 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 01:48:22.026451 kernel: Using GB pages for direct mapping Dec 13 01:48:22.026463 kernel: Secure boot disabled Dec 13 01:48:22.026475 kernel: ACPI: Early table checksum verification disabled Dec 13 01:48:22.026488 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 01:48:22.026500 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026513 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026526 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 01:48:22.026546 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 01:48:22.026559 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026572 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026584 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026596 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026610 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026626 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026638 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:48:22.026670 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 01:48:22.026683 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 01:48:22.026696 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 01:48:22.026709 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 01:48:22.026722 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 01:48:22.026735 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 01:48:22.026751 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Dec 13 01:48:22.026764 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 01:48:22.026777 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 01:48:22.026789 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 01:48:22.026802 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:48:22.026814 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:48:22.026827 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 01:48:22.026840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 01:48:22.026852 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 01:48:22.026868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 01:48:22.026881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 01:48:22.026894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 01:48:22.026906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 01:48:22.026919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 01:48:22.026932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 01:48:22.026945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 01:48:22.026958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 01:48:22.026971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 01:48:22.026986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 01:48:22.026999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 01:48:22.027012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 01:48:22.027026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 01:48:22.027039 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 01:48:22.027051 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 01:48:22.027064 kernel: Zone ranges: Dec 13 01:48:22.027076 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:48:22.027089 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:48:22.027106 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 01:48:22.027118 kernel: Movable zone start for each node Dec 13 01:48:22.027131 kernel: Early memory node ranges Dec 13 01:48:22.027144 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:48:22.027157 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 01:48:22.027170 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 01:48:22.027183 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 01:48:22.027196 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 01:48:22.027209 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:48:22.027224 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:48:22.027237 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 01:48:22.027249 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 01:48:22.027262 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 01:48:22.027275 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 
01:48:22.027287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:48:22.027300 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:48:22.027312 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 01:48:22.027324 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:48:22.027341 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 01:48:22.027354 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 01:48:22.027367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:48:22.027380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:48:22.027393 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 01:48:22.027406 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 01:48:22.027419 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:48:22.027432 kernel: Hyper-V: PV spinlocks enabled Dec 13 01:48:22.027445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:48:22.027460 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 01:48:22.027473 kernel: Policy zone: Normal Dec 13 01:48:22.027488 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:48:22.027502 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:48:22.027515 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:48:22.027527 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:48:22.027540 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:48:22.027554 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved) Dec 13 01:48:22.027569 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:48:22.027582 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 01:48:22.027604 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 01:48:22.027620 kernel: rcu: Hierarchical RCU implementation. Dec 13 01:48:22.027636 kernel: rcu: RCU event tracing is enabled. Dec 13 01:48:22.027660 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:48:22.028716 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:48:22.028734 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:48:22.028749 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:48:22.028762 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:48:22.028775 kernel: Using NULL legacy PIC Dec 13 01:48:22.028794 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 01:48:22.028807 kernel: Console: colour dummy device 80x25 Dec 13 01:48:22.028821 kernel: printk: console [tty1] enabled Dec 13 01:48:22.028834 kernel: printk: console [ttyS0] enabled Dec 13 01:48:22.028847 kernel: printk: bootconsole [earlyser0] disabled Dec 13 01:48:22.028863 kernel: ACPI: Core revision 20210730 Dec 13 01:48:22.028876 kernel: Failed to register legacy timer interrupt Dec 13 01:48:22.028889 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:48:22.028901 kernel: Hyper-V: Using IPI hypercalls Dec 13 01:48:22.028915 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Dec 13 01:48:22.028928 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:48:22.028942 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:48:22.028955 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:48:22.028967 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:48:22.028981 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:48:22.028997 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:48:22.029009 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 01:48:22.029023 kernel: RETBleed: Vulnerable Dec 13 01:48:22.029036 kernel: Speculative Store Bypass: Vulnerable Dec 13 01:48:22.029049 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:48:22.029062 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:48:22.029075 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:48:22.029088 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:48:22.029101 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:48:22.029114 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:48:22.029129 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 01:48:22.029142 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 01:48:22.029156 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 01:48:22.029169 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:48:22.029183 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 01:48:22.029196 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 01:48:22.029209 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 01:48:22.029222 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 01:48:22.029236 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:48:22.029249 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:48:22.029262 kernel: LSM: Security Framework initializing Dec 13 01:48:22.029275 kernel: SELinux: Initializing. 
Dec 13 01:48:22.029290 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:48:22.029302 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:48:22.029316 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 01:48:22.029329 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 01:48:22.029343 kernel: signal: max sigframe size: 3632 Dec 13 01:48:22.029355 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:48:22.029368 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:48:22.029381 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:48:22.029394 kernel: x86: Booting SMP configuration: Dec 13 01:48:22.029408 kernel: .... node #0, CPUs: #1 Dec 13 01:48:22.029425 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 01:48:22.029438 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:48:22.029451 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:48:22.029464 kernel: smpboot: Max logical packages: 1 Dec 13 01:48:22.029476 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Dec 13 01:48:22.029489 kernel: devtmpfs: initialized Dec 13 01:48:22.029501 kernel: x86/mm: Memory block size: 128MB Dec 13 01:48:22.029514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 01:48:22.029529 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:48:22.029542 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:48:22.029555 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:48:22.029568 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:48:22.029580 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:48:22.029594 kernel: audit: type=2000 audit(1734054500.025:1): state=initialized audit_enabled=0 res=1 Dec 13 01:48:22.029607 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:48:22.029620 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:48:22.029633 kernel: cpuidle: using governor menu Dec 13 01:48:22.029648 kernel: ACPI: bus type PCI registered Dec 13 01:48:22.029673 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:48:22.029686 kernel: dca service started, version 1.12.1 Dec 13 01:48:22.029700 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:48:22.029714 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:48:22.029727 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:48:22.029740 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:48:22.029753 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:48:22.029766 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:48:22.029782 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:48:22.029795 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 01:48:22.029809 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 01:48:22.029823 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 01:48:22.029836 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:48:22.029849 kernel: ACPI: Interpreter enabled Dec 13 01:48:22.029862 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:48:22.029875 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:48:22.029889 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:48:22.029905 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 01:48:22.029918 kernel: iommu: Default domain type: Translated Dec 13 01:48:22.029932 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:48:22.029945 kernel: vgaarb: loaded Dec 13 01:48:22.029958 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:48:22.029972 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Dec 13 01:48:22.029986 kernel: PTP clock support registered Dec 13 01:48:22.029999 kernel: Registered efivars operations Dec 13 01:48:22.030012 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:48:22.030025 kernel: PCI: System does not support PCI Dec 13 01:48:22.030040 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 01:48:22.030054 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:48:22.030068 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:48:22.030081 kernel: pnp: PnP ACPI init Dec 13 01:48:22.030094 kernel: pnp: PnP ACPI: found 3 devices Dec 13 01:48:22.030106 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:48:22.030119 kernel: NET: Registered PF_INET protocol family Dec 13 01:48:22.030131 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:48:22.030152 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:48:22.030165 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:48:22.030178 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:48:22.030191 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 01:48:22.030205 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:48:22.030218 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:48:22.030231 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:48:22.030244 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:48:22.030256 kernel: NET: Registered PF_XDP protocol family Dec 13 01:48:22.030272 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:48:22.030285 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:48:22.030297 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Dec 13 01:48:22.030311 kernel:
RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:48:22.030323 kernel: Initialise system trusted keyrings Dec 13 01:48:22.030337 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:48:22.030349 kernel: Key type asymmetric registered Dec 13 01:48:22.030361 kernel: Asymmetric key parser 'x509' registered Dec 13 01:48:22.030373 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 01:48:22.030390 kernel: io scheduler mq-deadline registered Dec 13 01:48:22.030403 kernel: io scheduler kyber registered Dec 13 01:48:22.030416 kernel: io scheduler bfq registered Dec 13 01:48:22.030430 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:48:22.030442 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:48:22.030455 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:48:22.030468 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:48:22.030481 kernel: i8042: PNP: No PS/2 controller found. Dec 13 01:48:22.030643 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 01:48:22.030777 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:48:21 UTC (1734054501) Dec 13 01:48:22.030863 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 01:48:22.030880 kernel: fail to initialize ptp_kvm Dec 13 01:48:22.030894 kernel: intel_pstate: CPU model not supported Dec 13 01:48:22.030907 kernel: efifb: probing for efifb Dec 13 01:48:22.030922 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:48:22.030936 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:48:22.030950 kernel: efifb: scrolling: redraw Dec 13 01:48:22.030968 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:48:22.030982 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:48:22.030996 kernel: fb0: EFI VGA frame buffer device Dec 13 01:48:22.031010 kernel: pstore: Registered efi as persistent store backend Dec 13 01:48:22.031024 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:48:22.031037 kernel: Segment Routing with IPv6 Dec 13 01:48:22.031048 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:48:22.031060 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:48:22.031071 kernel: Key type dns_resolver registered Dec 13 01:48:22.031085 kernel: IPI shorthand broadcast: enabled Dec 13 01:48:22.031097 kernel: sched_clock: Marking stable (762094900, 20649800)->(956859900, -174115200) Dec 13 01:48:22.031110 kernel: registered taskstats version 1 Dec 13 01:48:22.031124 kernel: Loading compiled-in X.509 certificates Dec 13 01:48:22.031137 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 01:48:22.031150 kernel: Key type .fscrypt registered Dec 13 01:48:22.031163 kernel: Key type fscrypt-provisioning registered Dec 13 01:48:22.031177 kernel: pstore: Using crash dump compression: deflate Dec 13 01:48:22.031193 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:48:22.031207 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:48:22.031221 kernel: ima: No architecture policies found Dec 13 01:48:22.031234 kernel: clk: Disabling unused clocks Dec 13 01:48:22.031248 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 01:48:22.031261 kernel: Write protecting the kernel read-only data: 28672k Dec 13 01:48:22.031274 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 01:48:22.031288 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 01:48:22.031301 kernel: Run /init as init process Dec 13 01:48:22.031315 kernel: with arguments: Dec 13 01:48:22.031331 kernel: /init Dec 13 01:48:22.031341 kernel: with environment: Dec 13 01:48:22.031352 kernel: HOME=/ Dec 13 01:48:22.031363 kernel: TERM=linux Dec 13 01:48:22.031374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:48:22.031388 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:48:22.031402 systemd[1]: Detected virtualization microsoft. Dec 13 01:48:22.031416 systemd[1]: Detected architecture x86-64. Dec 13 01:48:22.031424 systemd[1]: Running in initrd. Dec 13 01:48:22.031435 systemd[1]: No hostname configured, using default hostname. Dec 13 01:48:22.031445 systemd[1]: Hostname set to . Dec 13 01:48:22.031455 systemd[1]: Initializing machine ID from random generator. Dec 13 01:48:22.031464 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:48:22.031474 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:48:22.031484 systemd[1]: Reached target cryptsetup.target. Dec 13 01:48:22.031492 systemd[1]: Reached target paths.target. Dec 13 01:48:22.031503 systemd[1]: Reached target slices.target. Dec 13 01:48:22.031519 systemd[1]: Reached target swap.target. Dec 13 01:48:22.031529 systemd[1]: Reached target timers.target. Dec 13 01:48:22.031540 systemd[1]: Listening on iscsid.socket. Dec 13 01:48:22.031548 systemd[1]: Listening on iscsiuio.socket. Dec 13 01:48:22.031555 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:48:22.031563 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:48:22.031575 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:48:22.031583 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:48:22.031591 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:48:22.031600 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:48:22.031608 systemd[1]: Reached target sockets.target. Dec 13 01:48:22.031616 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:48:22.031623 systemd[1]: Finished network-cleanup.service. Dec 13 01:48:22.031631 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:48:22.031638 systemd[1]: Starting systemd-journald.service... Dec 13 01:48:22.031662 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:48:22.031672 systemd[1]: Starting systemd-resolved.service... Dec 13 01:48:22.031682 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 01:48:22.031692 systemd[1]: Finished kmod-static-nodes.service. 
Dec 13 01:48:22.031706 systemd-journald[182]: Journal started Dec 13 01:48:22.031756 systemd-journald[182]: Runtime Journal (/run/log/journal/0bf576d7d43647bea9d4714e3f6895fe) is 8.0M, max 159.0M, 151.0M free. Dec 13 01:48:22.019684 systemd-modules-load[183]: Inserted module 'overlay' Dec 13 01:48:22.048394 kernel: audit: type=1130 audit(1734054502.033:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.053836 systemd[1]: Started systemd-journald.service. Dec 13 01:48:22.054530 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:48:22.058726 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 01:48:22.064107 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 01:48:22.068981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:48:22.091197 kernel: audit: type=1130 audit(1734054502.053:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.111670 kernel: audit: type=1130 audit(1734054502.058:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.115557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:48:22.124360 systemd-resolved[184]: Positive Trust Anchors: Dec 13 01:48:22.124380 systemd-resolved[184]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:48:22.124424 systemd-resolved[184]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:48:22.130005 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 01:48:22.202753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:48:22.202785 kernel: audit: type=1130 audit(1734054502.062:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.202805 kernel: audit: type=1130 audit(1734054502.114:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:48:22.202823 kernel: Bridge firewalling registered Dec 13 01:48:22.202846 kernel: audit: type=1130 audit(1734054502.137:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.202862 kernel: audit: type=1130 audit(1734054502.188:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.139184 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:48:22.144558 systemd-resolved[184]: Defaulting to hostname 'linux'. Dec 13 01:48:22.186754 systemd[1]: Started systemd-resolved.service. Dec 13 01:48:22.189073 systemd[1]: Reached target nss-lookup.target. Dec 13 01:48:22.200110 systemd-modules-load[183]: Inserted module 'br_netfilter' Dec 13 01:48:22.221884 dracut-cmdline[200]: dracut-dracut-053 Dec 13 01:48:22.226206 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:48:22.241853 kernel: SCSI subsystem initialized Dec 13 01:48:22.257517 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:48:22.257559 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:48:22.263675 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:48:22.266886 systemd-modules-load[183]: Inserted module 'dm_multipath' Dec 13 01:48:22.268311 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:48:22.287456 kernel: audit: type=1130 audit(1734054502.271:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.284150 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:48:22.296182 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 01:48:22.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.309664 kernel: audit: type=1130 audit(1734054502.298:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.342668 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:48:22.361669 kernel: iscsi: registered transport (tcp) Dec 13 01:48:22.388367 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:48:22.388445 kernel: QLogic iSCSI HBA Driver Dec 13 01:48:22.417886 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:48:22.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.423226 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:48:22.473681 kernel: raid6: avx512x4 gen() 18291 MB/s Dec 13 01:48:22.493662 kernel: raid6: avx512x4 xor() 7078 MB/s Dec 13 01:48:22.513659 kernel: raid6: avx512x2 gen() 18237 MB/s Dec 13 01:48:22.534670 kernel: raid6: avx512x2 xor() 29917 MB/s Dec 13 01:48:22.554664 kernel: raid6: avx512x1 gen() 18174 MB/s Dec 13 01:48:22.574663 kernel: raid6: avx512x1 xor() 26786 MB/s Dec 13 01:48:22.594665 kernel: raid6: avx2x4 gen() 18177 MB/s Dec 13 01:48:22.614662 kernel: raid6: avx2x4 xor() 6632 MB/s Dec 13 01:48:22.634664 kernel: raid6: avx2x2 gen() 18222 MB/s Dec 13 01:48:22.654667 kernel: raid6: avx2x2 xor() 22229 MB/s Dec 13 01:48:22.673661 kernel: raid6: avx2x1 gen() 13989 MB/s Dec 13 01:48:22.693661 kernel: raid6: avx2x1 xor() 19469 MB/s Dec 13 01:48:22.713662 kernel: raid6: sse2x4 gen() 11712 MB/s Dec 13 01:48:22.733661 kernel: raid6: sse2x4 xor() 5994 MB/s Dec 13 01:48:22.753661 kernel: raid6: sse2x2 gen() 12897 MB/s Dec 13 01:48:22.773662 kernel: raid6: sse2x2 xor() 7509 MB/s Dec 13 01:48:22.793659 kernel: raid6: sse2x1 gen() 11652 MB/s Dec 13 01:48:22.817889 kernel: raid6: sse2x1 xor() 5916 MB/s Dec 13 01:48:22.817910 kernel: raid6: using algorithm avx512x4 gen() 18291 MB/s Dec 13 01:48:22.817923 kernel: raid6: .... xor() 7078 MB/s, rmw enabled Dec 13 01:48:22.820961 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:48:22.839676 kernel: xor: automatically using best checksumming function avx Dec 13 01:48:22.935686 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:48:22.944136 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:48:22.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.948000 audit: BPF prog-id=7 op=LOAD Dec 13 01:48:22.948000 audit: BPF prog-id=8 op=LOAD Dec 13 01:48:22.949318 systemd[1]: Starting systemd-udevd.service... Dec 13 01:48:22.963778 systemd-udevd[383]: Using default interface naming scheme 'v252'. Dec 13 01:48:22.968511 systemd[1]: Started systemd-udevd.service. Dec 13 01:48:22.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:22.978207 systemd[1]: Starting dracut-pre-trigger.service... 
Dec 13 01:48:22.994284 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Dec 13 01:48:23.024004 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:48:23.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:23.029751 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:48:23.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:23.062787 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:48:23.124577 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:48:23.124643 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 01:48:23.162669 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:48:23.175675 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:48:23.175726 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:48:23.184928 kernel: scsi host0: storvsc_host_t Dec 13 01:48:23.185017 kernel: scsi host1: storvsc_host_t Dec 13 01:48:23.190676 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:48:23.190745 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 01:48:23.204666 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:48:23.212666 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:48:23.212705 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:48:23.213677 kernel: AES CTR mode by8 optimization enabled Dec 13 01:48:23.244764 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:48:23.269974 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:48:23.270003 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:48:23.270024 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 01:48:23.270047 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:48:23.270204 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:48:23.289630 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:48:23.289820 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:48:23.289985 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:48:23.290145 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:48:23.290304 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:48:23.290466 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:48:23.290485 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:48:23.386945 kernel: hv_netvsc 7c1e5221-3fd8-7c1e-5221-3fd87c1e5221 eth0: VF slot 1 added Dec 13 01:48:23.387191 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (447) Dec 13 01:48:23.394668 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:48:23.400673 kernel: hv_pci cbcd7b6c-2b55-4f06-a924-f54d1c077f6c: PCI VMBus probing: Using version 0x10004 Dec 13 01:48:23.495834 kernel: hv_pci cbcd7b6c-2b55-4f06-a924-f54d1c077f6c: PCI host bridge to bus 2b55:00 Dec 13 01:48:23.496431 kernel: pci_bus 2b55:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 01:48:23.496627 kernel: pci_bus 2b55:00: No busn 
resource found for root bus, will use [bus 00-ff] Dec 13 01:48:23.496827 kernel: pci 2b55:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 01:48:23.496992 kernel: pci 2b55:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:48:23.497154 kernel: pci 2b55:00:02.0: enabling Extended Tags Dec 13 01:48:23.497313 kernel: pci 2b55:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2b55:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 01:48:23.497473 kernel: pci_bus 2b55:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:48:23.497663 kernel: pci 2b55:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:48:23.401432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:48:23.415633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:48:23.464816 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:48:23.509208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:48:23.473601 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:48:23.482795 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:48:23.491146 systemd[1]: Starting disk-uuid.service... Dec 13 01:48:23.517709 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:48:23.693678 kernel: mlx5_core 2b55:00:02.0: firmware version: 14.30.5000 Dec 13 01:48:23.954682 kernel: mlx5_core 2b55:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 01:48:23.954847 kernel: mlx5_core 2b55:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 01:48:23.954955 kernel: mlx5_core 2b55:00:02.0: mlx5e_tc_post_act_init:40:(pid 187): firmware level support is missing Dec 13 01:48:23.955053 kernel: hv_netvsc 7c1e5221-3fd8-7c1e-5221-3fd87c1e5221 eth0: VF registering: eth1 Dec 13 01:48:23.955148 kernel: mlx5_core 2b55:00:02.0 eth1: joined to eth0 Dec 13 01:48:23.961694 kernel: mlx5_core 2b55:00:02.0 enP11093s1: renamed from eth1 Dec 13 01:48:24.518663 disk-uuid[556]: The operation has completed successfully. Dec 13 01:48:24.521225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:48:24.586541 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:48:24.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.586647 systemd[1]: Finished disk-uuid.service. Dec 13 01:48:24.602781 systemd[1]: Starting verity-setup.service... Dec 13 01:48:24.624671 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:48:24.699861 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:48:24.704511 systemd[1]: Finished verity-setup.service. Dec 13 01:48:24.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.709040 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:48:24.784539 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:48:24.788443 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Dec 13 01:48:24.788550 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:48:24.792498 systemd[1]: Starting ignition-setup.service... Dec 13 01:48:24.796981 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:48:24.814060 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:48:24.814104 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:48:24.814115 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:48:24.848108 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:48:24.868324 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:48:24.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.874000 audit: BPF prog-id=9 op=LOAD Dec 13 01:48:24.875715 systemd[1]: Starting systemd-networkd.service... Dec 13 01:48:24.899908 systemd-networkd[811]: lo: Link UP Dec 13 01:48:24.899916 systemd-networkd[811]: lo: Gained carrier Dec 13 01:48:24.904367 systemd-networkd[811]: Enumeration completed Dec 13 01:48:24.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.904451 systemd[1]: Started systemd-networkd.service. Dec 13 01:48:24.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.908663 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:48:24.908943 systemd[1]: Finished ignition-setup.service. Dec 13 01:48:24.913297 systemd[1]: Reached target network.target. Dec 13 01:48:24.926485 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 01:48:24.931473 systemd[1]: Starting iscsiuio.service... Dec 13 01:48:24.940080 systemd[1]: Started iscsiuio.service. Dec 13 01:48:24.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.942950 systemd[1]: Starting iscsid.service... Dec 13 01:48:24.951623 iscsid[818]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:48:24.951623 iscsid[818]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 01:48:24.951623 iscsid[818]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 01:48:24.951623 iscsid[818]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:48:24.951623 iscsid[818]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 01:48:24.951623 iscsid[818]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:48:24.951623 iscsid[818]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:48:24.999168 kernel: mlx5_core 2b55:00:02.0 enP11093s1: Link up Dec 13 01:48:24.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:24.948621 systemd[1]: Started iscsid.service. Dec 13 01:48:24.952413 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:48:24.965554 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:48:25.010691 kernel: hv_netvsc 7c1e5221-3fd8-7c1e-5221-3fd87c1e5221 eth0: Data path switched to VF: enP11093s1 Dec 13 01:48:24.969550 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:48:25.015978 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:48:24.979330 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:48:24.981714 systemd[1]: Reached target remote-fs.target. Dec 13 01:48:24.986869 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:48:24.996879 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:48:25.016141 systemd-networkd[811]: enP11093s1: Link UP Dec 13 01:48:25.018353 systemd-networkd[811]: eth0: Link UP Dec 13 01:48:25.018562 systemd-networkd[811]: eth0: Gained carrier Dec 13 01:48:25.024898 systemd-networkd[811]: enP11093s1: Gained carrier Dec 13 01:48:25.058834 systemd-networkd[811]: eth0: DHCPv4 address 10.200.8.23/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:48:25.551104 ignition[814]: Ignition 2.14.0 Dec 13 01:48:25.551117 ignition[814]: Stage: fetch-offline Dec 13 01:48:25.551194 ignition[814]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:48:25.551242 ignition[814]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:48:25.583840 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:48:25.584046 ignition[814]: parsed url from cmdline: "" Dec 13 01:48:25.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:25.585372 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:48:25.584051 ignition[814]: no config URL provided Dec 13 01:48:25.590709 systemd[1]: Starting ignition-fetch.service... 
Dec 13 01:48:25.584060 ignition[814]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:48:25.584070 ignition[814]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:48:25.584076 ignition[814]: failed to fetch config: resource requires networking Dec 13 01:48:25.584491 ignition[814]: Ignition finished successfully Dec 13 01:48:25.601072 ignition[838]: Ignition 2.14.0 Dec 13 01:48:25.601079 ignition[838]: Stage: fetch Dec 13 01:48:25.601189 ignition[838]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:48:25.601217 ignition[838]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:48:25.607225 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:48:25.607458 ignition[838]: parsed url from cmdline: "" Dec 13 01:48:25.607463 ignition[838]: no config URL provided Dec 13 01:48:25.607469 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:48:25.607477 ignition[838]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:48:25.607527 ignition[838]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:48:25.688954 ignition[838]: GET result: OK Dec 13 01:48:25.689125 ignition[838]: config has been read from IMDS userdata Dec 13 01:48:25.689172 ignition[838]: parsing config with SHA512: 14ab4083cebb2cd1e66b897bdaadda32a9fe3262f53372bda04f47e6d26231e580d81f15ed885e94a1aee16e6c7c298d6f6c3dc1e346947bd2e4428c763474f4 Dec 13 01:48:25.693989 unknown[838]: fetched base config from "system" Dec 13 01:48:25.693999 unknown[838]: fetched base config from "system" Dec 13 01:48:25.694608 ignition[838]: fetch: fetch complete Dec 13 01:48:25.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:25.694006 unknown[838]: fetched user config from "azure" Dec 13 01:48:25.694613 ignition[838]: fetch: fetch passed Dec 13 01:48:25.696523 systemd[1]: Finished ignition-fetch.service. Dec 13 01:48:25.694668 ignition[838]: Ignition finished successfully Dec 13 01:48:25.712200 systemd[1]: Starting ignition-kargs.service... Dec 13 01:48:25.723169 ignition[844]: Ignition 2.14.0 Dec 13 01:48:25.723179 ignition[844]: Stage: kargs Dec 13 01:48:25.723318 ignition[844]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:48:25.723352 ignition[844]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:48:25.728181 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:48:25.729987 ignition[844]: kargs: kargs passed Dec 13 01:48:25.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:25.732377 systemd[1]: Finished ignition-kargs.service. Dec 13 01:48:25.730029 ignition[844]: Ignition finished successfully Dec 13 01:48:25.735252 systemd[1]: Starting ignition-disks.service... 
Dec 13 01:48:25.744351 ignition[850]: Ignition 2.14.0 Dec 13 01:48:25.744357 ignition[850]: Stage: disks Dec 13 01:48:25.744454 ignition[850]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:48:25.744482 ignition[850]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:48:25.753081 systemd[1]: Finished ignition-disks.service. Dec 13 01:48:25.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:25.749044 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:48:25.755913 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:48:25.751428 ignition[850]: disks: disks passed Dec 13 01:48:25.759853 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:48:25.751488 ignition[850]: Ignition finished successfully Dec 13 01:48:25.761817 systemd[1]: Reached target local-fs.target. Dec 13 01:48:25.763722 systemd[1]: Reached target sysinit.target. Dec 13 01:48:25.768199 systemd[1]: Reached target basic.target. Dec 13 01:48:25.770791 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:48:25.795693 systemd-fsck[858]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 01:48:25.799582 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:48:25.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:25.804742 systemd[1]: Mounting sysroot.mount... Dec 13 01:48:25.826672 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:48:25.826669 systemd[1]: Mounted sysroot.mount. Dec 13 01:48:25.829822 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:48:25.840837 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:48:25.845875 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 01:48:25.850663 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:48:25.850704 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:48:25.858432 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:48:25.871639 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:48:25.876985 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:48:25.885007 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (868) Dec 13 01:48:25.894136 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:48:25.894172 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:48:25.894187 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:48:25.898179 initrd-setup-root[873]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:48:25.904357 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:48:25.909267 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:48:25.918697 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:48:25.925644 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:48:26.036838 systemd[1]: Finished initrd-setup-root.service. 
Dec 13 01:48:26.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:26.048467 systemd[1]: Starting ignition-mount.service...
Dec 13 01:48:26.053799 systemd[1]: Starting sysroot-boot.service...
Dec 13 01:48:26.063184 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 01:48:26.063403 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 01:48:26.087529 ignition[936]: INFO : Ignition 2.14.0
Dec 13 01:48:26.087529 ignition[936]: INFO : Stage: mount
Dec 13 01:48:26.087529 ignition[936]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:26.087529 ignition[936]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:26.088453 systemd[1]: Finished sysroot-boot.service.
Dec 13 01:48:26.096948 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:26.098940 ignition[936]: INFO : mount: mount passed
Dec 13 01:48:26.099708 ignition[936]: INFO : Ignition finished successfully
Dec 13 01:48:26.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:26.111604 systemd[1]: Finished ignition-mount.service.
Dec 13 01:48:26.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:26.188881 coreos-metadata[867]: Dec 13 01:48:26.188 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:48:26.195032 coreos-metadata[867]: Dec 13 01:48:26.195 INFO Fetch successful
Dec 13 01:48:26.231129 coreos-metadata[867]: Dec 13 01:48:26.231 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:48:26.248928 coreos-metadata[867]: Dec 13 01:48:26.248 INFO Fetch successful
Dec 13 01:48:26.255181 coreos-metadata[867]: Dec 13 01:48:26.255 INFO wrote hostname ci-3510.3.6-a-1addd118d4 to /sysroot/etc/hostname
Dec 13 01:48:26.260420 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 01:48:26.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:26.266824 systemd[1]: Starting ignition-files.service...
Dec 13 01:48:26.272357 kernel: kauditd_printk_skb: 26 callbacks suppressed
Dec 13 01:48:26.272397 kernel: audit: type=1130 audit(1734054506.265:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:26.281252 systemd-networkd[811]: eth0: Gained IPv6LL
Dec 13 01:48:26.289840 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 01:48:26.303679 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (946) Dec 13 01:48:26.303714 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:48:26.311078 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:48:26.311107 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:48:26.319398 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:48:26.332221 ignition[965]: INFO : Ignition 2.14.0 Dec 13 01:48:26.332221 ignition[965]: INFO : Stage: files Dec 13 01:48:26.335850 ignition[965]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:48:26.335850 ignition[965]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:48:26.347583 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:48:26.355932 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:48:26.358978 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:48:26.358978 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:48:26.371274 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:48:26.374681 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:48:26.380796 unknown[965]: wrote ssh authorized keys file for user: core Dec 13 01:48:26.383234 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:48:26.390046 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:48:26.394408 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:48:26.475071 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:48:26.555756 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:48:26.561615 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:48:26.561615 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:48:27.086672 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:48:27.230322 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 01:48:27.235199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 01:48:27.316183 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (967) Dec 13 01:48:27.261022 systemd[1]: mnt-oem562107472.mount: Deactivated successfully. 
Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem562107472" Dec 13 01:48:27.318734 ignition[965]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem562107472": device or resource busy Dec 13 01:48:27.318734 ignition[965]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem562107472", trying btrfs: device or resource busy Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem562107472" Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem562107472" Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem562107472" Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem562107472" Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2135392286" Dec 13 01:48:27.318734 ignition[965]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2135392286": device or resource busy Dec 13 01:48:27.318734 ignition[965]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2135392286", trying btrfs: device or resource busy Dec 13 01:48:27.318734 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2135392286" Dec 13 01:48:27.286436 systemd[1]: mnt-oem2135392286.mount: Deactivated successfully. 
Dec 13 01:48:27.390208 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2135392286" Dec 13 01:48:27.390208 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2135392286" Dec 13 01:48:27.390208 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2135392286" Dec 13 01:48:27.390208 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 01:48:27.390208 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:48:27.390208 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:48:27.822802 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Dec 13 01:48:28.729011 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:48:28.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(14): [started] processing unit "nvidia.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(14): [finished] processing unit "nvidia.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(15): [started] processing unit "waagent.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(15): [finished] processing unit "waagent.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:48:28.736846 ignition[965]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:48:28.736846 ignition[965]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 
01:48:28.736846 ignition[965]: INFO : files: files passed Dec 13 01:48:28.736846 ignition[965]: INFO : Ignition finished successfully Dec 13 01:48:28.860814 kernel: audit: type=1130 audit(1734054508.736:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.860849 kernel: audit: type=1130 audit(1734054508.804:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.860868 kernel: audit: type=1130 audit(1734054508.827:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.860891 kernel: audit: type=1131 audit(1734054508.827:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.733158 systemd[1]: Finished ignition-files.service. Dec 13 01:48:28.738662 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 01:48:28.767550 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 01:48:28.870618 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:48:28.790604 systemd[1]: Starting ignition-quench.service... Dec 13 01:48:28.799038 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 01:48:28.805205 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:48:28.825093 systemd[1]: Finished ignition-quench.service. Dec 13 01:48:28.828058 systemd[1]: Reached target ignition-complete.target. Dec 13 01:48:28.855207 systemd[1]: Starting initrd-parse-etc.service... Dec 13 01:48:28.894945 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:48:28.897355 systemd[1]: Finished initrd-parse-etc.service. Dec 13 01:48:28.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.901711 systemd[1]: Reached target initrd-fs.target. Dec 13 01:48:28.929738 kernel: audit: type=1130 audit(1734054508.900:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:48:28.929770 kernel: audit: type=1131 audit(1734054508.900:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.926729 systemd[1]: Reached target initrd.target. Dec 13 01:48:28.929770 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 01:48:28.930670 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 01:48:28.944955 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 01:48:28.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.949780 systemd[1]: Starting initrd-cleanup.service... Dec 13 01:48:28.963591 kernel: audit: type=1130 audit(1734054508.948:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.969203 systemd[1]: Stopped target nss-lookup.target. Dec 13 01:48:28.973394 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 01:48:28.975660 systemd[1]: Stopped target timers.target. Dec 13 01:48:28.979499 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:48:28.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.979599 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 01:48:28.999809 kernel: audit: type=1131 audit(1734054508.983:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:28.995681 systemd[1]: Stopped target initrd.target. Dec 13 01:48:28.999864 systemd[1]: Stopped target basic.target. Dec 13 01:48:29.003543 systemd[1]: Stopped target ignition-complete.target. Dec 13 01:48:29.007366 systemd[1]: Stopped target ignition-diskful.target. Dec 13 01:48:29.011450 systemd[1]: Stopped target initrd-root-device.target. Dec 13 01:48:29.015836 systemd[1]: Stopped target remote-fs.target. Dec 13 01:48:29.019813 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 01:48:29.023700 systemd[1]: Stopped target sysinit.target. Dec 13 01:48:29.030377 systemd[1]: Stopped target local-fs.target. Dec 13 01:48:29.034205 systemd[1]: Stopped target local-fs-pre.target. Dec 13 01:48:29.037969 systemd[1]: Stopped target swap.target. Dec 13 01:48:29.041722 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:48:29.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.041866 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 01:48:29.062270 kernel: audit: type=1131 audit(1734054509.045:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 13 01:48:29.057790 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:48:29.062248 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:48:29.062410 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:48:29.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.066381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:48:29.066509 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:48:29.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.070957 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:48:29.071088 systemd[1]: Stopped ignition-files.service.
Dec 13 01:48:29.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.074957 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:48:29.075084 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 01:48:29.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.080794 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:48:29.084074 systemd[1]: Stopping iscsid.service...
Dec 13 01:48:29.088609 iscsid[818]: iscsid shutting down.
Dec 13 01:48:29.091281 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:48:29.095064 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:48:29.095229 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:48:29.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.097614 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:48:29.097772 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:48:29.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.104994 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 01:48:29.105281 ignition[1003]: INFO : Ignition 2.14.0
Dec 13 01:48:29.105281 ignition[1003]: INFO : Stage: umount
Dec 13 01:48:29.105281 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:29.105281 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:29.120070 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:29.121700 systemd[1]: Stopped iscsid.service.
Dec 13 01:48:29.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.126759 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:48:29.127263 ignition[1003]: INFO : umount: umount passed
Dec 13 01:48:29.127263 ignition[1003]: INFO : Ignition finished successfully
Dec 13 01:48:29.128003 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:48:29.128091 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:48:29.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.134939 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:48:29.135025 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:48:29.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.138184 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:48:29.138227 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:48:29.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.142874 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:48:29.142922 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:48:29.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.147546 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:48:29.147595 systemd[1]: Stopped ignition-fetch.service.
Dec 13 01:48:29.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.151531 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:48:29.151579 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:48:29.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.156290 systemd[1]: Stopped target paths.target.
Dec 13 01:48:29.157255 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:48:29.161755 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:48:29.195183 systemd[1]: Stopped target slices.target.
Dec 13 01:48:29.196916 systemd[1]: Stopped target sockets.target.
Dec 13 01:48:29.200428 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:48:29.200472 systemd[1]: Closed iscsid.socket.
Dec 13 01:48:29.204225 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:48:29.204269 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:48:29.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.211156 systemd[1]: Stopping iscsiuio.service...
Dec 13 01:48:29.213730 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 01:48:29.213819 systemd[1]: Stopped iscsiuio.service.
Dec 13 01:48:29.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.217737 systemd[1]: Stopped target network.target.
Dec 13 01:48:29.221355 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:48:29.221399 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:48:29.225738 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:48:29.228979 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:48:29.236578 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:48:29.236701 systemd-networkd[811]: eth0: DHCPv6 lease lost
Dec 13 01:48:29.238350 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:48:29.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.242876 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:48:29.242971 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:48:29.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.250000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:48:29.250000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:48:29.251289 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:48:29.251327 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:48:29.255881 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:48:29.259179 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:48:29.259236 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:48:29.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.263232 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:48:29.263283 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:48:29.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.267746 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:48:29.267796 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:48:29.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.278311 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:48:29.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.285232 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 01:48:29.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.289071 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:48:29.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:29.289205 systemd[1]: Stopped systemd-udevd.service. Dec 13 01:48:29.295021 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:48:29.295061 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 01:48:29.297865 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:48:29.297905 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 01:48:29.338727 kernel: hv_netvsc 7c1e5221-3fd8-7c1e-5221-3fd87c1e5221 eth0: Data path switched from VF: enP11093s1 Dec 13 01:48:29.297991 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:48:29.298026 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 01:48:29.298684 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:48:29.298716 systemd[1]: Stopped dracut-cmdline.service. Dec 13 01:48:29.299190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:48:29.299217 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 01:48:29.300425 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 01:48:29.300587 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:48:29.300639 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 01:48:29.309534 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:48:29.309576 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 01:48:29.314516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:48:29.314566 systemd[1]: Stopped systemd-vconsole-setup.service. 
Dec 13 01:48:29.317980 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:48:29.318419 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:48:29.318503 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:48:29.391269 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:48:29.391400 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:48:29.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.421795 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:48:29.421945 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:48:29.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.426875 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:48:29.430960 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:48:29.431024 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:48:29.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:29.436340 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:48:29.450222 systemd[1]: Switching root.
Dec 13 01:48:29.475359 systemd-journald[182]: Journal stopped
Dec 13 01:48:33.618764 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:48:33.618799 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 01:48:33.618812 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 01:48:33.618822 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 01:48:33.618831 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:48:33.618839 kernel: SELinux: policy capability open_perms=1
Dec 13 01:48:33.618853 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:48:33.618862 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:48:33.618872 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:48:33.618881 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:48:33.618891 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:48:33.618899 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:48:33.618911 systemd[1]: Successfully loaded SELinux policy in 110.847ms.
Dec 13 01:48:33.618921 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.221ms.
Dec 13 01:48:33.618937 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:48:33.618947 systemd[1]: Detected virtualization microsoft.
Dec 13 01:48:33.618958 systemd[1]: Detected architecture x86-64.
Dec 13 01:48:33.618967 systemd[1]: Detected first boot.
Dec 13 01:48:33.618981 systemd[1]: Hostname set to <ci-3510.3.6-a-1addd118d4>.
Dec 13 01:48:33.618991 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:48:33.619003 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 01:48:33.619012 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:48:33.619024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:48:33.619035 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:48:33.619048 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:33.619063 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 13 01:48:33.619074 kernel: audit: type=1334 audit(1734054513.143:92): prog-id=12 op=LOAD Dec 13 01:48:33.619086 kernel: audit: type=1334 audit(1734054513.143:93): prog-id=3 op=UNLOAD Dec 13 01:48:33.619097 kernel: audit: type=1334 audit(1734054513.147:94): prog-id=13 op=LOAD Dec 13 01:48:33.619108 kernel: audit: type=1334 audit(1734054513.152:95): prog-id=14 op=LOAD Dec 13 01:48:33.619117 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:48:33.619127 kernel: audit: type=1334 audit(1734054513.152:96): prog-id=4 op=UNLOAD Dec 13 01:48:33.619137 kernel: audit: type=1334 audit(1734054513.152:97): prog-id=5 op=UNLOAD Dec 13 01:48:33.619151 kernel: audit: type=1131 audit(1734054513.153:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.619160 systemd[1]: Stopped initrd-switch-root.service. Dec 13 01:48:33.619172 kernel: audit: type=1334 audit(1734054513.194:99): prog-id=12 op=UNLOAD Dec 13 01:48:33.619181 kernel: audit: type=1130 audit(1734054513.200:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.619191 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:48:33.619202 kernel: audit: type=1131 audit(1734054513.200:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.619216 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 01:48:33.619227 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 01:48:33.619242 systemd[1]: Created slice system-getty.slice. Dec 13 01:48:33.619252 systemd[1]: Created slice system-modprobe.slice. Dec 13 01:48:33.619264 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 01:48:33.619274 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 01:48:33.619286 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 01:48:33.619296 systemd[1]: Created slice user.slice. Dec 13 01:48:33.619308 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:48:33.619321 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 01:48:33.619332 systemd[1]: Set up automount boot.automount. Dec 13 01:48:33.619343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Dec 13 01:48:33.619354 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 01:48:33.619369 systemd[1]: Stopped target initrd-fs.target. Dec 13 01:48:33.619379 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 01:48:33.619391 systemd[1]: Reached target integritysetup.target. Dec 13 01:48:33.619401 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:48:33.619415 systemd[1]: Reached target remote-fs.target. Dec 13 01:48:33.619425 systemd[1]: Reached target slices.target. Dec 13 01:48:33.619437 systemd[1]: Reached target swap.target. Dec 13 01:48:33.619450 systemd[1]: Reached target torcx.target. Dec 13 01:48:33.619462 systemd[1]: Reached target veritysetup.target. Dec 13 01:48:33.619474 systemd[1]: Listening on systemd-coredump.socket. Dec 13 01:48:33.619486 systemd[1]: Listening on systemd-initctl.socket. Dec 13 01:48:33.619499 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:48:33.619511 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:48:33.619524 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:48:33.619534 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 01:48:33.619546 systemd[1]: Mounting dev-hugepages.mount... Dec 13 01:48:33.619556 systemd[1]: Mounting dev-mqueue.mount... Dec 13 01:48:33.619568 systemd[1]: Mounting media.mount... Dec 13 01:48:33.619581 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:48:33.619592 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 01:48:33.619603 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 01:48:33.619614 systemd[1]: Mounting tmp.mount... Dec 13 01:48:33.619625 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 01:48:33.619636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:48:33.619648 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:48:33.619666 systemd[1]: Starting modprobe@configfs.service... Dec 13 01:48:33.619678 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:48:33.619693 systemd[1]: Starting modprobe@drm.service... Dec 13 01:48:33.619703 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:48:33.619715 systemd[1]: Starting modprobe@fuse.service... Dec 13 01:48:33.619725 systemd[1]: Starting modprobe@loop.service... Dec 13 01:48:33.619738 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:48:33.619748 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:48:33.619757 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 01:48:33.619770 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:48:33.619782 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:48:33.619794 kernel: fuse: init (API version 7.34) Dec 13 01:48:33.619803 systemd[1]: Stopped systemd-journald.service. Dec 13 01:48:33.619816 kernel: loop: module loaded Dec 13 01:48:33.619825 systemd[1]: Starting systemd-journald.service... Dec 13 01:48:33.619837 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:48:33.619850 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:48:33.619863 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:48:33.619875 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:48:33.619889 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:48:33.619902 systemd[1]: Stopped verity-setup.service. 
Dec 13 01:48:33.619912 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:48:33.619925 systemd[1]: Mounted dev-hugepages.mount. Dec 13 01:48:33.619934 systemd[1]: Mounted dev-mqueue.mount. Dec 13 01:48:33.619947 systemd[1]: Mounted media.mount. Dec 13 01:48:33.619961 systemd-journald[1145]: Journal started Dec 13 01:48:33.620012 systemd-journald[1145]: Runtime Journal (/run/log/journal/5aef294cef2a443dbbb4ccb8f8e82a5c) is 8.0M, max 159.0M, 151.0M free. Dec 13 01:48:29.965000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:48:30.190000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:48:30.192000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:48:30.192000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:48:30.192000 audit: BPF prog-id=10 op=LOAD Dec 13 01:48:30.192000 audit: BPF prog-id=10 op=UNLOAD Dec 13 01:48:30.192000 audit: BPF prog-id=11 op=LOAD Dec 13 01:48:30.192000 audit: BPF prog-id=11 op=UNLOAD Dec 13 01:48:30.480000 audit[1036]: AVC avc: denied { associate } for pid=1036 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 01:48:30.480000 audit[1036]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1019 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:48:30.480000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:48:30.487000 audit[1036]: AVC avc: denied { associate } for pid=1036 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 01:48:30.487000 audit[1036]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1019 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:48:30.487000 audit: CWD cwd="/" Dec 13 01:48:30.487000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:48:30.487000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 01:48:30.487000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 01:48:33.143000 audit: BPF prog-id=12 op=LOAD Dec 13 01:48:33.143000 audit: BPF prog-id=3 op=UNLOAD Dec 13 01:48:33.147000 audit: BPF prog-id=13 op=LOAD Dec 13 01:48:33.152000 audit: BPF prog-id=14 op=LOAD Dec 13 01:48:33.152000 audit: BPF prog-id=4 op=UNLOAD Dec 13 01:48:33.152000 audit: BPF prog-id=5 op=UNLOAD Dec 13 01:48:33.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.194000 audit: BPF prog-id=12 op=UNLOAD Dec 13 01:48:33.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.548000 audit: BPF prog-id=15 op=LOAD Dec 13 01:48:33.548000 audit: BPF prog-id=16 op=LOAD Dec 13 01:48:33.548000 audit: BPF prog-id=17 op=LOAD Dec 13 01:48:33.548000 audit: BPF prog-id=13 op=UNLOAD Dec 13 01:48:33.548000 audit: BPF prog-id=14 op=UNLOAD Dec 13 01:48:33.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:48:33.614000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:48:33.614000 audit[1145]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd9a5d9100 a2=4000 a3=7ffd9a5d919c items=0 ppid=1 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:48:33.614000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:48:30.470256 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:48:33.141994 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:48:30.470603 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:48:33.153823 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:48:30.470625 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:48:30.470674 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 01:48:30.470685 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 01:48:30.470734 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 01:48:30.470748 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 01:48:30.470984 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 01:48:30.471028 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 01:48:30.471042 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 01:48:30.477697 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 01:48:30.477753 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 01:48:30.477774 /usr/lib/systemd/system-generators/torcx-generator[1036]: 
time="2024-12-13T01:48:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 01:48:30.477787 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 01:48:30.477814 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 01:48:30.477828 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 01:48:32.700961 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:32Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:48:32.701202 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:32Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:48:32.701297 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:32Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:48:32.701461 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:32Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 01:48:32.701506 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:32Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 01:48:32.701559 /usr/lib/systemd/system-generators/torcx-generator[1036]: time="2024-12-13T01:48:32Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 01:48:33.628726 systemd[1]: Started systemd-journald.service. Dec 13 01:48:33.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:48:33.630534 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:48:33.632574 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:48:33.634645 systemd[1]: Mounted tmp.mount. Dec 13 01:48:33.636586 systemd[1]: Finished flatcar-tmpfiles.service. 
Dec 13 01:48:33.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.639250 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:48:33.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.641774 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:48:33.641987 systemd[1]: Finished modprobe@configfs.service.
Dec 13 01:48:33.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.644293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:48:33.644566 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:48:33.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.647245 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:48:33.647536 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:48:33.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.649943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:48:33.650220 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:48:33.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.652998 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:48:33.653282 systemd[1]: Finished modprobe@fuse.service.
Dec 13 01:48:33.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.655628 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:48:33.655932 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:48:33.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.658356 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:48:33.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.660856 systemd[1]: Finished systemd-network-generator.service.
Dec 13 01:48:33.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.663521 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 01:48:33.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.666388 systemd[1]: Reached target network-pre.target.
Dec 13 01:48:33.670012 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 01:48:33.679725 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 01:48:33.681767 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:48:33.686267 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 01:48:33.690141 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 01:48:33.692722 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:48:33.694090 systemd[1]: Starting systemd-random-seed.service...
Dec 13 01:48:33.696341 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:48:33.698536 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:48:33.702225 systemd[1]: Starting systemd-sysusers.service...
Dec 13 01:48:33.710953 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 01:48:33.713608 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 01:48:33.719378 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:48:33.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.723628 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 01:48:33.738402 systemd-journald[1145]: Time spent on flushing to /var/log/journal/5aef294cef2a443dbbb4ccb8f8e82a5c is 32.415ms for 1149 entries.
Dec 13 01:48:33.738402 systemd-journald[1145]: System Journal (/var/log/journal/5aef294cef2a443dbbb4ccb8f8e82a5c) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:48:33.812097 systemd-journald[1145]: Received client request to flush runtime journal.
Dec 13 01:48:33.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.812396 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:48:33.747358 systemd[1]: Finished systemd-random-seed.service.
Dec 13 01:48:33.749809 systemd[1]: Reached target first-boot-complete.target.
Dec 13 01:48:33.764684 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:48:33.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.813287 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 01:48:33.867550 systemd[1]: Finished systemd-sysusers.service.
Dec 13 01:48:33.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.871607 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:48:33.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:33.963944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:48:34.296119 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 01:48:34.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:34.298000 audit: BPF prog-id=18 op=LOAD
Dec 13 01:48:34.298000 audit: BPF prog-id=19 op=LOAD
Dec 13 01:48:34.298000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:48:34.298000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:48:34.300385 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:48:34.318248 systemd-udevd[1164]: Using default interface naming scheme 'v252'.
Dec 13 01:48:34.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:34.420000 audit: BPF prog-id=20 op=LOAD
Dec 13 01:48:34.417009 systemd[1]: Started systemd-udevd.service.
Dec 13 01:48:34.422189 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:48:34.450000 audit: BPF prog-id=21 op=LOAD
Dec 13 01:48:34.450000 audit: BPF prog-id=22 op=LOAD
Dec 13 01:48:34.450000 audit: BPF prog-id=23 op=LOAD
Dec 13 01:48:34.452639 systemd[1]: Starting systemd-userdbd.service...
Dec 13 01:48:34.481230 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 01:48:34.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:34.495029 systemd[1]: Started systemd-userdbd.service.
Dec 13 01:48:34.552670 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:48:34.593990 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:48:34.618519 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:48:34.618662 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:48:34.598000 audit[1175]: AVC avc: denied { confidentiality } for pid=1175 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:48:34.627331 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:48:34.627385 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:48:34.627404 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:48:34.635504 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:48:34.638683 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:48:34.674251 systemd-networkd[1176]: lo: Link UP
Dec 13 01:48:34.674267 systemd-networkd[1176]: lo: Gained carrier
Dec 13 01:48:34.674952 systemd-networkd[1176]: Enumeration completed
Dec 13 01:48:34.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:34.675080 systemd[1]: Started systemd-networkd.service.
Dec 13 01:48:34.679941 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 01:48:34.682739 systemd-networkd[1176]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:48:35.331927 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:48:35.331996 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:48:35.332023 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:48:35.340622 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 01:48:34.598000 audit[1175]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b35c299960 a1=f884 a2=7f874c1fbbc5 a3=5 items=12 ppid=1164 pid=1175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:48:34.598000 audit: CWD cwd="/"
Dec 13 01:48:34.598000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=1 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=2 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=3 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=4 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=5 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=6 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=7 name=(null) inode=14752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=8 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=9 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=10 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PATH item=11 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:48:34.598000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 01:48:35.378749 kernel: mlx5_core 2b55:00:02.0 enP11093s1: Link up
Dec 13 01:48:35.390618 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1170)
Dec 13 01:48:35.398623 kernel: hv_netvsc 7c1e5221-3fd8-7c1e-5221-3fd87c1e5221 eth0: Data path switched to VF: enP11093s1
Dec 13 01:48:35.401007 systemd-networkd[1176]: enP11093s1: Link UP
Dec 13 01:48:35.401264 systemd-networkd[1176]: eth0: Link UP
Dec 13 01:48:35.402396 systemd-networkd[1176]: eth0: Gained carrier
Dec 13 01:48:35.414945 systemd-networkd[1176]: enP11093s1: Gained carrier
Dec 13 01:48:35.451798 systemd-networkd[1176]: eth0: DHCPv4 address 10.200.8.23/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:48:35.467681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:48:35.519620 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Dec 13 01:48:35.561014 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 01:48:35.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:35.564895 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 01:48:35.679034 lvm[1241]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:48:35.709832 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 01:48:35.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:35.713521 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:48:35.717898 systemd[1]: Starting lvm2-activation.service...
Dec 13 01:48:35.722869 lvm[1242]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:48:35.749609 systemd[1]: Finished lvm2-activation.service.
Dec 13 01:48:35.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:35.751967 systemd[1]: Reached target local-fs-pre.target.
Dec 13 01:48:35.754099 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:48:35.754131 systemd[1]: Reached target local-fs.target.
Dec 13 01:48:35.756052 systemd[1]: Reached target machines.target.
Dec 13 01:48:35.759214 systemd[1]: Starting ldconfig.service...
Dec 13 01:48:35.761661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:48:35.761764 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:35.762941 systemd[1]: Starting systemd-boot-update.service...
Dec 13 01:48:35.766282 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 01:48:35.769913 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 01:48:35.773725 systemd[1]: Starting systemd-sysext.service...
Dec 13 01:48:35.788140 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1244 (bootctl)
Dec 13 01:48:35.789441 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 01:48:36.323732 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 01:48:36.557178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 01:48:36.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.565462 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 01:48:36.565686 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 01:48:36.630624 kernel: loop0: detected capacity change from 0 to 210664
Dec 13 01:48:36.776627 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:48:36.792722 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 01:48:36.798198 (sd-sysext)[1256]: Using extensions 'kubernetes'.
Dec 13 01:48:36.798659 (sd-sysext)[1256]: Merged extensions into '/usr'.
Dec 13 01:48:36.812261 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:48:36.812873 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 01:48:36.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.816424 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:48:36.818120 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 01:48:36.818589 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:48:36.822215 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:48:36.824353 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:48:36.825659 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:48:36.825828 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:48:36.825969 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:36.826117 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:48:36.827004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:48:36.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.829162 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:48:36.829948 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:48:36.830066 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:48:36.830500 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:48:36.833366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:48:36.833526 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:48:36.833965 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:48:36.838668 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 01:48:36.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:36.839588 systemd[1]: Finished systemd-sysext.service.
Dec 13 01:48:36.841722 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:48:36.848914 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 01:48:36.861822 systemd[1]: Reloading.
Dec 13 01:48:36.867856 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 01:48:36.877010 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:48:36.881798 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:48:36.892418 systemd-fsck[1252]: fsck.fat 4.2 (2021-01-31)
Dec 13 01:48:36.892418 systemd-fsck[1252]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 01:48:36.934514 /usr/lib/systemd/system-generators/torcx-generator[1285]: time="2024-12-13T01:48:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:48:36.949270 /usr/lib/systemd/system-generators/torcx-generator[1285]: time="2024-12-13T01:48:36Z" level=info msg="torcx already run"
Dec 13 01:48:37.011711 systemd-networkd[1176]: eth0: Gained IPv6LL
Dec 13 01:48:37.061832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:48:37.061853 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:48:37.078970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:48:37.142000 audit: BPF prog-id=24 op=LOAD
Dec 13 01:48:37.142000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 01:48:37.142000 audit: BPF prog-id=25 op=LOAD
Dec 13 01:48:37.142000 audit: BPF prog-id=26 op=LOAD
Dec 13 01:48:37.143000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 01:48:37.143000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 01:48:37.143000 audit: BPF prog-id=27 op=LOAD
Dec 13 01:48:37.143000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 01:48:37.143000 audit: BPF prog-id=28 op=LOAD
Dec 13 01:48:37.144000 audit: BPF prog-id=29 op=LOAD
Dec 13 01:48:37.144000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 01:48:37.144000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 01:48:37.145000 audit: BPF prog-id=30 op=LOAD
Dec 13 01:48:37.145000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 01:48:37.145000 audit: BPF prog-id=31 op=LOAD
Dec 13 01:48:37.145000 audit: BPF prog-id=32 op=LOAD
Dec 13 01:48:37.145000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 01:48:37.145000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 01:48:37.150956 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 01:48:37.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.154551 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 01:48:37.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.165159 systemd[1]: Mounting boot.mount...
Dec 13 01:48:37.171276 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:48:37.171644 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.173391 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:48:37.176578 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:48:37.181666 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:48:37.183640 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.183836 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:37.184000 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:48:37.186916 systemd[1]: Mounted boot.mount.
Dec 13 01:48:37.189300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:48:37.189825 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:48:37.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.193200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:48:37.193338 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:48:37.195997 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:48:37.196134 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:48:37.198683 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:48:37.198821 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.201367 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.203144 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:48:37.207252 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:48:37.210897 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:48:37.212927 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.213103 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:37.214129 systemd[1]: Finished systemd-boot-update.service.
Dec 13 01:48:37.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.217344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:48:37.217504 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:48:37.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.220407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:48:37.220569 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:48:37.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.224202 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:48:37.224368 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:48:37.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.233313 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:48:37.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.237307 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.238849 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:48:37.242367 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:48:37.245777 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:48:37.249198 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:48:37.252781 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.252852 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:37.253537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:48:37.253727 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:48:37.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.256336 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:48:37.256506 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:48:37.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.259034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:48:37.259199 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:48:37.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.261994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:48:37.262166 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:48:37.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.264982 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:48:37.265037 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:48:37.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.306498 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 01:48:37.310505 systemd[1]: Starting audit-rules.service...
Dec 13 01:48:37.313961 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 01:48:37.318151 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 01:48:37.321000 audit: BPF prog-id=33 op=LOAD
Dec 13 01:48:37.325000 audit: BPF prog-id=34 op=LOAD
Dec 13 01:48:37.323853 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:48:37.327813 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 01:48:37.331103 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 01:48:37.342201 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 01:48:37.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.344834 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:48:37.372000 audit[1365]: SYSTEM_BOOT pid=1365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.379089 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 01:48:37.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.407496 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 01:48:37.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.442801 systemd-resolved[1363]: Positive Trust Anchors:
Dec 13 01:48:37.443168 systemd-resolved[1363]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:48:37.443299 systemd-resolved[1363]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:48:37.445952 systemd[1]: Started systemd-timesyncd.service.
Dec 13 01:48:37.448698 systemd[1]: Reached target time-set.target.
Dec 13 01:48:37.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:37.453000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 01:48:37.453000 audit[1380]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd65fa7390 a2=420 a3=0 items=0 ppid=1360 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:48:37.453000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 01:48:37.455936 augenrules[1380]: No rules
Dec 13 01:48:37.456155 systemd[1]: Finished audit-rules.service.
Dec 13 01:48:37.465005 systemd-resolved[1363]: Using system hostname 'ci-3510.3.6-a-1addd118d4'.
Dec 13 01:48:37.466550 systemd[1]: Started systemd-resolved.service.
Dec 13 01:48:37.469290 systemd[1]: Reached target network.target.
Dec 13 01:48:37.471407 systemd[1]: Reached target network-online.target.
Dec 13 01:48:37.473864 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:48:37.475095 systemd-timesyncd[1364]: Contacted time server 193.1.8.106:123 (0.flatcar.pool.ntp.org).
Dec 13 01:48:37.475144 systemd-timesyncd[1364]: Initial clock synchronization to Fri 2024-12-13 01:48:37.475421 UTC.
Dec 13 01:48:37.564079 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:48:37.564109 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:48:38.272842 ldconfig[1243]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:48:38.286085 systemd[1]: Finished ldconfig.service.
Dec 13 01:48:38.289636 systemd[1]: Starting systemd-update-done.service...
Dec 13 01:48:38.299546 systemd[1]: Finished systemd-update-done.service.
Dec 13 01:48:38.301897 systemd[1]: Reached target sysinit.target.
Dec 13 01:48:38.303910 systemd[1]: Started motdgen.path.
Dec 13 01:48:38.305673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 01:48:38.308645 systemd[1]: Started logrotate.timer.
Dec 13 01:48:38.310676 systemd[1]: Started mdadm.timer.
Dec 13 01:48:38.312267 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 01:48:38.314459 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:48:38.314504 systemd[1]: Reached target paths.target.
Dec 13 01:48:38.317725 systemd[1]: Reached target timers.target.
Dec 13 01:48:38.320733 systemd[1]: Listening on dbus.socket.
Dec 13 01:48:38.324075 systemd[1]: Starting docker.socket...
Dec 13 01:48:38.328440 systemd[1]: Listening on sshd.socket.
Dec 13 01:48:38.330815 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:38.331333 systemd[1]: Listening on docker.socket.
Dec 13 01:48:38.333524 systemd[1]: Reached target sockets.target.
Dec 13 01:48:38.335772 systemd[1]: Reached target basic.target.
Dec 13 01:48:38.337812 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 01:48:38.337846 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 01:48:38.338866 systemd[1]: Starting containerd.service...
Dec 13 01:48:38.341902 systemd[1]: Starting dbus.service...
Dec 13 01:48:38.344471 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 01:48:38.347953 systemd[1]: Starting extend-filesystems.service...
Dec 13 01:48:38.350831 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 01:48:38.354801 systemd[1]: Starting kubelet.service...
Dec 13 01:48:38.357616 systemd[1]: Starting motdgen.service...
Dec 13 01:48:38.360813 systemd[1]: Started nvidia.service.
Dec 13 01:48:38.364393 systemd[1]: Starting prepare-helm.service...
Dec 13 01:48:38.367823 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 01:48:38.375278 systemd[1]: Starting sshd-keygen.service...
Dec 13 01:48:38.380294 systemd[1]: Starting systemd-logind.service...
Dec 13 01:48:38.382428 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:48:38.382539 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:48:38.383067 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:48:38.383910 systemd[1]: Starting update-engine.service...
Dec 13 01:48:38.389469 jq[1391]: false
Dec 13 01:48:38.391712 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 01:48:38.396709 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:48:38.396976 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 01:48:38.400194 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:48:38.401701 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 01:48:38.408477 jq[1407]: true Dec 13 01:48:38.423292 jq[1412]: true Dec 13 01:48:38.431171 tar[1411]: linux-amd64/helm Dec 13 01:48:38.467017 dbus-daemon[1390]: [system] SELinux support is enabled Dec 13 01:48:38.469080 systemd[1]: Started dbus.service. Dec 13 01:48:38.474390 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:48:38.474439 systemd[1]: Reached target system-config.target. Dec 13 01:48:38.476844 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:48:38.476877 systemd[1]: Reached target user-config.target. Dec 13 01:48:38.486573 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:48:38.486799 systemd[1]: Finished motdgen.service. Dec 13 01:48:38.493819 extend-filesystems[1392]: Found loop1 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda1 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda2 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda3 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found usr Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda4 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda6 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda7 Dec 13 01:48:38.496721 extend-filesystems[1392]: Found sda9 Dec 13 01:48:38.496721 extend-filesystems[1392]: Checking size of /dev/sda9 Dec 13 01:48:38.527631 bash[1442]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:48:38.528395 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:48:38.554780 extend-filesystems[1392]: Old size kept for /dev/sda9 Dec 13 01:48:38.557306 extend-filesystems[1392]: Found sr0 Dec 13 01:48:38.558514 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:48:38.558694 systemd[1]: Finished extend-filesystems.service. Dec 13 01:48:38.602695 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 01:48:38.616222 env[1413]: time="2024-12-13T01:48:38.616148959Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:48:38.673867 systemd-logind[1403]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:48:38.674108 systemd-logind[1403]: New seat seat0. Dec 13 01:48:38.677517 systemd[1]: Started systemd-logind.service. Dec 13 01:48:38.719669 env[1413]: time="2024-12-13T01:48:38.719593681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:48:38.719920 env[1413]: time="2024-12-13T01:48:38.719890989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:38.724500 env[1413]: time="2024-12-13T01:48:38.724451704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:38.724500 env[1413]: time="2024-12-13T01:48:38.724497006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725370 env[1413]: time="2024-12-13T01:48:38.724846614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725370 env[1413]: time="2024-12-13T01:48:38.724880115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725370 env[1413]: time="2024-12-13T01:48:38.724899516Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:48:38.725370 env[1413]: time="2024-12-13T01:48:38.724913616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725370 env[1413]: time="2024-12-13T01:48:38.725015119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725370 env[1413]: time="2024-12-13T01:48:38.725259425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725649 env[1413]: time="2024-12-13T01:48:38.725451030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:38.725649 env[1413]: time="2024-12-13T01:48:38.725476830Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:48:38.725649 env[1413]: time="2024-12-13T01:48:38.725547832Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:48:38.725649 env[1413]: time="2024-12-13T01:48:38.725564333Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:48:38.728504 update_engine[1404]: I1213 01:48:38.726727 1404 main.cc:92] Flatcar Update Engine starting Dec 13 01:48:38.739740 systemd[1]: Started update-engine.service. Dec 13 01:48:38.740215 update_engine[1404]: I1213 01:48:38.740123 1404 update_check_scheduler.cc:74] Next update check in 11m29s Dec 13 01:48:38.744655 systemd[1]: Started locksmithd.service. Dec 13 01:48:38.749443 env[1413]: time="2024-12-13T01:48:38.749406337Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:48:38.749521 env[1413]: time="2024-12-13T01:48:38.749482139Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:48:38.749521 env[1413]: time="2024-12-13T01:48:38.749506840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:48:38.749628 env[1413]: time="2024-12-13T01:48:38.749564341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749628 env[1413]: time="2024-12-13T01:48:38.749586542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749699 env[1413]: time="2024-12-13T01:48:38.749683544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 01:48:38.749745 env[1413]: time="2024-12-13T01:48:38.749717745Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749745 env[1413]: time="2024-12-13T01:48:38.749739445Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749870 env[1413]: time="2024-12-13T01:48:38.749764046Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749870 env[1413]: time="2024-12-13T01:48:38.749795447Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749870 env[1413]: time="2024-12-13T01:48:38.749814847Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.749870 env[1413]: time="2024-12-13T01:48:38.749835048Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:48:38.750010 env[1413]: time="2024-12-13T01:48:38.749985352Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750116155Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750587767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750647668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750668169Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750749871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750825173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750856674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750875574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750896075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750925875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750943576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750962276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751035 env[1413]: time="2024-12-13T01:48:38.750982077Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751169882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751195682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751216383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751246384Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751268984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751284485Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751321886Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:48:38.751537 env[1413]: time="2024-12-13T01:48:38.751367287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:48:38.751842 env[1413]: time="2024-12-13T01:48:38.751697495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:48:38.751842 env[1413]: time="2024-12-13T01:48:38.751788797Z" level=info msg="Connect containerd service" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.751850599Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.752739521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.752861125Z" level=info msg="Start subscribing containerd event" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.752912426Z" level=info msg="Start recovering state" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.752980828Z" level=info msg="Start event monitor" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.752992428Z" level=info msg="Start snapshots syncer" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.753004128Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.753014728Z" level=info msg="Start streaming server" Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.753464440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:48:38.760134 env[1413]: time="2024-12-13T01:48:38.753553542Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:48:38.760823 systemd[1]: Started containerd.service. Dec 13 01:48:38.763954 env[1413]: time="2024-12-13T01:48:38.763833703Z" level=info msg="containerd successfully booted in 0.177183s" Dec 13 01:48:39.611438 tar[1411]: linux-amd64/LICENSE Dec 13 01:48:39.612021 tar[1411]: linux-amd64/README.md Dec 13 01:48:39.627779 systemd[1]: Finished prepare-helm.service. Dec 13 01:48:39.776817 systemd[1]: Started kubelet.service. Dec 13 01:48:39.821767 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:48:40.500974 kubelet[1505]: E1213 01:48:40.500923 1505 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:48:40.503139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:48:40.503292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:48:40.503574 systemd[1]: kubelet.service: Consumed 1.164s CPU time. Dec 13 01:48:40.775208 sshd_keygen[1415]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:48:40.795538 systemd[1]: Finished sshd-keygen.service. Dec 13 01:48:40.800000 systemd[1]: Starting issuegen.service... Dec 13 01:48:40.804378 systemd[1]: Started waagent.service. Dec 13 01:48:40.807658 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:48:40.808003 systemd[1]: Finished issuegen.service. Dec 13 01:48:40.812983 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:48:40.820054 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:48:40.823725 systemd[1]: Started getty@tty1.service. 
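The "failed to load cni during init" error above is expected at this stage: the CRI plugin looks for a network config under /etc/cni/net.d (the NetworkPluginConfDir in the config dump) and nothing has installed one yet. For illustration only, a minimal bridge conflist of the kind that clears the error, expressed via Python's json module; the network name, subnet, and file path are assumptions, not values from this machine:

    import json

    # A CNI plugin (flannel, Calico, ...) normally installs this file;
    # every value here is an example, not taken from the log above.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.244.0.0/24"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    # Written to e.g. /etc/cni/net.d/10-example.conflist, this would satisfy
    # the "no network config found" check on the next syncer pass.
    print(json.dumps(conflist, indent=2))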
Dec 13 01:48:40.827092 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:48:40.829700 systemd[1]: Reached target getty.target. Dec 13 01:48:40.838925 systemd[1]: Reached target multi-user.target. Dec 13 01:48:40.842674 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:48:40.855511 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:48:40.855690 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:48:40.858301 systemd[1]: Startup finished in 625ms (firmware) + 6.394s (loader) + 915ms (kernel) + 8.105s (initrd) + 10.414s (userspace) = 26.455s. Dec 13 01:48:40.933487 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:48:40.935025 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:48:40.949141 systemd[1]: Created slice user-500.slice. Dec 13 01:48:40.950835 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:48:40.953718 systemd-logind[1403]: New session 2 of user core. Dec 13 01:48:40.960942 systemd-logind[1403]: New session 1 of user core. Dec 13 01:48:40.965825 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:48:40.967636 systemd[1]: Starting user@500.service... Dec 13 01:48:40.973929 (systemd)[1529]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:48:41.078413 systemd[1529]: Queued start job for default target default.target. Dec 13 01:48:41.079089 systemd[1529]: Reached target paths.target. Dec 13 01:48:41.079116 systemd[1529]: Reached target sockets.target. Dec 13 01:48:41.079132 systemd[1529]: Reached target timers.target. Dec 13 01:48:41.079146 systemd[1529]: Reached target basic.target. Dec 13 01:48:41.079273 systemd[1]: Started user@500.service. Dec 13 01:48:41.080502 systemd[1]: Started session-1.scope. Dec 13 01:48:41.081364 systemd[1]: Started session-2.scope. Dec 13 01:48:41.082286 systemd[1529]: Reached target default.target. Dec 13 01:48:41.082507 systemd[1529]: Startup finished in 102ms. Dec 13 01:48:42.896290 waagent[1521]: 2024-12-13T01:48:42.896172Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 01:48:42.914376 waagent[1521]: 2024-12-13T01:48:42.903119Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 01:48:42.914376 waagent[1521]: 2024-12-13T01:48:42.904166Z INFO Daemon Daemon Python: 3.9.16 Dec 13 01:48:42.914376 waagent[1521]: 2024-12-13T01:48:42.905275Z INFO Daemon Daemon Run daemon Dec 13 01:48:42.914376 waagent[1521]: 2024-12-13T01:48:42.906547Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 01:48:42.918437 waagent[1521]: 2024-12-13T01:48:42.918316Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
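The kubelet failure above, and the crash loop that follows, comes down to one missing file: /var/lib/kubelet/config.yaml, which kubeadm writes during init/join, so the unit keeps exiting with status 1 until the node is actually joined. As a sketch only, a minimal KubeletConfiguration; YAML is a superset of JSON, so a stdlib dump stands in for the real file, and the field values are assumptions rather than anything recorded here:

    import json

    # Illustrative only: on a kubeadm node the real file is generated,
    # not hand-written.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # matches SystemdCgroup:true in the containerd CRI config above
        "cgroupDriver": "systemd",
        "staticPodPath": "/etc/kubernetes/manifests",
    }
    print(json.dumps(kubelet_config, indent=2))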
Dec 13 01:48:42.933308 waagent[1521]: 2024-12-13T01:48:42.933203Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:48:42.938339 waagent[1521]: 2024-12-13T01:48:42.938270Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:48:42.947908 waagent[1521]: 2024-12-13T01:48:42.938530Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:48:42.947908 waagent[1521]: 2024-12-13T01:48:42.939922Z INFO Daemon Daemon Activate resource disk Dec 13 01:48:42.947908 waagent[1521]: 2024-12-13T01:48:42.940648Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:48:42.948328 waagent[1521]: 2024-12-13T01:48:42.948270Z INFO Daemon Daemon Found device: None Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.948555Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.949365Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.951179Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.952082Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.961612Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.964359Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.965458Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:48:42.978667 waagent[1521]: 2024-12-13T01:48:42.966358Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:48:43.011447 waagent[1521]: 2024-12-13T01:48:43.011300Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:48:43.037066 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:48:43.045096 waagent[1521]: 2024-12-13T01:48:43.044981Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:48:43.048086 waagent[1521]: 2024-12-13T01:48:43.048004Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:48:43.051109 waagent[1521]: 2024-12-13T01:48:43.051040Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:48:43.054424 waagent[1521]: 2024-12-13T01:48:43.054363Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:48:43.057302 waagent[1521]: 2024-12-13T01:48:43.057240Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:48:43.060007 waagent[1521]: 2024-12-13T01:48:43.059950Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:48:43.090679 waagent[1521]: 2024-12-13T01:48:43.090591Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:48:43.094652 waagent[1521]: 2024-12-13T01:48:43.094585Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:48:43.097381 waagent[1521]: 2024-12-13T01:48:43.097322Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:48:43.377613 waagent[1521]: 2024-12-13T01:48:43.377379Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:48:43.389045 waagent[1521]: 2024-12-13T01:48:43.388968Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 01:48:43.394085 waagent[1521]: 2024-12-13T01:48:43.389357Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 01:48:43.459092 waagent[1521]: 2024-12-13T01:48:43.458971Z INFO Daemon Daemon Found private key matching thumbprint 381F23118060580A40607EAF038D167B925DDA13 Dec 13 01:48:43.468717 waagent[1521]: 2024-12-13T01:48:43.459476Z INFO Daemon Daemon Certificate with thumbprint 000D6F95CDB5FE8B5619C3A55FE3DE088755515D has no matching private key. Dec 13 01:48:43.468717 waagent[1521]: 2024-12-13T01:48:43.460527Z INFO Daemon Daemon Fetch goal state completed Dec 13 01:48:43.488060 waagent[1521]: 2024-12-13T01:48:43.487992Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a27db648-7948-470e-8f7b-a21a2c82a384 New eTag: 6891221549785365271] Dec 13 01:48:43.495275 waagent[1521]: 2024-12-13T01:48:43.488730Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 01:48:43.502860 waagent[1521]: 2024-12-13T01:48:43.502801Z INFO Daemon Daemon Starting provisioning Dec 13 01:48:43.509419 waagent[1521]: 2024-12-13T01:48:43.503073Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:48:43.509419 waagent[1521]: 2024-12-13T01:48:43.504184Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-1addd118d4] Dec 13 01:48:43.511159 waagent[1521]: 2024-12-13T01:48:43.511058Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-1addd118d4] Dec 13 01:48:43.518450 waagent[1521]: 2024-12-13T01:48:43.511640Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:48:43.518450 waagent[1521]: 2024-12-13T01:48:43.512479Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:48:43.526606 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 01:48:43.526880 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 01:48:43.526959 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 01:48:43.527338 systemd[1]: Stopping systemd-networkd.service... Dec 13 01:48:43.532652 systemd-networkd[1176]: eth0: DHCPv6 lease lost Dec 13 01:48:43.534088 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:48:43.534288 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:48:43.537004 systemd[1]: Starting systemd-networkd.service... 
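The route test above, like the /proc/net/route dumps the agent prints later, relies on the kernel's hex encoding: each Destination/Gateway field is a little-endian IPv4 address, so 10813FA8 decodes to 168.63.129.16, the Azure WireServer. A short decoder sketching the same check:

    import socket
    import struct

    def decode(hex_addr):
        # /proc/net/route stores IPv4 addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

    with open("/proc/net/route") as routes:
        next(routes)  # skip the header row
        for line in routes:
            iface, dest, gateway = line.split()[:3]
            if decode(dest) == "168.63.129.16":
                print(f"route to WireServer via {iface}, gateway {decode(gateway)}")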
Dec 13 01:48:43.568990 systemd-networkd[1572]: enP11093s1: Link UP Dec 13 01:48:43.569001 systemd-networkd[1572]: enP11093s1: Gained carrier Dec 13 01:48:43.570327 systemd-networkd[1572]: eth0: Link UP Dec 13 01:48:43.570336 systemd-networkd[1572]: eth0: Gained carrier Dec 13 01:48:43.570800 systemd-networkd[1572]: lo: Link UP Dec 13 01:48:43.570809 systemd-networkd[1572]: lo: Gained carrier Dec 13 01:48:43.571120 systemd-networkd[1572]: eth0: Gained IPv6LL Dec 13 01:48:43.571382 systemd-networkd[1572]: Enumeration completed Dec 13 01:48:43.571482 systemd[1]: Started systemd-networkd.service. Dec 13 01:48:43.573664 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:48:43.575877 waagent[1521]: 2024-12-13T01:48:43.573500Z INFO Daemon Daemon Create user account if not exists Dec 13 01:48:43.575877 waagent[1521]: 2024-12-13T01:48:43.574277Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:48:43.575877 waagent[1521]: 2024-12-13T01:48:43.575336Z INFO Daemon Daemon Configure sudoer Dec 13 01:48:43.577193 waagent[1521]: 2024-12-13T01:48:43.577130Z INFO Daemon Daemon Configure sshd Dec 13 01:48:43.578304 waagent[1521]: 2024-12-13T01:48:43.578252Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:48:43.581984 systemd-networkd[1572]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:48:43.622674 systemd-networkd[1572]: eth0: DHCPv4 address 10.200.8.23/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:48:43.625683 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:48:44.692450 waagent[1521]: 2024-12-13T01:48:44.692349Z INFO Daemon Daemon Provisioning complete Dec 13 01:48:44.721675 waagent[1521]: 2024-12-13T01:48:44.721575Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:48:44.724890 waagent[1521]: 2024-12-13T01:48:44.724818Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 01:48:44.730399 waagent[1521]: 2024-12-13T01:48:44.730329Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 01:48:44.994376 waagent[1581]: 2024-12-13T01:48:44.994196Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 01:48:44.995112 waagent[1581]: 2024-12-13T01:48:44.995043Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:48:44.995265 waagent[1581]: 2024-12-13T01:48:44.995207Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:48:45.006830 waagent[1581]: 2024-12-13T01:48:45.006748Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 01:48:45.006992 waagent[1581]: 2024-12-13T01:48:45.006940Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 01:48:45.070509 waagent[1581]: 2024-12-13T01:48:45.070371Z INFO ExtHandler ExtHandler Found private key matching thumbprint 381F23118060580A40607EAF038D167B925DDA13 Dec 13 01:48:45.070801 waagent[1581]: 2024-12-13T01:48:45.070734Z INFO ExtHandler ExtHandler Certificate with thumbprint 000D6F95CDB5FE8B5619C3A55FE3DE088755515D has no matching private key. 
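The thumbprints the agent matches above are uppercase SHA-1 digests of the DER-encoded certificates delivered with the goal state. Reproducing one needs only the standard library; the path below is an assumption (waagent keeps downloaded certificates under /var/lib/waagent):

    import hashlib
    import ssl

    # Path is illustrative; substitute any PEM certificate on disk.
    pem = open("/var/lib/waagent/381F23118060580A40607EAF038D167B925DDA13.crt").read()
    der = ssl.PEM_cert_to_DER_cert(pem)
    print(hashlib.sha1(der).hexdigest().upper())  # prints the thumbprint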
Dec 13 01:48:45.071057 waagent[1581]: 2024-12-13T01:48:45.071002Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 01:48:45.085033 waagent[1581]: 2024-12-13T01:48:45.084965Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: b1b955c4-0eed-4749-9666-1e03fe058b93 New eTag: 6891221549785365271] Dec 13 01:48:45.085570 waagent[1581]: 2024-12-13T01:48:45.085510Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 01:48:45.121369 waagent[1581]: 2024-12-13T01:48:45.121253Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:48:45.132939 waagent[1581]: 2024-12-13T01:48:45.132861Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1581 Dec 13 01:48:45.136362 waagent[1581]: 2024-12-13T01:48:45.136297Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:48:45.137652 waagent[1581]: 2024-12-13T01:48:45.137576Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:48:45.165478 waagent[1581]: 2024-12-13T01:48:45.165407Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:48:45.165872 waagent[1581]: 2024-12-13T01:48:45.165812Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:48:45.174040 waagent[1581]: 2024-12-13T01:48:45.173985Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 01:48:45.174506 waagent[1581]: 2024-12-13T01:48:45.174442Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 01:48:45.175566 waagent[1581]: 2024-12-13T01:48:45.175500Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 01:48:45.176871 waagent[1581]: 2024-12-13T01:48:45.176813Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:48:45.177214 waagent[1581]: 2024-12-13T01:48:45.177160Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:48:45.177996 waagent[1581]: 2024-12-13T01:48:45.177942Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:48:45.178501 waagent[1581]: 2024-12-13T01:48:45.178446Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
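The [Errno 30] above is Flatcar's image layout at work: /lib resolves into the /usr partition, which the kernel command line mounts read-only, so the agent cannot install a unit file there; writable units belong under /etc/systemd/system. A quick check of both locations:

    import os

    for candidate in ("/lib/systemd/system", "/etc/systemd/system"):
        flags = os.statvfs(candidate).f_flag
        state = "read-only" if flags & os.ST_RDONLY else "writable"
        print(f"{candidate}: {state}")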
Dec 13 01:48:45.178815 waagent[1581]: 2024-12-13T01:48:45.178758Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:48:45.178815 waagent[1581]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:48:45.178815 waagent[1581]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:48:45.178815 waagent[1581]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:48:45.178815 waagent[1581]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:48:45.178815 waagent[1581]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:48:45.178815 waagent[1581]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:48:45.181960 waagent[1581]: 2024-12-13T01:48:45.181763Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:48:45.182835 waagent[1581]: 2024-12-13T01:48:45.182775Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:48:45.183014 waagent[1581]: 2024-12-13T01:48:45.182963Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:48:45.183705 waagent[1581]: 2024-12-13T01:48:45.183582Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:48:45.183832 waagent[1581]: 2024-12-13T01:48:45.183778Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:48:45.183976 waagent[1581]: 2024-12-13T01:48:45.183927Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:48:45.184828 waagent[1581]: 2024-12-13T01:48:45.184765Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:48:45.185031 waagent[1581]: 2024-12-13T01:48:45.184983Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:48:45.185612 waagent[1581]: 2024-12-13T01:48:45.185534Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:48:45.185703 waagent[1581]: 2024-12-13T01:48:45.185650Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 01:48:45.186099 waagent[1581]: 2024-12-13T01:48:45.186049Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:48:45.202420 waagent[1581]: 2024-12-13T01:48:45.202333Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 01:48:45.203382 waagent[1581]: 2024-12-13T01:48:45.203325Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 01:48:45.204616 waagent[1581]: 2024-12-13T01:48:45.204540Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Dec 13 01:48:45.208419 waagent[1581]: 2024-12-13T01:48:45.208364Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1572' Dec 13 01:48:45.224332 waagent[1581]: 2024-12-13T01:48:45.224217Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:48:45.224332 waagent[1581]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:48:45.224332 waagent[1581]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:48:45.224332 waagent[1581]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:3f:d8 brd ff:ff:ff:ff:ff:ff Dec 13 01:48:45.224332 waagent[1581]: 3: enP11093s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:3f:d8 brd ff:ff:ff:ff:ff:ff\ altname enP11093p0s2 Dec 13 01:48:45.224332 waagent[1581]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:48:45.224332 waagent[1581]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:48:45.224332 waagent[1581]: 2: eth0 inet 10.200.8.23/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:48:45.224332 waagent[1581]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:48:45.224332 waagent[1581]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 01:48:45.224332 waagent[1581]: 2: eth0 inet6 fe80::7e1e:52ff:fe21:3fd8/64 scope link \ valid_lft forever preferred_lft forever Dec 13 01:48:45.246174 waagent[1581]: 2024-12-13T01:48:45.246073Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
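The "invalid literal for int()" failure above suggests the agent passed the whole MainPID=1572 line, the output format of systemctl show -p MainPID <unit>, straight to int(); 1572 is indeed systemd-networkd's PID in this log. A sketch of a parse that avoids the ValueError (not waagent's actual code):

    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "-p", "MainPID", "systemd-networkd"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()                 # -> "MainPID=1572"
    pid = int(out.split("=", 1)[1])  # int(out) alone would raise ValueError
    print(pid)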
Dec 13 01:48:45.359569 waagent[1581]: 2024-12-13T01:48:45.359453Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Dec 13 01:48:45.365021 waagent[1581]: 2024-12-13T01:48:45.364946Z INFO EnvHandler ExtHandler Firewall rules: Dec 13 01:48:45.365021 waagent[1581]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:48:45.365021 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 01:48:45.365021 waagent[1581]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:48:45.365021 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 01:48:45.365021 waagent[1581]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:48:45.365021 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 01:48:45.365021 waagent[1581]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:48:45.365021 waagent[1581]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:48:45.368212 waagent[1581]: 2024-12-13T01:48:45.368151Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:48:45.562768 waagent[1581]: 2024-12-13T01:48:45.562636Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 01:48:45.734123 waagent[1521]: 2024-12-13T01:48:45.733936Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 01:48:45.740502 waagent[1521]: 2024-12-13T01:48:45.740425Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 01:48:46.782001 waagent[1618]: 2024-12-13T01:48:46.781884Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 01:48:46.782762 waagent[1618]: 2024-12-13T01:48:46.782692Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 01:48:46.782909 waagent[1618]: 2024-12-13T01:48:46.782857Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 01:48:46.783060 waagent[1618]: 2024-12-13T01:48:46.783013Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 01:48:46.792978 waagent[1618]: 2024-12-13T01:48:46.792882Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:48:46.793382 waagent[1618]: 2024-12-13T01:48:46.793326Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:48:46.793547 waagent[1618]: 2024-12-13T01:48:46.793499Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:48:46.805443 waagent[1618]: 2024-12-13T01:48:46.805370Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:48:46.815292 waagent[1618]: 2024-12-13T01:48:46.815233Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:48:46.816201 waagent[1618]: 2024-12-13T01:48:46.816141Z INFO ExtHandler Dec 13 01:48:46.816346 waagent[1618]: 2024-12-13T01:48:46.816297Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1d9ca252-ffc9-45aa-a2f8-3cc48fe98305 eTag: 6891221549785365271 source: Fabric] Dec 13 01:48:46.817041 waagent[1618]: 2024-12-13T01:48:46.816982Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
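The dump above shows the fabric protection waagent installs: root (UID 0) may talk to 168.63.129.16, new connections from other users are dropped, and a later pass adds a DNS-over-TCP (dpt:53) exception. Roughly equivalent iptables invocations, sketching the effect rather than the agent's own code:

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        # allow DNS over TCP to the WireServer for all users
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],
        # allow root-owned traffic
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop new connections from everyone else
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", *rule], check=True)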
Dec 13 01:48:46.818133 waagent[1618]: 2024-12-13T01:48:46.818073Z INFO ExtHandler Dec 13 01:48:46.818267 waagent[1618]: 2024-12-13T01:48:46.818218Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:48:46.824967 waagent[1618]: 2024-12-13T01:48:46.824915Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:48:46.825385 waagent[1618]: 2024-12-13T01:48:46.825338Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 01:48:46.843299 waagent[1618]: 2024-12-13T01:48:46.843232Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Dec 13 01:48:46.909545 waagent[1618]: 2024-12-13T01:48:46.909416Z INFO ExtHandler Downloaded certificate {'thumbprint': '381F23118060580A40607EAF038D167B925DDA13', 'hasPrivateKey': True} Dec 13 01:48:46.910619 waagent[1618]: 2024-12-13T01:48:46.910530Z INFO ExtHandler Downloaded certificate {'thumbprint': '000D6F95CDB5FE8B5619C3A55FE3DE088755515D', 'hasPrivateKey': False} Dec 13 01:48:46.911637 waagent[1618]: 2024-12-13T01:48:46.911564Z INFO ExtHandler Fetch goal state completed Dec 13 01:48:46.932566 waagent[1618]: 2024-12-13T01:48:46.932468Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 01:48:46.943912 waagent[1618]: 2024-12-13T01:48:46.943832Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1618 Dec 13 01:48:46.946929 waagent[1618]: 2024-12-13T01:48:46.946867Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:48:46.947889 waagent[1618]: 2024-12-13T01:48:46.947823Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 01:48:46.948185 waagent[1618]: 2024-12-13T01:48:46.948129Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 01:48:46.950169 waagent[1618]: 2024-12-13T01:48:46.950109Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:48:46.954789 waagent[1618]: 2024-12-13T01:48:46.954736Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:48:46.955147 waagent[1618]: 2024-12-13T01:48:46.955091Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:48:46.963231 waagent[1618]: 2024-12-13T01:48:46.963173Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 01:48:46.963700 waagent[1618]: 2024-12-13T01:48:46.963644Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 01:48:46.975732 waagent[1618]: 2024-12-13T01:48:46.975636Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Dec 13 01:48:46.978415 waagent[1618]: 2024-12-13T01:48:46.978319Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Dec 13 01:48:46.979452 waagent[1618]: 2024-12-13T01:48:46.979383Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 01:48:46.980923 waagent[1618]: 2024-12-13T01:48:46.980861Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:48:46.981334 waagent[1618]: 2024-12-13T01:48:46.981278Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:48:46.981489 waagent[1618]: 2024-12-13T01:48:46.981440Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:48:46.982065 waagent[1618]: 2024-12-13T01:48:46.982007Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 01:48:46.982506 waagent[1618]: 2024-12-13T01:48:46.982450Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:48:46.983058 waagent[1618]: 2024-12-13T01:48:46.983006Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:48:46.983145 waagent[1618]: 2024-12-13T01:48:46.983075Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:48:46.983145 waagent[1618]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:48:46.983145 waagent[1618]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:48:46.983145 waagent[1618]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:48:46.983145 waagent[1618]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:48:46.983145 waagent[1618]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:48:46.983145 waagent[1618]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:48:46.985692 waagent[1618]: 2024-12-13T01:48:46.985415Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:48:46.987540 waagent[1618]: 2024-12-13T01:48:46.987466Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:48:46.987746 waagent[1618]: 2024-12-13T01:48:46.987685Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:48:46.987915 waagent[1618]: 2024-12-13T01:48:46.987848Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:48:46.987915 waagent[1618]: 2024-12-13T01:48:46.987006Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:48:46.988139 waagent[1618]: 2024-12-13T01:48:46.988077Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:48:46.988729 waagent[1618]: 2024-12-13T01:48:46.988670Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:48:46.989065 waagent[1618]: 2024-12-13T01:48:46.989008Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 01:48:46.990289 waagent[1618]: 2024-12-13T01:48:46.990235Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:48:47.005587 waagent[1618]: 2024-12-13T01:48:47.005509Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:48:47.005587 waagent[1618]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:48:47.005587 waagent[1618]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:48:47.005587 waagent[1618]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:3f:d8 brd ff:ff:ff:ff:ff:ff Dec 13 01:48:47.005587 waagent[1618]: 3: enP11093s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:3f:d8 brd ff:ff:ff:ff:ff:ff\ altname enP11093p0s2 Dec 13 01:48:47.005587 waagent[1618]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:48:47.005587 waagent[1618]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:48:47.005587 waagent[1618]: 2: eth0 inet 10.200.8.23/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:48:47.005587 waagent[1618]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:48:47.005587 waagent[1618]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 01:48:47.005587 waagent[1618]: 2: eth0 inet6 fe80::7e1e:52ff:fe21:3fd8/64 scope link \ valid_lft forever preferred_lft forever Dec 13 01:48:47.017048 waagent[1618]: 2024-12-13T01:48:47.016982Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 01:48:47.043977 waagent[1618]: 2024-12-13T01:48:47.043821Z INFO ExtHandler ExtHandler Dec 13 01:48:47.046321 waagent[1618]: 2024-12-13T01:48:47.046253Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 031f5d57-b3af-489f-8e74-7b6e386e674b correlation b5114fd6-8135-4cf3-89f3-35e73de33729 created: 2024-12-13T01:48:04.253609Z] Dec 13 01:48:47.051827 waagent[1618]: 2024-12-13T01:48:47.051773Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:48:47.065000 waagent[1618]: 2024-12-13T01:48:47.064932Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 21 ms] Dec 13 01:48:47.086397 waagent[1618]: 2024-12-13T01:48:47.086326Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:48:47.086397 waagent[1618]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:48:47.086397 waagent[1618]: pkts bytes target prot opt in out source destination Dec 13 01:48:47.086397 waagent[1618]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:48:47.086397 waagent[1618]: pkts bytes target prot opt in out source destination Dec 13 01:48:47.086397 waagent[1618]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:48:47.086397 waagent[1618]: pkts bytes target prot opt in out source destination Dec 13 01:48:47.086397 waagent[1618]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:48:47.086397 waagent[1618]: 181 23015 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:48:47.086397 waagent[1618]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:48:47.091664 waagent[1618]: 2024-12-13T01:48:47.091583Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 01:48:47.107075 waagent[1618]: 2024-12-13T01:48:47.107006Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CD95D02A-FC80-47C2-8489-80E8FECEAADA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 01:48:50.754305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:48:50.754656 systemd[1]: Stopped kubelet.service. Dec 13 01:48:50.754719 systemd[1]: kubelet.service: Consumed 1.164s CPU time. Dec 13 01:48:50.756816 systemd[1]: Starting kubelet.service... Dec 13 01:48:50.843370 systemd[1]: Started kubelet.service. Dec 13 01:48:51.487038 kubelet[1666]: E1213 01:48:51.486987 1666 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:48:51.490324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:48:51.490438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:49:01.741519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:49:01.741893 systemd[1]: Stopped kubelet.service. Dec 13 01:49:01.744079 systemd[1]: Starting kubelet.service... Dec 13 01:49:01.829002 systemd[1]: Started kubelet.service. Dec 13 01:49:02.439570 kubelet[1676]: E1213 01:49:02.439516 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:49:02.441433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:49:02.441594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:49:11.362116 systemd[1]: Created slice system-sshd.slice. Dec 13 01:49:11.364111 systemd[1]: Started sshd@0-10.200.8.23:22-10.200.16.10:59572.service. Dec 13 01:49:12.024852 sshd[1683]: Accepted publickey for core from 10.200.16.10 port 59572 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:49:12.026559 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:49:12.032040 systemd-logind[1403]: New session 3 of user core. Dec 13 01:49:12.032786 systemd[1]: Started session-3.scope. Dec 13 01:49:12.568947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:49:12.569185 systemd[1]: Stopped kubelet.service. Dec 13 01:49:12.571653 systemd[1]: Starting kubelet.service... Dec 13 01:49:12.573787 systemd[1]: Started sshd@1-10.200.8.23:22-10.200.16.10:59578.service. Dec 13 01:49:12.661628 systemd[1]: Started kubelet.service. Dec 13 01:49:13.199692 sshd[1689]: Accepted publickey for core from 10.200.16.10 port 59578 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:49:13.201208 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:49:13.206252 systemd[1]: Started session-4.scope. Dec 13 01:49:13.206883 systemd-logind[1403]: New session 4 of user core. 
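The "restart counter is at N" lines are systemd's Restart= machinery re-launching kubelet.service about every ten seconds; the unit will keep failing until config.yaml appears. The counter is exposed as the NRestarts property (systemd 235 and later):

    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "kubelet", "-p", "NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "NRestarts=2"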
Dec 13 01:49:13.262260 kubelet[1694]: E1213 01:49:13.262207 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:49:13.264124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:49:13.264328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:49:13.648218 sshd[1689]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:13.652150 systemd[1]: sshd@1-10.200.8.23:22-10.200.16.10:59578.service: Deactivated successfully. Dec 13 01:49:13.653227 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:49:13.654029 systemd-logind[1403]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:49:13.654980 systemd-logind[1403]: Removed session 4. Dec 13 01:49:13.754068 systemd[1]: Started sshd@2-10.200.8.23:22-10.200.16.10:59582.service. Dec 13 01:49:14.377346 sshd[1705]: Accepted publickey for core from 10.200.16.10 port 59582 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:49:14.379101 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:49:14.383702 systemd-logind[1403]: New session 5 of user core. Dec 13 01:49:14.384074 systemd[1]: Started session-5.scope. Dec 13 01:49:14.818485 sshd[1705]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:14.821743 systemd[1]: sshd@2-10.200.8.23:22-10.200.16.10:59582.service: Deactivated successfully. Dec 13 01:49:14.822816 systemd-logind[1403]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:49:14.822914 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:49:14.824021 systemd-logind[1403]: Removed session 5. Dec 13 01:49:14.923183 systemd[1]: Started sshd@3-10.200.8.23:22-10.200.16.10:59598.service. Dec 13 01:49:15.546930 sshd[1711]: Accepted publickey for core from 10.200.16.10 port 59598 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:49:15.548718 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:49:15.554211 systemd[1]: Started session-6.scope. Dec 13 01:49:15.554695 systemd-logind[1403]: New session 6 of user core. Dec 13 01:49:15.994175 sshd[1711]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:15.997582 systemd[1]: sshd@3-10.200.8.23:22-10.200.16.10:59598.service: Deactivated successfully. Dec 13 01:49:15.998690 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:49:15.999466 systemd-logind[1403]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:49:16.000639 systemd-logind[1403]: Removed session 6. Dec 13 01:49:16.098533 systemd[1]: Started sshd@4-10.200.8.23:22-10.200.16.10:59608.service. Dec 13 01:49:16.724573 sshd[1717]: Accepted publickey for core from 10.200.16.10 port 59608 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:49:16.726293 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:49:16.731238 systemd[1]: Started session-7.scope. Dec 13 01:49:16.731855 systemd-logind[1403]: New session 7 of user core. 
Dec 13 01:49:17.121247 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:49:17.121561 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:49:17.147893 systemd[1]: Starting docker.service... Dec 13 01:49:17.188327 env[1730]: time="2024-12-13T01:49:17.188285522Z" level=info msg="Starting up" Dec 13 01:49:17.189861 env[1730]: time="2024-12-13T01:49:17.189826525Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:49:17.189861 env[1730]: time="2024-12-13T01:49:17.189846625Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:49:17.190039 env[1730]: time="2024-12-13T01:49:17.189882825Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:49:17.190039 env[1730]: time="2024-12-13T01:49:17.189898725Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:49:17.192066 env[1730]: time="2024-12-13T01:49:17.191942630Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:49:17.192066 env[1730]: time="2024-12-13T01:49:17.191963430Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:49:17.192066 env[1730]: time="2024-12-13T01:49:17.191984230Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:49:17.192066 env[1730]: time="2024-12-13T01:49:17.191995130Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:49:17.240082 env[1730]: time="2024-12-13T01:49:17.240043828Z" level=info msg="Loading containers: start." Dec 13 01:49:17.333804 kernel: Initializing XFRM netlink socket Dec 13 01:49:17.348170 env[1730]: time="2024-12-13T01:49:17.348129249Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 01:49:17.409359 systemd-networkd[1572]: docker0: Link UP Dec 13 01:49:17.430177 env[1730]: time="2024-12-13T01:49:17.430142817Z" level=info msg="Loading containers: done." Dec 13 01:49:17.445974 env[1730]: time="2024-12-13T01:49:17.445928249Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:49:17.446167 env[1730]: time="2024-12-13T01:49:17.446126650Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 01:49:17.446262 env[1730]: time="2024-12-13T01:49:17.446238550Z" level=info msg="Daemon has completed initialization" Dec 13 01:49:17.518185 systemd[1]: Started docker.service. Dec 13 01:49:17.528117 env[1730]: time="2024-12-13T01:49:17.528054417Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:49:23.340961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:49:23.341230 systemd[1]: Stopped kubelet.service. Dec 13 01:49:23.343134 systemd[1]: Starting kubelet.service... Dec 13 01:49:23.457630 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 01:49:23.471258 systemd[1]: Started kubelet.service. 
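The daemon's note that docker0 claimed 172.17.0.0/16 (the --bip option overrides it) can be confirmed from the bridge network's IPAM data, assuming the docker CLI is on the PATH:

    import json
    import subprocess

    out = subprocess.run(
        ["docker", "network", "inspect", "bridge"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)[0]["IPAM"]["Config"])
    # e.g. [{'Subnet': '172.17.0.0/16', 'Gateway': '172.17.0.1'}]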
Dec 13 01:49:23.978186 kubelet[1854]: E1213 01:49:23.978129 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:49:23.979900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:49:23.980021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:49:24.426900 update_engine[1404]: I1213 01:49:24.426836 1404 update_attempter.cc:509] Updating boot flags... Dec 13 01:49:26.922554 env[1413]: time="2024-12-13T01:49:26.922490219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:49:27.513579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088237943.mount: Deactivated successfully. Dec 13 01:49:29.518621 env[1413]: time="2024-12-13T01:49:29.518543876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:29.524548 env[1413]: time="2024-12-13T01:49:29.524448881Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:29.528684 env[1413]: time="2024-12-13T01:49:29.528552485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:29.533685 env[1413]: time="2024-12-13T01:49:29.533651090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:29.534354 env[1413]: time="2024-12-13T01:49:29.534319790Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:49:29.544542 env[1413]: time="2024-12-13T01:49:29.544512000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:49:31.401432 env[1413]: time="2024-12-13T01:49:31.401355246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:31.405447 env[1413]: time="2024-12-13T01:49:31.405347949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:31.409811 env[1413]: time="2024-12-13T01:49:31.409720753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:31.413031 env[1413]: time="2024-12-13T01:49:31.412945556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
01:49:31.413855 env[1413]: time="2024-12-13T01:49:31.413822656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:49:31.424343 env[1413]: time="2024-12-13T01:49:31.424317465Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:49:32.667239 env[1413]: time="2024-12-13T01:49:32.667139161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:32.672320 env[1413]: time="2024-12-13T01:49:32.672233365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:32.676663 env[1413]: time="2024-12-13T01:49:32.676630568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:32.679987 env[1413]: time="2024-12-13T01:49:32.679955271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:32.680860 env[1413]: time="2024-12-13T01:49:32.680824971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:49:32.690955 env[1413]: time="2024-12-13T01:49:32.690926879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:49:33.905720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054768104.mount: Deactivated successfully. Dec 13 01:49:34.090997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:49:34.091262 systemd[1]: Stopped kubelet.service. Dec 13 01:49:34.093316 systemd[1]: Starting kubelet.service... Dec 13 01:49:34.210826 systemd[1]: Started kubelet.service. Dec 13 01:49:34.772675 kubelet[1947]: E1213 01:49:34.772619 1947 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:49:34.774374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:49:34.774495 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
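Each PullImage round trip above resolves a mutable tag to an immutable digest ("returns image reference \"sha256:...\""). With containerd's CRI socket at /run/containerd/containerd.sock, the same pull can be reproduced out-of-band with crictl; this is an illustration, not what kubelet itself runs:

    import subprocess

    # crictl ships separately from containerd (the cri-tools project)
    subprocess.run(["crictl", "pull", "registry.k8s.io/kube-scheduler:v1.30.8"],
                   check=True)
    # lists IMAGE, TAG and the digest-derived IMAGE ID for each image
    subprocess.run(["crictl", "images"], check=True)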
Dec 13 01:49:34.840969 env[1413]: time="2024-12-13T01:49:34.840897622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:34.846635 env[1413]: time="2024-12-13T01:49:34.846557326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:34.850195 env[1413]: time="2024-12-13T01:49:34.850160628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:34.853511 env[1413]: time="2024-12-13T01:49:34.853431231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:34.854561 env[1413]: time="2024-12-13T01:49:34.854530431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:49:34.864962 env[1413]: time="2024-12-13T01:49:34.864931738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:49:35.393572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536892161.mount: Deactivated successfully. Dec 13 01:49:36.632882 env[1413]: time="2024-12-13T01:49:36.632821451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:36.642620 env[1413]: time="2024-12-13T01:49:36.642567557Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:36.646875 env[1413]: time="2024-12-13T01:49:36.646846259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:36.650998 env[1413]: time="2024-12-13T01:49:36.650967462Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:49:36.651616 env[1413]: time="2024-12-13T01:49:36.651570362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:49:36.662411 env[1413]: time="2024-12-13T01:49:36.662374868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:49:37.809509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974081611.mount: Deactivated successfully. 
Dec 13 01:49:37.825305 env[1413]: time="2024-12-13T01:49:37.825254235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:37.831390 env[1413]: time="2024-12-13T01:49:37.831343939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:37.834380 env[1413]: time="2024-12-13T01:49:37.834343740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:37.838402 env[1413]: time="2024-12-13T01:49:37.838375543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:37.838916 env[1413]: time="2024-12-13T01:49:37.838883943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:49:37.849008 env[1413]: time="2024-12-13T01:49:37.848972349Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 01:49:38.463840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350624041.mount: Deactivated successfully.
Dec 13 01:49:42.115165 env[1413]: time="2024-12-13T01:49:42.115051341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:42.121365 env[1413]: time="2024-12-13T01:49:42.121267998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:42.125007 env[1413]: time="2024-12-13T01:49:42.124978712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:42.128172 env[1413]: time="2024-12-13T01:49:42.128073889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:42.129156 env[1413]: time="2024-12-13T01:49:42.129124050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 01:49:44.841011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 01:49:44.841269 systemd[1]: Stopped kubelet.service.
Dec 13 01:49:44.843282 systemd[1]: Starting kubelet.service...
Dec 13 01:49:45.261532 systemd[1]: Started kubelet.service.
Dec 13 01:49:45.549737 systemd[1]: Stopping kubelet.service...
Dec 13 01:49:46.206706 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:49:46.207046 systemd[1]: Stopped kubelet.service.
Dec 13 01:49:46.210518 systemd[1]: Starting kubelet.service...
Dec 13 01:49:46.229077 systemd[1]: Reloading.
Dec 13 01:49:46.311173 /usr/lib/systemd/system-generators/torcx-generator[2067]: time="2024-12-13T01:49:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:49:46.311212 /usr/lib/systemd/system-generators/torcx-generator[2067]: time="2024-12-13T01:49:46Z" level=info msg="torcx already run"
Dec 13 01:49:46.417109 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:49:46.417128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:49:46.436070 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:49:46.562333 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:49:46.562425 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:49:46.562682 systemd[1]: Stopped kubelet.service.
Dec 13 01:49:46.564628 systemd[1]: Starting kubelet.service...
Dec 13 01:49:46.838126 systemd[1]: Started kubelet.service.
Dec 13 01:49:47.447501 kubelet[2134]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:49:47.447887 kubelet[2134]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:49:47.447887 kubelet[2134]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:49:47.447996 kubelet[2134]: I1213 01:49:47.447956 2134 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:49:47.815725 kubelet[2134]: I1213 01:49:47.815214 2134 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 01:49:47.815725 kubelet[2134]: I1213 01:49:47.815246 2134 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:49:47.815725 kubelet[2134]: I1213 01:49:47.815522 2134 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 01:49:47.828420 kubelet[2134]: I1213 01:49:47.828320 2134 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:49:47.829366 kubelet[2134]: E1213 01:49:47.829341 2134 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.839998 kubelet[2134]: I1213 01:49:47.839965 2134 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:49:47.841508 kubelet[2134]: I1213 01:49:47.841460 2134 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:49:47.841718 kubelet[2134]: I1213 01:49:47.841507 2134 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-1addd118d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:49:47.842206 kubelet[2134]: I1213 01:49:47.842182 2134 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:49:47.842276 kubelet[2134]: I1213 01:49:47.842212 2134 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:49:47.842355 kubelet[2134]: I1213 01:49:47.842338 2134 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:49:47.843254 kubelet[2134]: I1213 01:49:47.843235 2134 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 01:49:47.843254 kubelet[2134]: I1213 01:49:47.843257 2134 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:49:47.843386 kubelet[2134]: I1213 01:49:47.843285 2134 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:49:47.843386 kubelet[2134]: I1213 01:49:47.843307 2134 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:49:47.853477 kubelet[2134]: I1213 01:49:47.853456 2134 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 01:49:47.855284 kubelet[2134]: I1213 01:49:47.855261 2134 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:49:47.855377 kubelet[2134]: W1213 01:49:47.855330 2134 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:49:47.856551 kubelet[2134]: I1213 01:49:47.856525 2134 server.go:1264] "Started kubelet"
Dec 13 01:49:47.857436 kubelet[2134]: W1213 01:49:47.856865 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.857436 kubelet[2134]: E1213 01:49:47.856936 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.857436 kubelet[2134]: W1213 01:49:47.857029 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-1addd118d4&limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.857436 kubelet[2134]: E1213 01:49:47.857071 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-1addd118d4&limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.860228 kubelet[2134]: I1213 01:49:47.860151 2134 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:49:47.860504 kubelet[2134]: I1213 01:49:47.860483 2134 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:49:47.866782 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 01:49:47.866912 kubelet[2134]: I1213 01:49:47.866894 2134 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:49:47.871158 kubelet[2134]: I1213 01:49:47.870744 2134 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:49:47.871884 kubelet[2134]: I1213 01:49:47.871861 2134 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 01:49:47.874207 kubelet[2134]: I1213 01:49:47.874179 2134 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:49:47.879171 kubelet[2134]: E1213 01:49:47.878031 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-1addd118d4?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="200ms"
Dec 13 01:49:47.879171 kubelet[2134]: E1213 01:49:47.878143 2134 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-1addd118d4.18109975d252af5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-1addd118d4,UID:ci-3510.3.6-a-1addd118d4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-1addd118d4,},FirstTimestamp:2024-12-13 01:49:47.856498525 +0000 UTC m=+1.013116774,LastTimestamp:2024-12-13 01:49:47.856498525 +0000 UTC m=+1.013116774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-1addd118d4,}"
Dec 13 01:49:47.879171 kubelet[2134]: I1213 01:49:47.878566 2134 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 01:49:47.879171 kubelet[2134]: W1213 01:49:47.878894 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.879171 kubelet[2134]: E1213 01:49:47.878942 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.879171 kubelet[2134]: I1213 01:49:47.879005 2134 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:49:47.879511 kubelet[2134]: I1213 01:49:47.879461 2134 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:49:47.879563 kubelet[2134]: I1213 01:49:47.879546 2134 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:49:47.881788 kubelet[2134]: E1213 01:49:47.881772 2134 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:49:47.882154 kubelet[2134]: I1213 01:49:47.882136 2134 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:49:47.898143 kubelet[2134]: I1213 01:49:47.898110 2134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:49:47.899356 kubelet[2134]: I1213 01:49:47.899330 2134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:49:47.899440 kubelet[2134]: I1213 01:49:47.899362 2134 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:49:47.899440 kubelet[2134]: I1213 01:49:47.899379 2134 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 01:49:47.899440 kubelet[2134]: E1213 01:49:47.899419 2134 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:49:47.907904 kubelet[2134]: W1213 01:49:47.907863 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.907995 kubelet[2134]: E1213 01:49:47.907915 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:47.945129 kubelet[2134]: I1213 01:49:47.945098 2134 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:49:47.945129 kubelet[2134]: I1213 01:49:47.945114 2134 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:49:47.945308 kubelet[2134]: I1213 01:49:47.945147 2134 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:49:47.949499 kubelet[2134]: I1213 01:49:47.949434 2134 policy_none.go:49] "None policy: Start"
Dec 13 01:49:47.950346 kubelet[2134]: I1213 01:49:47.950319 2134 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:49:47.950346 kubelet[2134]: I1213 01:49:47.950344 2134 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:49:47.960439 systemd[1]: Created slice kubepods.slice.
Dec 13 01:49:47.964959 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 01:49:47.967811 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 01:49:47.974198 kubelet[2134]: I1213 01:49:47.974177 2134 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:49:47.974358 kubelet[2134]: I1213 01:49:47.974324 2134 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:49:47.976519 kubelet[2134]: I1213 01:49:47.975789 2134 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:49:47.977364 kubelet[2134]: I1213 01:49:47.977340 2134 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:47.977971 kubelet[2134]: E1213 01:49:47.977751 2134 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:47.977971 kubelet[2134]: E1213 01:49:47.977954 2134 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-1addd118d4\" not found"
Dec 13 01:49:48.000324 kubelet[2134]: I1213 01:49:48.000222 2134 topology_manager.go:215] "Topology Admit Handler" podUID="ab2651504d7696700cd2e6e4fc7a9ce7" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.002352 kubelet[2134]: I1213 01:49:48.002315 2134 topology_manager.go:215] "Topology Admit Handler" podUID="e0642ec4b5491c779d60657dac104585" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.003898 kubelet[2134]: I1213 01:49:48.003871 2134 topology_manager.go:215] "Topology Admit Handler" podUID="3446de64db40f2fb517fdfe3c0a21663" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.010500 systemd[1]: Created slice kubepods-burstable-podab2651504d7696700cd2e6e4fc7a9ce7.slice.
Dec 13 01:49:48.018451 systemd[1]: Created slice kubepods-burstable-pode0642ec4b5491c779d60657dac104585.slice.
Dec 13 01:49:48.022486 systemd[1]: Created slice kubepods-burstable-pod3446de64db40f2fb517fdfe3c0a21663.slice.
Dec 13 01:49:48.079413 kubelet[2134]: E1213 01:49:48.079271 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-1addd118d4?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="400ms"
Dec 13 01:49:48.080999 kubelet[2134]: I1213 01:49:48.080953 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2651504d7696700cd2e6e4fc7a9ce7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" (UID: \"ab2651504d7696700cd2e6e4fc7a9ce7\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.081220 kubelet[2134]: I1213 01:49:48.081199 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.081394 kubelet[2134]: I1213 01:49:48.081369 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.081557 kubelet[2134]: I1213 01:49:48.081528 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.081768 kubelet[2134]: I1213 01:49:48.081714 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.081936 kubelet[2134]: I1213 01:49:48.081919 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3446de64db40f2fb517fdfe3c0a21663-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-1addd118d4\" (UID: \"3446de64db40f2fb517fdfe3c0a21663\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.082080 kubelet[2134]: I1213 01:49:48.082064 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2651504d7696700cd2e6e4fc7a9ce7-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" (UID: \"ab2651504d7696700cd2e6e4fc7a9ce7\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.082230 kubelet[2134]: I1213 01:49:48.082213 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2651504d7696700cd2e6e4fc7a9ce7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" (UID: \"ab2651504d7696700cd2e6e4fc7a9ce7\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.082384 kubelet[2134]: I1213 01:49:48.082367 2134 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.180017 kubelet[2134]: I1213 01:49:48.179982 2134 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.180424 kubelet[2134]: E1213 01:49:48.180393 2134 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.318764 env[1413]: time="2024-12-13T01:49:48.318714898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-1addd118d4,Uid:ab2651504d7696700cd2e6e4fc7a9ce7,Namespace:kube-system,Attempt:0,}"
Dec 13 01:49:48.322453 env[1413]: time="2024-12-13T01:49:48.322411878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-1addd118d4,Uid:e0642ec4b5491c779d60657dac104585,Namespace:kube-system,Attempt:0,}"
Dec 13 01:49:48.326036 env[1413]: time="2024-12-13T01:49:48.325974851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-1addd118d4,Uid:3446de64db40f2fb517fdfe3c0a21663,Namespace:kube-system,Attempt:0,}"
Dec 13 01:49:48.479972 kubelet[2134]: E1213 01:49:48.479916 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-1addd118d4?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="800ms"
Dec 13 01:49:48.582414 kubelet[2134]: I1213 01:49:48.582381 2134 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.582837 kubelet[2134]: E1213 01:49:48.582799 2134 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:48.746498 kubelet[2134]: W1213 01:49:48.746388 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:48.746498 kubelet[2134]: E1213 01:49:48.746432 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:48.785652 kubelet[2134]: W1213 01:49:48.785563 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-1addd118d4&limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:48.785810 kubelet[2134]: E1213 01:49:48.785659 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-1addd118d4&limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:48.966724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989995110.mount: Deactivated successfully.
Dec 13 01:49:48.992192 env[1413]: time="2024-12-13T01:49:48.992141844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:48.995675 env[1413]: time="2024-12-13T01:49:48.995637914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.006298 env[1413]: time="2024-12-13T01:49:49.006206221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.010101 env[1413]: time="2024-12-13T01:49:49.010065904Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.014620 env[1413]: time="2024-12-13T01:49:49.014571817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.018136 env[1413]: time="2024-12-13T01:49:49.018102884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.023431 env[1413]: time="2024-12-13T01:49:49.023399235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.026822 env[1413]: time="2024-12-13T01:49:49.026791895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.032538 env[1413]: time="2024-12-13T01:49:49.032505165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.037181 env[1413]: time="2024-12-13T01:49:49.037148485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.049683 env[1413]: time="2024-12-13T01:49:49.049652377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.059191 env[1413]: time="2024-12-13T01:49:49.059161426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:49:49.064537 kubelet[2134]: W1213 01:49:49.064474 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:49.064642 kubelet[2134]: E1213 01:49:49.064553 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:49.107691 env[1413]: time="2024-12-13T01:49:49.107628619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:49:49.107691 env[1413]: time="2024-12-13T01:49:49.107670921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:49:49.107945 env[1413]: time="2024-12-13T01:49:49.107849530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:49:49.108724 env[1413]: time="2024-12-13T01:49:49.108669768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97f9da6f438bb59fecb0a779b17927f4a467da8303979c547a58f5103665aa30 pid=2173 runtime=io.containerd.runc.v2
Dec 13 01:49:49.118943 env[1413]: time="2024-12-13T01:49:49.118873451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:49:49.118943 env[1413]: time="2024-12-13T01:49:49.118922953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:49:49.120088 env[1413]: time="2024-12-13T01:49:49.119026158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:49:49.120088 env[1413]: time="2024-12-13T01:49:49.119211367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/569668d7e46297601b132356470d59435d519fc23b2886b9585b3b68fe31e857 pid=2191 runtime=io.containerd.runc.v2
Dec 13 01:49:49.130915 systemd[1]: Started cri-containerd-97f9da6f438bb59fecb0a779b17927f4a467da8303979c547a58f5103665aa30.scope.
Dec 13 01:49:49.165632 env[1413]: time="2024-12-13T01:49:49.161618273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:49:49.165632 env[1413]: time="2024-12-13T01:49:49.161705277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:49:49.165632 env[1413]: time="2024-12-13T01:49:49.161734779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:49:49.165632 env[1413]: time="2024-12-13T01:49:49.161961189Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d5fedd6190799fcfd1e4852285a7b41be5fc83d2d4242bf1778220041077cf9 pid=2228 runtime=io.containerd.runc.v2
Dec 13 01:49:49.168089 systemd[1]: Started cri-containerd-569668d7e46297601b132356470d59435d519fc23b2886b9585b3b68fe31e857.scope.
Dec 13 01:49:49.176260 systemd[1]: Started cri-containerd-7d5fedd6190799fcfd1e4852285a7b41be5fc83d2d4242bf1778220041077cf9.scope.
Dec 13 01:49:49.231123 env[1413]: time="2024-12-13T01:49:49.231071259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-1addd118d4,Uid:ab2651504d7696700cd2e6e4fc7a9ce7,Namespace:kube-system,Attempt:0,} returns sandbox id \"97f9da6f438bb59fecb0a779b17927f4a467da8303979c547a58f5103665aa30\""
Dec 13 01:49:49.237447 env[1413]: time="2024-12-13T01:49:49.237377257Z" level=info msg="CreateContainer within sandbox \"97f9da6f438bb59fecb0a779b17927f4a467da8303979c547a58f5103665aa30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:49:49.252836 env[1413]: time="2024-12-13T01:49:49.252796386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-1addd118d4,Uid:3446de64db40f2fb517fdfe3c0a21663,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d5fedd6190799fcfd1e4852285a7b41be5fc83d2d4242bf1778220041077cf9\""
Dec 13 01:49:49.260119 env[1413]: time="2024-12-13T01:49:49.260022128Z" level=info msg="CreateContainer within sandbox \"7d5fedd6190799fcfd1e4852285a7b41be5fc83d2d4242bf1778220041077cf9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:49:49.276149 env[1413]: time="2024-12-13T01:49:49.276115189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-1addd118d4,Uid:e0642ec4b5491c779d60657dac104585,Namespace:kube-system,Attempt:0,} returns sandbox id \"569668d7e46297601b132356470d59435d519fc23b2886b9585b3b68fe31e857\""
Dec 13 01:49:49.278968 env[1413]: time="2024-12-13T01:49:49.278923922Z" level=info msg="CreateContainer within sandbox \"569668d7e46297601b132356470d59435d519fc23b2886b9585b3b68fe31e857\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:49:49.281340 kubelet[2134]: E1213 01:49:49.281295 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-1addd118d4?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="1.6s"
Dec 13 01:49:49.288156 env[1413]: time="2024-12-13T01:49:49.288124458Z" level=info msg="CreateContainer within sandbox \"97f9da6f438bb59fecb0a779b17927f4a467da8303979c547a58f5103665aa30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ffea3fdfe63c228cd5de4c0e477e358cb07eb3359a8354eead64d071a620199\""
Dec 13 01:49:49.288940 env[1413]: time="2024-12-13T01:49:49.288909295Z" level=info msg="StartContainer for \"5ffea3fdfe63c228cd5de4c0e477e358cb07eb3359a8354eead64d071a620199\""
Dec 13 01:49:49.308011 systemd[1]: Started cri-containerd-5ffea3fdfe63c228cd5de4c0e477e358cb07eb3359a8354eead64d071a620199.scope.
Dec 13 01:49:49.313266 kubelet[2134]: W1213 01:49:49.313179 2134 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:49.313400 kubelet[2134]: E1213 01:49:49.313272 2134 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.23:6443: connect: connection refused
Dec 13 01:49:49.327224 env[1413]: time="2024-12-13T01:49:49.327168705Z" level=info msg="CreateContainer within sandbox \"7d5fedd6190799fcfd1e4852285a7b41be5fc83d2d4242bf1778220041077cf9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9f1d64c17d3e6bf863fd9a925170f7272f96c5a204d274708d055c7924f959a\""
Dec 13 01:49:49.327803 env[1413]: time="2024-12-13T01:49:49.327768733Z" level=info msg="StartContainer for \"f9f1d64c17d3e6bf863fd9a925170f7272f96c5a204d274708d055c7924f959a\""
Dec 13 01:49:49.338691 env[1413]: time="2024-12-13T01:49:49.338645848Z" level=info msg="CreateContainer within sandbox \"569668d7e46297601b132356470d59435d519fc23b2886b9585b3b68fe31e857\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"859414255827968abec87229bd77bdb64fb80cbf9ca0531ac0534bfe804cd631\""
Dec 13 01:49:49.339300 env[1413]: time="2024-12-13T01:49:49.339253576Z" level=info msg="StartContainer for \"859414255827968abec87229bd77bdb64fb80cbf9ca0531ac0534bfe804cd631\""
Dec 13 01:49:49.358686 systemd[1]: Started cri-containerd-f9f1d64c17d3e6bf863fd9a925170f7272f96c5a204d274708d055c7924f959a.scope.
Dec 13 01:49:49.379398 systemd[1]: Started cri-containerd-859414255827968abec87229bd77bdb64fb80cbf9ca0531ac0534bfe804cd631.scope.
Dec 13 01:49:49.389668 kubelet[2134]: I1213 01:49:49.389594 2134 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:49.391620 kubelet[2134]: E1213 01:49:49.390110 2134 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:49.404066 env[1413]: time="2024-12-13T01:49:49.403906535Z" level=info msg="StartContainer for \"5ffea3fdfe63c228cd5de4c0e477e358cb07eb3359a8354eead64d071a620199\" returns successfully"
Dec 13 01:49:49.482257 env[1413]: time="2024-12-13T01:49:49.482197238Z" level=info msg="StartContainer for \"859414255827968abec87229bd77bdb64fb80cbf9ca0531ac0534bfe804cd631\" returns successfully"
Dec 13 01:49:49.497081 env[1413]: time="2024-12-13T01:49:49.497030740Z" level=info msg="StartContainer for \"f9f1d64c17d3e6bf863fd9a925170f7272f96c5a204d274708d055c7924f959a\" returns successfully"
Dec 13 01:49:50.992875 kubelet[2134]: I1213 01:49:50.992842 2134 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:51.514784 kubelet[2134]: E1213 01:49:51.514736 2134 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-1addd118d4\" not found" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:51.655370 kubelet[2134]: I1213 01:49:51.655329 2134 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-1addd118d4"
Dec 13 01:49:51.852224 kubelet[2134]: I1213 01:49:51.852187 2134 apiserver.go:52] "Watching apiserver"
Dec 13 01:49:51.879773 kubelet[2134]: I1213 01:49:51.879696 2134 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 01:49:53.857084 systemd[1]: Reloading.
Dec 13 01:49:53.944261 /usr/lib/systemd/system-generators/torcx-generator[2424]: time="2024-12-13T01:49:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:49:53.944299 /usr/lib/systemd/system-generators/torcx-generator[2424]: time="2024-12-13T01:49:53Z" level=info msg="torcx already run"
Dec 13 01:49:54.049384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:49:54.049406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:49:54.066165 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:49:54.177324 kubelet[2134]: E1213 01:49:54.176905 2134 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510.3.6-a-1addd118d4.18109975d252af5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-1addd118d4,UID:ci-3510.3.6-a-1addd118d4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-1addd118d4,},FirstTimestamp:2024-12-13 01:49:47.856498525 +0000 UTC m=+1.013116774,LastTimestamp:2024-12-13 01:49:47.856498525 +0000 UTC m=+1.013116774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-1addd118d4,}" Dec 13 01:49:54.177087 systemd[1]: Stopping kubelet.service... Dec 13 01:49:54.191958 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:49:54.192188 systemd[1]: Stopped kubelet.service. Dec 13 01:49:54.194207 systemd[1]: Starting kubelet.service... Dec 13 01:49:54.281586 systemd[1]: Started kubelet.service. Dec 13 01:49:54.347865 kubelet[2490]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:49:54.347865 kubelet[2490]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:49:54.347865 kubelet[2490]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:49:54.347865 kubelet[2490]: I1213 01:49:54.347764 2490 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:49:54.353133 kubelet[2490]: I1213 01:49:54.353104 2490 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:49:54.353133 kubelet[2490]: I1213 01:49:54.353126 2490 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:49:54.353366 kubelet[2490]: I1213 01:49:54.353346 2490 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:49:54.354489 kubelet[2490]: I1213 01:49:54.354459 2490 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:49:54.355611 kubelet[2490]: I1213 01:49:54.355583 2490 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:49:54.362255 kubelet[2490]: I1213 01:49:54.362237 2490 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:49:54.362636 kubelet[2490]: I1213 01:49:54.362567 2490 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:49:54.362866 kubelet[2490]: I1213 01:49:54.362712 2490 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-1addd118d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:49:54.363018 kubelet[2490]: I1213 01:49:54.363008 2490 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:49:54.363085 kubelet[2490]: I1213 01:49:54.363077 2490 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:49:54.363177 kubelet[2490]: I1213 01:49:54.363169 2490 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:49:54.363323 kubelet[2490]: I1213 01:49:54.363312 2490 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:49:54.363411 kubelet[2490]: I1213 01:49:54.363402 2490 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:49:54.363523 kubelet[2490]: I1213 01:49:54.363512 2490 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:49:54.363619 kubelet[2490]: I1213 01:49:54.363608 2490 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:49:54.367644 kubelet[2490]: I1213 01:49:54.367619 2490 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:49:54.367810 kubelet[2490]: I1213 01:49:54.367793 2490 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:49:54.369709 kubelet[2490]: I1213 01:49:54.369691 2490 server.go:1264] "Started kubelet" Dec 13 01:49:54.377214 kubelet[2490]: I1213 01:49:54.373021 2490 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:49:54.377919 kubelet[2490]: I1213 01:49:54.377897 2490 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:49:54.384073 kubelet[2490]: I1213 01:49:54.381900 2490 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Dec 13 01:49:54.384851 kubelet[2490]: I1213 01:49:54.384804 2490 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:49:54.385134 kubelet[2490]: I1213 01:49:54.385118 2490 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:49:54.390718 kubelet[2490]: I1213 01:49:54.390696 2490 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:49:54.391065 kubelet[2490]: I1213 01:49:54.391041 2490 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:49:54.391205 kubelet[2490]: I1213 01:49:54.391188 2490 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:49:54.395189 kubelet[2490]: I1213 01:49:54.395169 2490 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:49:54.395405 kubelet[2490]: I1213 01:49:54.395379 2490 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:49:54.398938 kubelet[2490]: I1213 01:49:54.398919 2490 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:49:54.409724 kubelet[2490]: I1213 01:49:54.409691 2490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:49:54.410716 kubelet[2490]: I1213 01:49:54.410690 2490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:49:54.410815 kubelet[2490]: I1213 01:49:54.410723 2490 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:49:54.410815 kubelet[2490]: I1213 01:49:54.410742 2490 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:49:54.410898 kubelet[2490]: E1213 01:49:54.410806 2490 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:49:54.418559 kubelet[2490]: E1213 01:49:54.418538 2490 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:49:54.449861 kubelet[2490]: I1213 01:49:54.448612 2490 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:49:54.450103 kubelet[2490]: I1213 01:49:54.450072 2490 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:49:54.450199 kubelet[2490]: I1213 01:49:54.450192 2490 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:49:54.450504 kubelet[2490]: I1213 01:49:54.450486 2490 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:49:54.450679 kubelet[2490]: I1213 01:49:54.450642 2490 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:49:54.450798 kubelet[2490]: I1213 01:49:54.450788 2490 policy_none.go:49] "None policy: Start" Dec 13 01:49:54.452585 kubelet[2490]: I1213 01:49:54.452564 2490 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:49:54.452768 kubelet[2490]: I1213 01:49:54.452591 2490 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:49:54.452881 kubelet[2490]: I1213 01:49:54.452829 2490 state_mem.go:75] "Updated machine memory state" Dec 13 01:49:54.459145 kubelet[2490]: I1213 01:49:54.459124 2490 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:49:54.459333 kubelet[2490]: I1213 01:49:54.459294 2490 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:49:54.459416 kubelet[2490]: I1213 01:49:54.459402 2490 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:49:54.500234 kubelet[2490]: I1213 01:49:54.500201 2490 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.509856 kubelet[2490]: I1213 01:49:54.509812 2490 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.862752 kubelet[2490]: I1213 01:49:54.518268 2490 topology_manager.go:215] "Topology Admit Handler" podUID="ab2651504d7696700cd2e6e4fc7a9ce7" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.862752 kubelet[2490]: I1213 01:49:54.518410 2490 topology_manager.go:215] "Topology Admit Handler" podUID="e0642ec4b5491c779d60657dac104585" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.862752 kubelet[2490]: I1213 01:49:54.518487 2490 topology_manager.go:215] "Topology Admit Handler" podUID="3446de64db40f2fb517fdfe3c0a21663" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.865773 kubelet[2490]: I1213 01:49:54.864678 2490 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.879360 kubelet[2490]: W1213 01:49:54.879328 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:49:54.881539 kubelet[2490]: W1213 01:49:54.881515 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:49:54.882271 kubelet[2490]: W1213 01:49:54.882230 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:49:54.921625 sudo[2520]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:49:54.921936 sudo[2520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 01:49:54.965324 kubelet[2490]: I1213 01:49:54.965277 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2651504d7696700cd2e6e4fc7a9ce7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" (UID: \"ab2651504d7696700cd2e6e4fc7a9ce7\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965324 kubelet[2490]: I1213 01:49:54.965323 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2651504d7696700cd2e6e4fc7a9ce7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" (UID: \"ab2651504d7696700cd2e6e4fc7a9ce7\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965552 kubelet[2490]: I1213 01:49:54.965352 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965552 kubelet[2490]: I1213 01:49:54.965375 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965552 kubelet[2490]: I1213 01:49:54.965396 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965552 kubelet[2490]: I1213 01:49:54.965416 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2651504d7696700cd2e6e4fc7a9ce7-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" (UID: \"ab2651504d7696700cd2e6e4fc7a9ce7\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965552 kubelet[2490]: I1213 01:49:54.965446 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965785 kubelet[2490]: I1213 01:49:54.965469 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3446de64db40f2fb517fdfe3c0a21663-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-1addd118d4\" (UID: 
\"3446de64db40f2fb517fdfe3c0a21663\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:54.965785 kubelet[2490]: I1213 01:49:54.965491 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0642ec4b5491c779d60657dac104585-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" (UID: \"e0642ec4b5491c779d60657dac104585\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:55.364773 kubelet[2490]: I1213 01:49:55.364722 2490 apiserver.go:52] "Watching apiserver" Dec 13 01:49:55.392207 kubelet[2490]: I1213 01:49:55.392172 2490 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:49:55.446224 kubelet[2490]: W1213 01:49:55.446194 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:49:55.446506 kubelet[2490]: E1213 01:49:55.446475 2490 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-1addd118d4\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:55.447171 kubelet[2490]: W1213 01:49:55.447150 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:49:55.447407 kubelet[2490]: E1213 01:49:55.447372 2490 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-1addd118d4\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4" Dec 13 01:49:55.469976 kubelet[2490]: I1213 01:49:55.469896 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-1addd118d4" podStartSLOduration=1.469878366 podStartE2EDuration="1.469878366s" podCreationTimestamp="2024-12-13 01:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:49:55.458078392 +0000 UTC m=+1.171454284" watchObservedRunningTime="2024-12-13 01:49:55.469878366 +0000 UTC m=+1.183254258" Dec 13 01:49:55.481656 sudo[2520]: pam_unix(sudo:session): session closed for user root Dec 13 01:49:55.484656 kubelet[2490]: I1213 01:49:55.484577 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-1addd118d4" podStartSLOduration=1.484558856 podStartE2EDuration="1.484558856s" podCreationTimestamp="2024-12-13 01:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:49:55.470818404 +0000 UTC m=+1.184194296" watchObservedRunningTime="2024-12-13 01:49:55.484558856 +0000 UTC m=+1.197934848" Dec 13 01:49:55.494490 kubelet[2490]: I1213 01:49:55.494441 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-1addd118d4" podStartSLOduration=1.494428753 podStartE2EDuration="1.494428753s" podCreationTimestamp="2024-12-13 01:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:49:55.48488797 +0000 UTC m=+1.198263862" watchObservedRunningTime="2024-12-13 01:49:55.494428753 +0000 UTC 
m=+1.207804745" Dec 13 01:49:57.093506 sudo[1720]: pam_unix(sudo:session): session closed for user root Dec 13 01:49:57.195423 sshd[1717]: pam_unix(sshd:session): session closed for user core Dec 13 01:49:57.198507 systemd[1]: sshd@4-10.200.8.23:22-10.200.16.10:59608.service: Deactivated successfully. Dec 13 01:49:57.199372 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:49:57.199558 systemd[1]: session-7.scope: Consumed 3.976s CPU time. Dec 13 01:49:57.200145 systemd-logind[1403]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:49:57.201030 systemd-logind[1403]: Removed session 7. Dec 13 01:50:10.643742 kubelet[2490]: I1213 01:50:10.643647 2490 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:50:10.644980 env[1413]: time="2024-12-13T01:50:10.644850319Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:50:10.645517 kubelet[2490]: I1213 01:50:10.645500 2490 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:50:10.670343 kubelet[2490]: I1213 01:50:10.670301 2490 topology_manager.go:215] "Topology Admit Handler" podUID="df36fed7-3d76-4f81-a275-46255556f48c" podNamespace="kube-system" podName="cilium-operator-599987898-ltqkt" Dec 13 01:50:10.676968 systemd[1]: Created slice kubepods-besteffort-poddf36fed7_3d76_4f81_a275_46255556f48c.slice. Dec 13 01:50:10.724255 kubelet[2490]: I1213 01:50:10.724213 2490 topology_manager.go:215] "Topology Admit Handler" podUID="e989d3c9-a656-4cb8-9868-90b20bc1f5c4" podNamespace="kube-system" podName="kube-proxy-76qbd" Dec 13 01:50:10.730247 systemd[1]: Created slice kubepods-besteffort-pode989d3c9_a656_4cb8_9868_90b20bc1f5c4.slice. Dec 13 01:50:10.742349 kubelet[2490]: I1213 01:50:10.742317 2490 topology_manager.go:215] "Topology Admit Handler" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" podNamespace="kube-system" podName="cilium-q9hn2" Dec 13 01:50:10.752683 systemd[1]: Created slice kubepods-burstable-podd9bc9dbf_5015_4510_9213_3412b58b39e0.slice. 
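
The kuberuntime_manager line above is where the node's pod range (192.168.0.0/24) reaches the container runtime through CRI. As a minimal Go sketch, not kubelet code, this is the standard-library validation any consumer of that CIDR string performs before programming routes or CNI config:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Pod CIDR as reported by the kubelet entry above; ParseCIDR
        // rejects anything malformed before it can be acted on.
        ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Printf("network=%s base=%s prefix=/%d of %d bits\n", ipnet, ip, ones, bits)
    }
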
Dec 13 01:50:10.771899 kubelet[2490]: I1213 01:50:10.771868 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6fb\" (UniqueName: \"kubernetes.io/projected/df36fed7-3d76-4f81-a275-46255556f48c-kube-api-access-4l6fb\") pod \"cilium-operator-599987898-ltqkt\" (UID: \"df36fed7-3d76-4f81-a275-46255556f48c\") " pod="kube-system/cilium-operator-599987898-ltqkt" Dec 13 01:50:10.772674 kubelet[2490]: I1213 01:50:10.772644 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df36fed7-3d76-4f81-a275-46255556f48c-cilium-config-path\") pod \"cilium-operator-599987898-ltqkt\" (UID: \"df36fed7-3d76-4f81-a275-46255556f48c\") " pod="kube-system/cilium-operator-599987898-ltqkt" Dec 13 01:50:10.873285 kubelet[2490]: I1213 01:50:10.873239 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9bc9dbf-5015-4510-9213-3412b58b39e0-clustermesh-secrets\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873285 kubelet[2490]: I1213 01:50:10.873280 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-kernel\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873527 kubelet[2490]: I1213 01:50:10.873306 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-bpf-maps\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873527 kubelet[2490]: I1213 01:50:10.873336 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-xtables-lock\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873527 kubelet[2490]: I1213 01:50:10.873358 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-hostproc\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873527 kubelet[2490]: I1213 01:50:10.873377 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-lib-modules\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873527 kubelet[2490]: I1213 01:50:10.873398 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-hubble-tls\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873527 kubelet[2490]: I1213 01:50:10.873421 2490 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6vj\" (UniqueName: \"kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-kube-api-access-wt6vj\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873811 kubelet[2490]: I1213 01:50:10.873441 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qmv9\" (UniqueName: \"kubernetes.io/projected/e989d3c9-a656-4cb8-9868-90b20bc1f5c4-kube-api-access-4qmv9\") pod \"kube-proxy-76qbd\" (UID: \"e989d3c9-a656-4cb8-9868-90b20bc1f5c4\") " pod="kube-system/kube-proxy-76qbd" Dec 13 01:50:10.873811 kubelet[2490]: I1213 01:50:10.873461 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-etc-cni-netd\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873811 kubelet[2490]: I1213 01:50:10.873480 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-config-path\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.873811 kubelet[2490]: I1213 01:50:10.873508 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e989d3c9-a656-4cb8-9868-90b20bc1f5c4-kube-proxy\") pod \"kube-proxy-76qbd\" (UID: \"e989d3c9-a656-4cb8-9868-90b20bc1f5c4\") " pod="kube-system/kube-proxy-76qbd" Dec 13 01:50:10.873811 kubelet[2490]: I1213 01:50:10.873531 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e989d3c9-a656-4cb8-9868-90b20bc1f5c4-xtables-lock\") pod \"kube-proxy-76qbd\" (UID: \"e989d3c9-a656-4cb8-9868-90b20bc1f5c4\") " pod="kube-system/kube-proxy-76qbd" Dec 13 01:50:10.874002 kubelet[2490]: I1213 01:50:10.873552 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cni-path\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.874002 kubelet[2490]: I1213 01:50:10.873617 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-cgroup\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.874002 kubelet[2490]: I1213 01:50:10.873666 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-net\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.874002 kubelet[2490]: I1213 01:50:10.873691 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e989d3c9-a656-4cb8-9868-90b20bc1f5c4-lib-modules\") pod \"kube-proxy-76qbd\" (UID: \"e989d3c9-a656-4cb8-9868-90b20bc1f5c4\") " pod="kube-system/kube-proxy-76qbd" Dec 13 01:50:10.874002 kubelet[2490]: I1213 01:50:10.873714 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-run\") pod \"cilium-q9hn2\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") " pod="kube-system/cilium-q9hn2" Dec 13 01:50:10.999346 env[1413]: time="2024-12-13T01:50:10.997557481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ltqkt,Uid:df36fed7-3d76-4f81-a275-46255556f48c,Namespace:kube-system,Attempt:0,}" Dec 13 01:50:11.033676 env[1413]: time="2024-12-13T01:50:11.033636248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76qbd,Uid:e989d3c9-a656-4cb8-9868-90b20bc1f5c4,Namespace:kube-system,Attempt:0,}" Dec 13 01:50:11.035520 env[1413]: time="2024-12-13T01:50:11.035465197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:50:11.035667 env[1413]: time="2024-12-13T01:50:11.035531099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:50:11.035667 env[1413]: time="2024-12-13T01:50:11.035560099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:50:11.035897 env[1413]: time="2024-12-13T01:50:11.035848107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2 pid=2570 runtime=io.containerd.runc.v2 Dec 13 01:50:11.051202 systemd[1]: Started cri-containerd-82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2.scope. Dec 13 01:50:11.057083 env[1413]: time="2024-12-13T01:50:11.057038474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9hn2,Uid:d9bc9dbf-5015-4510-9213-3412b58b39e0,Namespace:kube-system,Attempt:0,}" Dec 13 01:50:11.074794 env[1413]: time="2024-12-13T01:50:11.074723247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:50:11.074936 env[1413]: time="2024-12-13T01:50:11.074817349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:50:11.074936 env[1413]: time="2024-12-13T01:50:11.074846350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:50:11.075177 env[1413]: time="2024-12-13T01:50:11.075112957Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70c0261593d002e143130cb430c818069853b914ea2c9e43b40f77d54ae3612d pid=2603 runtime=io.containerd.runc.v2 Dec 13 01:50:11.098498 systemd[1]: Started cri-containerd-70c0261593d002e143130cb430c818069853b914ea2c9e43b40f77d54ae3612d.scope. Dec 13 01:50:11.106108 env[1413]: time="2024-12-13T01:50:11.106033884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:50:11.106315 env[1413]: time="2024-12-13T01:50:11.106290691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:50:11.106417 env[1413]: time="2024-12-13T01:50:11.106394593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:50:11.108493 env[1413]: time="2024-12-13T01:50:11.108440948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6 pid=2629 runtime=io.containerd.runc.v2 Dec 13 01:50:11.130287 systemd[1]: Started cri-containerd-1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6.scope. Dec 13 01:50:11.136872 env[1413]: time="2024-12-13T01:50:11.136826507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ltqkt,Uid:df36fed7-3d76-4f81-a275-46255556f48c,Namespace:kube-system,Attempt:0,} returns sandbox id \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\"" Dec 13 01:50:11.139373 env[1413]: time="2024-12-13T01:50:11.139334974Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:50:11.167943 env[1413]: time="2024-12-13T01:50:11.167901638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76qbd,Uid:e989d3c9-a656-4cb8-9868-90b20bc1f5c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"70c0261593d002e143130cb430c818069853b914ea2c9e43b40f77d54ae3612d\"" Dec 13 01:50:11.173525 env[1413]: time="2024-12-13T01:50:11.173489688Z" level=info msg="CreateContainer within sandbox \"70c0261593d002e143130cb430c818069853b914ea2c9e43b40f77d54ae3612d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:50:11.177412 env[1413]: time="2024-12-13T01:50:11.177365991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9hn2,Uid:d9bc9dbf-5015-4510-9213-3412b58b39e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\"" Dec 13 01:50:11.212072 env[1413]: time="2024-12-13T01:50:11.212001517Z" level=info msg="CreateContainer within sandbox \"70c0261593d002e143130cb430c818069853b914ea2c9e43b40f77d54ae3612d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"57d0d08442389942bdc0f72c1e8635499429790c1b41f03dcd33482ff2a2a087\"" Dec 13 01:50:11.213902 env[1413]: time="2024-12-13T01:50:11.213074346Z" level=info msg="StartContainer for \"57d0d08442389942bdc0f72c1e8635499429790c1b41f03dcd33482ff2a2a087\"" Dec 13 01:50:11.229629 systemd[1]: Started cri-containerd-57d0d08442389942bdc0f72c1e8635499429790c1b41f03dcd33482ff2a2a087.scope. Dec 13 01:50:11.264423 env[1413]: time="2024-12-13T01:50:11.264330116Z" level=info msg="StartContainer for \"57d0d08442389942bdc0f72c1e8635499429790c1b41f03dcd33482ff2a2a087\" returns successfully" Dec 13 01:50:12.414325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854202698.mount: Deactivated successfully. 
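
Each "starting signal loop" entry above names a runc v2 shim state directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>. A sketch that enumerates those directories, assuming the default containerd state root seen in the log and that it runs on the node itself:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // One subdirectory per live shim in the k8s.io namespace,
        // keyed by the sandbox/container ID from the log.
        base := "/run/containerd/io.containerd.runtime.v2.task/k8s.io"
        entries, err := os.ReadDir(base)
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // must run on the node
            os.Exit(1)
        }
        for _, e := range entries {
            fmt.Println(filepath.Join(base, e.Name()))
        }
    }
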
Dec 13 01:50:13.402913 env[1413]: time="2024-12-13T01:50:13.402863358Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:50:13.408701 env[1413]: time="2024-12-13T01:50:13.408667106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:50:13.412495 env[1413]: time="2024-12-13T01:50:13.412395601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:50:13.413227 env[1413]: time="2024-12-13T01:50:13.413196722Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:50:13.415083 env[1413]: time="2024-12-13T01:50:13.415057469Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:50:13.416884 env[1413]: time="2024-12-13T01:50:13.416852715Z" level=info msg="CreateContainer within sandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:50:13.446025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676563868.mount: Deactivated successfully. Dec 13 01:50:13.454547 env[1413]: time="2024-12-13T01:50:13.454445173Z" level=info msg="CreateContainer within sandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\"" Dec 13 01:50:13.456455 env[1413]: time="2024-12-13T01:50:13.456428624Z" level=info msg="StartContainer for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\"" Dec 13 01:50:13.480247 systemd[1]: Started cri-containerd-c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea.scope. 
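
The PullImage lines pin the operator image by tag and digest and hand back a sha256 image reference. A rough split of such a reference with only the standard library; this naive version would mis-handle a registry host carrying a :port, so a real parser (for example the distribution reference library) is the safer choice in production code:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // The pinned reference from the PullImage line above.
        ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        name, digest, pinned := strings.Cut(ref, "@")
        repo, tag, _ := strings.Cut(name, ":")
        fmt.Println("repository:", repo)
        fmt.Println("tag:", tag)
        if pinned {
            fmt.Println("digest:", digest)
        }
    }
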
Dec 13 01:50:13.513646 env[1413]: time="2024-12-13T01:50:13.512672857Z" level=info msg="StartContainer for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" returns successfully" Dec 13 01:50:14.432909 kubelet[2490]: I1213 01:50:14.432831 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-76qbd" podStartSLOduration=4.43280235 podStartE2EDuration="4.43280235s" podCreationTimestamp="2024-12-13 01:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:50:11.494788079 +0000 UTC m=+17.208164071" watchObservedRunningTime="2024-12-13 01:50:14.43280235 +0000 UTC m=+20.146178342" Dec 13 01:50:14.500224 kubelet[2490]: I1213 01:50:14.497594 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ltqkt" podStartSLOduration=2.22201687 podStartE2EDuration="4.497573362s" podCreationTimestamp="2024-12-13 01:50:10 +0000 UTC" firstStartedPulling="2024-12-13 01:50:11.138695257 +0000 UTC m=+16.852071149" lastFinishedPulling="2024-12-13 01:50:13.414251749 +0000 UTC m=+19.127627641" observedRunningTime="2024-12-13 01:50:14.497154152 +0000 UTC m=+20.210530044" watchObservedRunningTime="2024-12-13 01:50:14.497573362 +0000 UTC m=+20.210949254" Dec 13 01:50:19.608234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519767642.mount: Deactivated successfully. Dec 13 01:50:22.338334 env[1413]: time="2024-12-13T01:50:22.338283401Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:50:22.343893 env[1413]: time="2024-12-13T01:50:22.343804615Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:50:22.347653 env[1413]: time="2024-12-13T01:50:22.347523993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:50:22.348465 env[1413]: time="2024-12-13T01:50:22.348431011Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:50:22.351663 env[1413]: time="2024-12-13T01:50:22.351617078Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:50:22.380174 env[1413]: time="2024-12-13T01:50:22.380096068Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\"" Dec 13 01:50:22.380710 env[1413]: time="2024-12-13T01:50:22.380679380Z" level=info msg="StartContainer for \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\"" Dec 13 01:50:22.412451 systemd[1]: Started 
cri-containerd-c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42.scope. Dec 13 01:50:22.452099 env[1413]: time="2024-12-13T01:50:22.452038260Z" level=info msg="StartContainer for \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\" returns successfully" Dec 13 01:50:22.460078 systemd[1]: cri-containerd-c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42.scope: Deactivated successfully. Dec 13 01:50:23.372940 systemd[1]: run-containerd-runc-k8s.io-c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42-runc.ly8HXh.mount: Deactivated successfully. Dec 13 01:50:23.373044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42-rootfs.mount: Deactivated successfully. Dec 13 01:50:26.863938 env[1413]: time="2024-12-13T01:50:26.863874149Z" level=info msg="shim disconnected" id=c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42 Dec 13 01:50:26.863938 env[1413]: time="2024-12-13T01:50:26.863936551Z" level=warning msg="cleaning up after shim disconnected" id=c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42 namespace=k8s.io Dec 13 01:50:26.863938 env[1413]: time="2024-12-13T01:50:26.863949251Z" level=info msg="cleaning up dead shim" Dec 13 01:50:26.872294 env[1413]: time="2024-12-13T01:50:26.872247409Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:50:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2932 runtime=io.containerd.runc.v2\n" Dec 13 01:50:27.574871 env[1413]: time="2024-12-13T01:50:27.574823551Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:50:27.605371 env[1413]: time="2024-12-13T01:50:27.605259618Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\"" Dec 13 01:50:27.605986 env[1413]: time="2024-12-13T01:50:27.605950731Z" level=info msg="StartContainer for \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\"" Dec 13 01:50:27.631676 systemd[1]: run-containerd-runc-k8s.io-34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a-runc.x2ZDCr.mount: Deactivated successfully. Dec 13 01:50:27.633874 systemd[1]: Started cri-containerd-34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a.scope. Dec 13 01:50:27.669020 env[1413]: time="2024-12-13T01:50:27.666833065Z" level=info msg="StartContainer for \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\" returns successfully" Dec 13 01:50:27.674300 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:50:27.674660 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:50:27.675388 systemd[1]: Stopping systemd-sysctl.service... Dec 13 01:50:27.677902 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:50:27.685672 systemd[1]: cri-containerd-34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a.scope: Deactivated successfully. Dec 13 01:50:27.688687 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 01:50:27.720806 env[1413]: time="2024-12-13T01:50:27.720766970Z" level=info msg="shim disconnected" id=34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a Dec 13 01:50:27.721120 env[1413]: time="2024-12-13T01:50:27.720814271Z" level=warning msg="cleaning up after shim disconnected" id=34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a namespace=k8s.io Dec 13 01:50:27.721120 env[1413]: time="2024-12-13T01:50:27.720826471Z" level=info msg="cleaning up dead shim" Dec 13 01:50:27.728159 env[1413]: time="2024-12-13T01:50:27.728121407Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:50:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2998 runtime=io.containerd.runc.v2\n" Dec 13 01:50:28.578662 env[1413]: time="2024-12-13T01:50:28.578617334Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:50:28.593891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a-rootfs.mount: Deactivated successfully. Dec 13 01:50:28.621217 env[1413]: time="2024-12-13T01:50:28.621172910Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\"" Dec 13 01:50:28.622763 env[1413]: time="2024-12-13T01:50:28.622089027Z" level=info msg="StartContainer for \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\"" Dec 13 01:50:28.650831 systemd[1]: Started cri-containerd-50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4.scope. Dec 13 01:50:28.681210 systemd[1]: cri-containerd-50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4.scope: Deactivated successfully. Dec 13 01:50:28.684014 env[1413]: time="2024-12-13T01:50:28.683944456Z" level=info msg="StartContainer for \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\" returns successfully" Dec 13 01:50:28.701284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4-rootfs.mount: Deactivated successfully. 
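
The mount-bpf-fs init container created above presumably does what its name says: mounts the BPF filesystem the agent needs. A hedged sketch of that single step, an inference from the container name rather than Cilium's actual code:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Mounting bpffs requires CAP_SYS_ADMIN; a real init step would
        // first check /proc/mounts to avoid double-mounting.
        if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            fmt.Println("mount failed:", err)
            return
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf")
    }
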
Dec 13 01:50:28.712649 env[1413]: time="2024-12-13T01:50:28.712483077Z" level=info msg="shim disconnected" id=50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4 Dec 13 01:50:28.713040 env[1413]: time="2024-12-13T01:50:28.712794783Z" level=warning msg="cleaning up after shim disconnected" id=50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4 namespace=k8s.io Dec 13 01:50:28.713040 env[1413]: time="2024-12-13T01:50:28.712851584Z" level=info msg="cleaning up dead shim" Dec 13 01:50:28.720403 env[1413]: time="2024-12-13T01:50:28.720370721Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:50:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3057 runtime=io.containerd.runc.v2\n" Dec 13 01:50:29.583771 env[1413]: time="2024-12-13T01:50:29.583719261Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:50:29.613894 env[1413]: time="2024-12-13T01:50:29.613845000Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\"" Dec 13 01:50:29.615629 env[1413]: time="2024-12-13T01:50:29.614491912Z" level=info msg="StartContainer for \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\"" Dec 13 01:50:29.640820 systemd[1]: run-containerd-runc-k8s.io-7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6-runc.qBXJe0.mount: Deactivated successfully. Dec 13 01:50:29.646423 systemd[1]: Started cri-containerd-7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6.scope. Dec 13 01:50:29.674737 systemd[1]: cri-containerd-7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6.scope: Deactivated successfully. 
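
Each short-lived init step above follows the same pattern: StartContainer returns, the scope deactivates, and containerd logs "shim disconnected" plus a cleanup pass. A small filter to pull the affected container IDs out of a journal like this one, piped on stdin (e.g. journalctl -o cat | go run shimgone.go, assuming that invocation fits your setup):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches containerd's cleanup messages above, e.g.
    //   level=info msg="shim disconnected" id=50f5c034a37ed946...
    var shimGone = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines run long
        for sc.Scan() {
            if m := shimGone.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Println(m[1])
            }
        }
    }
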
Dec 13 01:50:29.679741 env[1413]: time="2024-12-13T01:50:29.677406736Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9bc9dbf_5015_4510_9213_3412b58b39e0.slice/cri-containerd-7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6.scope/memory.events\": no such file or directory" Dec 13 01:50:29.688396 env[1413]: time="2024-12-13T01:50:29.688355332Z" level=info msg="StartContainer for \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\" returns successfully" Dec 13 01:50:29.712061 env[1413]: time="2024-12-13T01:50:29.712007555Z" level=info msg="shim disconnected" id=7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6 Dec 13 01:50:29.712262 env[1413]: time="2024-12-13T01:50:29.712061556Z" level=warning msg="cleaning up after shim disconnected" id=7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6 namespace=k8s.io Dec 13 01:50:29.712262 env[1413]: time="2024-12-13T01:50:29.712076156Z" level=info msg="cleaning up dead shim" Dec 13 01:50:29.721716 env[1413]: time="2024-12-13T01:50:29.721676328Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:50:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3113 runtime=io.containerd.runc.v2\n" Dec 13 01:50:30.588185 env[1413]: time="2024-12-13T01:50:30.588106009Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:50:30.606232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6-rootfs.mount: Deactivated successfully. Dec 13 01:50:30.625244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422245331.mount: Deactivated successfully. Dec 13 01:50:30.633586 env[1413]: time="2024-12-13T01:50:30.633537005Z" level=info msg="CreateContainer within sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\"" Dec 13 01:50:30.635187 env[1413]: time="2024-12-13T01:50:30.634185316Z" level=info msg="StartContainer for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\"" Dec 13 01:50:30.658767 systemd[1]: Started cri-containerd-a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2.scope. Dec 13 01:50:30.696286 env[1413]: time="2024-12-13T01:50:30.696248904Z" level=info msg="StartContainer for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" returns successfully" Dec 13 01:50:30.821830 kubelet[2490]: I1213 01:50:30.821788 2490 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:50:30.859333 kubelet[2490]: I1213 01:50:30.858638 2490 topology_manager.go:215] "Topology Admit Handler" podUID="0d6f77b7-5e95-4a63-9a21-7df75ddf1777" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n2lzm" Dec 13 01:50:30.862660 kubelet[2490]: I1213 01:50:30.862618 2490 topology_manager.go:215] "Topology Admit Handler" podUID="cadc8a0f-5fa0-4f66-8385-e6c089b4678f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6v565" Dec 13 01:50:30.866871 systemd[1]: Created slice kubepods-burstable-pod0d6f77b7_5e95_4a63_9a21_7df75ddf1777.slice. 
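
The cgroupsv2 warning above only means the scope directory vanished before the memory.events inotify watch was added. When the cgroup still exists, the file is flat "key value" text; a sketch reading it (the path below is an illustrative slice, not one from this boot):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // memory.events: counters like "oom 0", one pair per line.
        path := "/sys/fs/cgroup/kubepods.slice/memory.events" // example path
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // the scope in the log was already gone
            os.Exit(1)
        }
        for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
            key, val, _ := strings.Cut(line, " ")
            fmt.Printf("%-10s %s\n", key, val)
        }
    }
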
Dec 13 01:50:30.877044 systemd[1]: Created slice kubepods-burstable-podcadc8a0f_5fa0_4f66_8385_e6c089b4678f.slice. Dec 13 01:50:31.021381 kubelet[2490]: I1213 01:50:31.021341 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cadc8a0f-5fa0-4f66-8385-e6c089b4678f-config-volume\") pod \"coredns-7db6d8ff4d-6v565\" (UID: \"cadc8a0f-5fa0-4f66-8385-e6c089b4678f\") " pod="kube-system/coredns-7db6d8ff4d-6v565" Dec 13 01:50:31.021381 kubelet[2490]: I1213 01:50:31.021387 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d6f77b7-5e95-4a63-9a21-7df75ddf1777-config-volume\") pod \"coredns-7db6d8ff4d-n2lzm\" (UID: \"0d6f77b7-5e95-4a63-9a21-7df75ddf1777\") " pod="kube-system/coredns-7db6d8ff4d-n2lzm" Dec 13 01:50:31.021738 kubelet[2490]: I1213 01:50:31.021418 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45p7\" (UniqueName: \"kubernetes.io/projected/0d6f77b7-5e95-4a63-9a21-7df75ddf1777-kube-api-access-t45p7\") pod \"coredns-7db6d8ff4d-n2lzm\" (UID: \"0d6f77b7-5e95-4a63-9a21-7df75ddf1777\") " pod="kube-system/coredns-7db6d8ff4d-n2lzm" Dec 13 01:50:31.021738 kubelet[2490]: I1213 01:50:31.021442 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblsw\" (UniqueName: \"kubernetes.io/projected/cadc8a0f-5fa0-4f66-8385-e6c089b4678f-kube-api-access-bblsw\") pod \"coredns-7db6d8ff4d-6v565\" (UID: \"cadc8a0f-5fa0-4f66-8385-e6c089b4678f\") " pod="kube-system/coredns-7db6d8ff4d-6v565" Dec 13 01:50:31.173876 env[1413]: time="2024-12-13T01:50:31.173739310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n2lzm,Uid:0d6f77b7-5e95-4a63-9a21-7df75ddf1777,Namespace:kube-system,Attempt:0,}" Dec 13 01:50:31.182187 env[1413]: time="2024-12-13T01:50:31.181901150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6v565,Uid:cadc8a0f-5fa0-4f66-8385-e6c089b4678f,Namespace:kube-system,Attempt:0,}" Dec 13 01:50:31.612743 kubelet[2490]: I1213 01:50:31.612683 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q9hn2" podStartSLOduration=10.441951844 podStartE2EDuration="21.612660547s" podCreationTimestamp="2024-12-13 01:50:10 +0000 UTC" firstStartedPulling="2024-12-13 01:50:11.178918233 +0000 UTC m=+16.892294125" lastFinishedPulling="2024-12-13 01:50:22.349626936 +0000 UTC m=+28.063002828" observedRunningTime="2024-12-13 01:50:31.611990936 +0000 UTC m=+37.325366928" watchObservedRunningTime="2024-12-13 01:50:31.612660547 +0000 UTC m=+37.326036439" Dec 13 01:50:32.860673 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 01:50:32.861523 systemd-networkd[1572]: cilium_host: Link UP Dec 13 01:50:32.861708 systemd-networkd[1572]: cilium_net: Link UP Dec 13 01:50:32.861712 systemd-networkd[1572]: cilium_net: Gained carrier Dec 13 01:50:32.861902 systemd-networkd[1572]: cilium_host: Gained carrier Dec 13 01:50:32.864657 systemd-networkd[1572]: cilium_net: Gained IPv6LL Dec 13 01:50:32.864898 systemd-networkd[1572]: cilium_host: Gained IPv6LL Dec 13 01:50:32.987368 systemd-networkd[1572]: cilium_vxlan: Link UP Dec 13 01:50:32.987376 systemd-networkd[1572]: cilium_vxlan: Gained carrier Dec 13 01:50:33.206635 kernel: NET: Registered PF_ALG protocol family Dec 13 
01:50:33.867387 systemd-networkd[1572]: lxc_health: Link UP Dec 13 01:50:33.885932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 01:50:33.886825 systemd-networkd[1572]: lxc_health: Gained carrier Dec 13 01:50:34.245762 systemd-networkd[1572]: lxc9f90b6e48725: Link UP Dec 13 01:50:34.253630 kernel: eth0: renamed from tmp17745 Dec 13 01:50:34.270653 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9f90b6e48725: link becomes ready Dec 13 01:50:34.268330 systemd-networkd[1572]: lxc9f90b6e48725: Gained carrier Dec 13 01:50:34.274155 systemd-networkd[1572]: lxc6d352e47c124: Link UP Dec 13 01:50:34.290624 kernel: eth0: renamed from tmp11dbe Dec 13 01:50:34.302617 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6d352e47c124: link becomes ready Dec 13 01:50:34.305441 systemd-networkd[1572]: lxc6d352e47c124: Gained carrier Dec 13 01:50:34.899901 systemd-networkd[1572]: cilium_vxlan: Gained IPv6LL Dec 13 01:50:35.036894 systemd-networkd[1572]: lxc_health: Gained IPv6LL Dec 13 01:50:35.603881 systemd-networkd[1572]: lxc9f90b6e48725: Gained IPv6LL Dec 13 01:50:35.987868 systemd-networkd[1572]: lxc6d352e47c124: Gained IPv6LL Dec 13 01:50:38.012968 env[1413]: time="2024-12-13T01:50:38.012846111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:50:38.012968 env[1413]: time="2024-12-13T01:50:38.012912812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:50:38.012968 env[1413]: time="2024-12-13T01:50:38.012928612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:50:38.013742 env[1413]: time="2024-12-13T01:50:38.013071014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11dbe313a6b3d7e06abe05c72c41d8ea819172017bbe7959506185ca2ecf41f0 pid=3658 runtime=io.containerd.runc.v2 Dec 13 01:50:38.059019 systemd[1]: run-containerd-runc-k8s.io-11dbe313a6b3d7e06abe05c72c41d8ea819172017bbe7959506185ca2ecf41f0-runc.IyEoUy.mount: Deactivated successfully. Dec 13 01:50:38.062755 env[1413]: time="2024-12-13T01:50:38.062676360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:50:38.062950 env[1413]: time="2024-12-13T01:50:38.062923263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:50:38.063075 env[1413]: time="2024-12-13T01:50:38.063051665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:50:38.063342 env[1413]: time="2024-12-13T01:50:38.063309069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/177459ec8f3fcbae061876a2ad51731ac25aa3359e287ed4dec4f96cd606973e pid=3681 runtime=io.containerd.runc.v2 Dec 13 01:50:38.073732 systemd[1]: Started cri-containerd-11dbe313a6b3d7e06abe05c72c41d8ea819172017bbe7959506185ca2ecf41f0.scope. Dec 13 01:50:38.091139 systemd[1]: Started cri-containerd-177459ec8f3fcbae061876a2ad51731ac25aa3359e287ed4dec4f96cd606973e.scope. 
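
After the cilium_host/cilium_net/cilium_vxlan/lxc_* bring-up above, link state is also visible in sysfs, which makes a handy cross-check against the systemd-networkd messages. A sketch, with the device names taken from this log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // operstate mirrors the "Gained carrier" transitions above.
        for _, dev := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
            b, err := os.ReadFile("/sys/class/net/" + dev + "/operstate")
            if err != nil {
                fmt.Printf("%-13s absent\n", dev)
                continue
            }
            fmt.Printf("%-13s %s\n", dev, strings.TrimSpace(string(b)))
        }
    }
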
Dec 13 01:50:38.162349 env[1413]: time="2024-12-13T01:50:38.162297856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6v565,Uid:cadc8a0f-5fa0-4f66-8385-e6c089b4678f,Namespace:kube-system,Attempt:0,} returns sandbox id \"11dbe313a6b3d7e06abe05c72c41d8ea819172017bbe7959506185ca2ecf41f0\"" Dec 13 01:50:38.165717 env[1413]: time="2024-12-13T01:50:38.165673507Z" level=info msg="CreateContainer within sandbox \"11dbe313a6b3d7e06abe05c72c41d8ea819172017bbe7959506185ca2ecf41f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:50:38.188376 env[1413]: time="2024-12-13T01:50:38.188314747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n2lzm,Uid:0d6f77b7-5e95-4a63-9a21-7df75ddf1777,Namespace:kube-system,Attempt:0,} returns sandbox id \"177459ec8f3fcbae061876a2ad51731ac25aa3359e287ed4dec4f96cd606973e\"" Dec 13 01:50:38.194028 env[1413]: time="2024-12-13T01:50:38.193991532Z" level=info msg="CreateContainer within sandbox \"177459ec8f3fcbae061876a2ad51731ac25aa3359e287ed4dec4f96cd606973e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:50:38.203710 env[1413]: time="2024-12-13T01:50:38.203664678Z" level=info msg="CreateContainer within sandbox \"11dbe313a6b3d7e06abe05c72c41d8ea819172017bbe7959506185ca2ecf41f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44f60b25492c96e61956932d44cebaa58688d54b6be252fe302120565e7055ca\"" Dec 13 01:50:38.208191 env[1413]: time="2024-12-13T01:50:38.208159845Z" level=info msg="StartContainer for \"44f60b25492c96e61956932d44cebaa58688d54b6be252fe302120565e7055ca\"" Dec 13 01:50:38.251925 systemd[1]: Started cri-containerd-44f60b25492c96e61956932d44cebaa58688d54b6be252fe302120565e7055ca.scope. Dec 13 01:50:38.256614 env[1413]: time="2024-12-13T01:50:38.256546772Z" level=info msg="CreateContainer within sandbox \"177459ec8f3fcbae061876a2ad51731ac25aa3359e287ed4dec4f96cd606973e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"047378fc1f18ea626a6ba6f446f3d6e7f950a8cb9ec2c4529dec8775b5171950\"" Dec 13 01:50:38.260030 env[1413]: time="2024-12-13T01:50:38.259989224Z" level=info msg="StartContainer for \"047378fc1f18ea626a6ba6f446f3d6e7f950a8cb9ec2c4529dec8775b5171950\"" Dec 13 01:50:38.303433 systemd[1]: Started cri-containerd-047378fc1f18ea626a6ba6f446f3d6e7f950a8cb9ec2c4529dec8775b5171950.scope. 
Dec 13 01:50:38.317159 env[1413]: time="2024-12-13T01:50:38.317096982Z" level=info msg="StartContainer for \"44f60b25492c96e61956932d44cebaa58688d54b6be252fe302120565e7055ca\" returns successfully" Dec 13 01:50:38.356663 env[1413]: time="2024-12-13T01:50:38.356594175Z" level=info msg="StartContainer for \"047378fc1f18ea626a6ba6f446f3d6e7f950a8cb9ec2c4529dec8775b5171950\" returns successfully" Dec 13 01:50:38.639218 kubelet[2490]: I1213 01:50:38.639153 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n2lzm" podStartSLOduration=28.63912842 podStartE2EDuration="28.63912842s" podCreationTimestamp="2024-12-13 01:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:50:38.637586797 +0000 UTC m=+44.350962789" watchObservedRunningTime="2024-12-13 01:50:38.63912842 +0000 UTC m=+44.352504312" Dec 13 01:50:38.639879 kubelet[2490]: I1213 01:50:38.639831 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6v565" podStartSLOduration=28.63982013 podStartE2EDuration="28.63982013s" podCreationTimestamp="2024-12-13 01:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:50:38.623683388 +0000 UTC m=+44.337059280" watchObservedRunningTime="2024-12-13 01:50:38.63982013 +0000 UTC m=+44.353196122" Dec 13 01:52:26.292666 systemd[1]: Started sshd@5-10.200.8.23:22-10.200.16.10:35208.service. Dec 13 01:52:26.916649 sshd[3836]: Accepted publickey for core from 10.200.16.10 port 35208 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:26.918459 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:26.924287 systemd[1]: Started session-8.scope. Dec 13 01:52:26.924936 systemd-logind[1403]: New session 8 of user core. Dec 13 01:52:27.424278 sshd[3836]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:27.427427 systemd[1]: sshd@5-10.200.8.23:22-10.200.16.10:35208.service: Deactivated successfully. Dec 13 01:52:27.428484 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:52:27.429219 systemd-logind[1403]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:52:27.430186 systemd-logind[1403]: Removed session 8. Dec 13 01:52:32.529977 systemd[1]: Started sshd@6-10.200.8.23:22-10.200.16.10:54164.service. Dec 13 01:52:33.151352 sshd[3849]: Accepted publickey for core from 10.200.16.10 port 54164 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:33.152816 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:33.157349 systemd-logind[1403]: New session 9 of user core. Dec 13 01:52:33.158025 systemd[1]: Started session-9.scope. Dec 13 01:52:33.653485 sshd[3849]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:33.656813 systemd[1]: sshd@6-10.200.8.23:22-10.200.16.10:54164.service: Deactivated successfully. Dec 13 01:52:33.657833 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:52:33.658515 systemd-logind[1403]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:52:33.659416 systemd-logind[1403]: Removed session 9. Dec 13 01:52:38.762929 systemd[1]: Started sshd@7-10.200.8.23:22-10.200.16.10:46434.service. 
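
The podStartSLOduration figures in the tracker lines above are plain timestamp arithmetic: observed running time minus pod creation time, with the pulling timestamps left at the zero value because no image pull was needed. The coredns-7db6d8ff4d-n2lzm numbers reproduce exactly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the tracker line for coredns-7db6d8ff4d-n2lzm.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2024-12-13 01:50:10 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2024-12-13 01:50:38.63912842 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 28.63912842s, matching podStartSLOduration
    }
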
Dec 13 01:52:39.383538 sshd[3861]: Accepted publickey for core from 10.200.16.10 port 46434 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:39.385381 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:39.390837 systemd[1]: Started session-10.scope. Dec 13 01:52:39.391284 systemd-logind[1403]: New session 10 of user core. Dec 13 01:52:39.880685 sshd[3861]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:39.883854 systemd-logind[1403]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:52:39.884048 systemd[1]: sshd@7-10.200.8.23:22-10.200.16.10:46434.service: Deactivated successfully. Dec 13 01:52:39.885033 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:52:39.885951 systemd-logind[1403]: Removed session 10. Dec 13 01:52:44.986412 systemd[1]: Started sshd@8-10.200.8.23:22-10.200.16.10:46450.service. Dec 13 01:52:45.610000 sshd[3876]: Accepted publickey for core from 10.200.16.10 port 46450 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:45.611792 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:45.617317 systemd[1]: Started session-11.scope. Dec 13 01:52:45.617826 systemd-logind[1403]: New session 11 of user core. Dec 13 01:52:46.116586 sshd[3876]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:46.119702 systemd[1]: sshd@8-10.200.8.23:22-10.200.16.10:46450.service: Deactivated successfully. Dec 13 01:52:46.120675 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:52:46.121426 systemd-logind[1403]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:52:46.122345 systemd-logind[1403]: Removed session 11. Dec 13 01:52:51.227570 systemd[1]: Started sshd@9-10.200.8.23:22-10.200.16.10:56396.service. Dec 13 01:52:51.850151 sshd[3888]: Accepted publickey for core from 10.200.16.10 port 56396 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:51.851651 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:51.856778 systemd-logind[1403]: New session 12 of user core. Dec 13 01:52:51.857368 systemd[1]: Started session-12.scope. Dec 13 01:52:52.345300 sshd[3888]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:52.349008 systemd-logind[1403]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:52:52.349256 systemd[1]: sshd@9-10.200.8.23:22-10.200.16.10:56396.service: Deactivated successfully. Dec 13 01:52:52.350444 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:52:52.351579 systemd-logind[1403]: Removed session 12. Dec 13 01:52:52.449864 systemd[1]: Started sshd@10-10.200.8.23:22-10.200.16.10:56398.service. Dec 13 01:52:53.073265 sshd[3900]: Accepted publickey for core from 10.200.16.10 port 56398 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:53.075014 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:53.081016 systemd[1]: Started session-13.scope. Dec 13 01:52:53.081471 systemd-logind[1403]: New session 13 of user core. Dec 13 01:52:53.604869 sshd[3900]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:53.608295 systemd[1]: sshd@10-10.200.8.23:22-10.200.16.10:56398.service: Deactivated successfully. Dec 13 01:52:53.609318 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:52:53.610101 systemd-logind[1403]: Session 13 logged out. 
Waiting for processes to exit. Dec 13 01:52:53.611051 systemd-logind[1403]: Removed session 13. Dec 13 01:52:53.709788 systemd[1]: Started sshd@11-10.200.8.23:22-10.200.16.10:56412.service. Dec 13 01:52:54.356456 sshd[3909]: Accepted publickey for core from 10.200.16.10 port 56412 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:52:54.357917 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:54.363438 systemd-logind[1403]: New session 14 of user core. Dec 13 01:52:54.366434 systemd[1]: Started session-14.scope. Dec 13 01:52:54.856418 sshd[3909]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:54.859942 systemd[1]: sshd@11-10.200.8.23:22-10.200.16.10:56412.service: Deactivated successfully. Dec 13 01:52:54.861105 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:52:54.861978 systemd-logind[1403]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:52:54.862969 systemd-logind[1403]: Removed session 14. Dec 13 01:52:59.979514 systemd[1]: Started sshd@12-10.200.8.23:22-10.200.16.10:34350.service. Dec 13 01:53:00.602208 sshd[3922]: Accepted publickey for core from 10.200.16.10 port 34350 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:00.603782 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:00.609114 systemd[1]: Started session-15.scope. Dec 13 01:53:00.609571 systemd-logind[1403]: New session 15 of user core. Dec 13 01:53:01.106227 sshd[3922]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:01.109386 systemd[1]: sshd@12-10.200.8.23:22-10.200.16.10:34350.service: Deactivated successfully. Dec 13 01:53:01.110404 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:53:01.111183 systemd-logind[1403]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:53:01.112098 systemd-logind[1403]: Removed session 15. Dec 13 01:53:06.213289 systemd[1]: Started sshd@13-10.200.8.23:22-10.200.16.10:34356.service. Dec 13 01:53:06.835825 sshd[3934]: Accepted publickey for core from 10.200.16.10 port 34356 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:06.837688 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:06.842583 systemd-logind[1403]: New session 16 of user core. Dec 13 01:53:06.843343 systemd[1]: Started session-16.scope. Dec 13 01:53:07.334046 sshd[3934]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:07.337763 systemd[1]: sshd@13-10.200.8.23:22-10.200.16.10:34356.service: Deactivated successfully. Dec 13 01:53:07.338982 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:53:07.339785 systemd-logind[1403]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:53:07.340667 systemd-logind[1403]: Removed session 16. Dec 13 01:53:07.447186 systemd[1]: Started sshd@14-10.200.8.23:22-10.200.16.10:34364.service. Dec 13 01:53:08.071074 sshd[3946]: Accepted publickey for core from 10.200.16.10 port 34364 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:08.072666 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:08.078111 systemd[1]: Started session-17.scope. Dec 13 01:53:08.078567 systemd-logind[1403]: New session 17 of user core. 
Dec 13 01:53:08.633385 sshd[3946]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:08.636906 systemd[1]: sshd@14-10.200.8.23:22-10.200.16.10:34364.service: Deactivated successfully. Dec 13 01:53:08.637898 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:53:08.638540 systemd-logind[1403]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:53:08.639400 systemd-logind[1403]: Removed session 17. Dec 13 01:53:08.738812 systemd[1]: Started sshd@15-10.200.8.23:22-10.200.16.10:47790.service. Dec 13 01:53:09.363226 sshd[3957]: Accepted publickey for core from 10.200.16.10 port 47790 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:09.364736 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:09.369666 systemd-logind[1403]: New session 18 of user core. Dec 13 01:53:09.369940 systemd[1]: Started session-18.scope. Dec 13 01:53:11.263950 sshd[3957]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:11.268334 systemd[1]: sshd@15-10.200.8.23:22-10.200.16.10:47790.service: Deactivated successfully. Dec 13 01:53:11.269296 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:53:11.269954 systemd-logind[1403]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:53:11.271011 systemd-logind[1403]: Removed session 18. Dec 13 01:53:11.368586 systemd[1]: Started sshd@16-10.200.8.23:22-10.200.16.10:47806.service. Dec 13 01:53:11.991290 sshd[3976]: Accepted publickey for core from 10.200.16.10 port 47806 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:11.993088 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:11.998370 systemd[1]: Started session-19.scope. Dec 13 01:53:11.998847 systemd-logind[1403]: New session 19 of user core. Dec 13 01:53:12.599378 sshd[3976]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:12.603107 systemd-logind[1403]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:53:12.603390 systemd[1]: sshd@16-10.200.8.23:22-10.200.16.10:47806.service: Deactivated successfully. Dec 13 01:53:12.604575 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:53:12.605828 systemd-logind[1403]: Removed session 19. Dec 13 01:53:12.704943 systemd[1]: Started sshd@17-10.200.8.23:22-10.200.16.10:47816.service. Dec 13 01:53:13.328085 sshd[3986]: Accepted publickey for core from 10.200.16.10 port 47816 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:13.329944 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:13.335648 systemd[1]: Started session-20.scope. Dec 13 01:53:13.336283 systemd-logind[1403]: New session 20 of user core. Dec 13 01:53:13.824827 sshd[3986]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:13.828406 systemd[1]: sshd@17-10.200.8.23:22-10.200.16.10:47816.service: Deactivated successfully. Dec 13 01:53:13.829576 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:53:13.830430 systemd-logind[1403]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:53:13.831475 systemd-logind[1403]: Removed session 20. Dec 13 01:53:18.930637 systemd[1]: Started sshd@18-10.200.8.23:22-10.200.16.10:53284.service. 
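
Sessions 8 through 18 all follow the same choreography: sshd accepts the key, pam_unix opens the session, logind numbers it, and teardown reverses the steps. A sketch that pairs those logind lines from a journal on stdin and reports anything left open:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Pairs "New session N of user U." with "Removed session N." lines.
    var (
        newSess = regexp.MustCompile(`New session (\d+) of user (\w+)`)
        delSess = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        open := map[string]string{} // session id -> user
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if m := newSess.FindStringSubmatch(line); m != nil {
                open[m[1]] = m[2]
            } else if m := delSess.FindStringSubmatch(line); m != nil {
                delete(open, m[1])
            }
        }
        fmt.Printf("sessions never removed: %d\n", len(open))
        for id, user := range open {
            fmt.Printf("  session %s (user %s)\n", id, user)
        }
    }
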
Dec 13 01:53:19.553377 sshd[4000]: Accepted publickey for core from 10.200.16.10 port 53284 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:19.555189 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:19.562316 systemd[1]: Started session-21.scope.
Dec 13 01:53:19.562974 systemd-logind[1403]: New session 21 of user core.
Dec 13 01:53:20.055955 sshd[4000]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:20.059583 systemd[1]: sshd@18-10.200.8.23:22-10.200.16.10:53284.service: Deactivated successfully.
Dec 13 01:53:20.060841 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:53:20.061765 systemd-logind[1403]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:53:20.062940 systemd-logind[1403]: Removed session 21.
Dec 13 01:53:25.162254 systemd[1]: Started sshd@19-10.200.8.23:22-10.200.16.10:53292.service.
Dec 13 01:53:25.789520 sshd[4012]: Accepted publickey for core from 10.200.16.10 port 53292 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:25.791043 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:25.795128 systemd-logind[1403]: New session 22 of user core.
Dec 13 01:53:25.797043 systemd[1]: Started session-22.scope.
Dec 13 01:53:26.284475 sshd[4012]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:26.287926 systemd[1]: sshd@19-10.200.8.23:22-10.200.16.10:53292.service: Deactivated successfully.
Dec 13 01:53:26.288981 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:53:26.289722 systemd-logind[1403]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:53:26.290563 systemd-logind[1403]: Removed session 22.
Dec 13 01:53:31.390518 systemd[1]: Started sshd@20-10.200.8.23:22-10.200.16.10:33804.service.
Dec 13 01:53:32.014236 sshd[4024]: Accepted publickey for core from 10.200.16.10 port 33804 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:32.015741 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:32.020887 systemd[1]: Started session-23.scope.
Dec 13 01:53:32.021335 systemd-logind[1403]: New session 23 of user core.
Dec 13 01:53:32.517571 sshd[4024]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:32.521136 systemd[1]: sshd@20-10.200.8.23:22-10.200.16.10:33804.service: Deactivated successfully.
Dec 13 01:53:32.522143 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:53:32.523101 systemd-logind[1403]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:53:32.523968 systemd-logind[1403]: Removed session 23.
Dec 13 01:53:32.623478 systemd[1]: Started sshd@21-10.200.8.23:22-10.200.16.10:33808.service.
Dec 13 01:53:33.248372 sshd[4036]: Accepted publickey for core from 10.200.16.10 port 33808 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:33.250000 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:33.255290 systemd[1]: Started session-24.scope.
Dec 13 01:53:33.256134 systemd-logind[1403]: New session 24 of user core.
Dec 13 01:53:34.937959 systemd[1]: run-containerd-runc-k8s.io-a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2-runc.h3atjv.mount: Deactivated successfully.
Dec 13 01:53:34.945265 env[1413]: time="2024-12-13T01:53:34.942304903Z" level=info msg="StopContainer for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" with timeout 30 (s)"
Dec 13 01:53:34.945265 env[1413]: time="2024-12-13T01:53:34.942807713Z" level=info msg="Stop container \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" with signal terminated"
Dec 13 01:53:34.962821 systemd[1]: cri-containerd-c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea.scope: Deactivated successfully.
Dec 13 01:53:34.972063 env[1413]: time="2024-12-13T01:53:34.971994647Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:53:34.981766 env[1413]: time="2024-12-13T01:53:34.981722925Z" level=info msg="StopContainer for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" with timeout 2 (s)"
Dec 13 01:53:34.982230 env[1413]: time="2024-12-13T01:53:34.982185433Z" level=info msg="Stop container \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" with signal terminated"
Dec 13 01:53:34.990367 systemd-networkd[1572]: lxc_health: Link DOWN
Dec 13 01:53:34.990375 systemd-networkd[1572]: lxc_health: Lost carrier
Dec 13 01:53:34.996993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea-rootfs.mount: Deactivated successfully.
Dec 13 01:53:35.013433 systemd[1]: cri-containerd-a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2.scope: Deactivated successfully.
Dec 13 01:53:35.013781 systemd[1]: cri-containerd-a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2.scope: Consumed 7.213s CPU time.
Dec 13 01:53:35.016361 env[1413]: time="2024-12-13T01:53:35.016301055Z" level=info msg="shim disconnected" id=c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea
Dec 13 01:53:35.016523 env[1413]: time="2024-12-13T01:53:35.016507559Z" level=warning msg="cleaning up after shim disconnected" id=c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea namespace=k8s.io
Dec 13 01:53:35.016711 env[1413]: time="2024-12-13T01:53:35.016586561Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:35.027814 env[1413]: time="2024-12-13T01:53:35.027780464Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4093 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:35.033924 env[1413]: time="2024-12-13T01:53:35.033881275Z" level=info msg="StopContainer for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" returns successfully"
Dec 13 01:53:35.034740 env[1413]: time="2024-12-13T01:53:35.034667689Z" level=info msg="StopPodSandbox for \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\""
Dec 13 01:53:35.034853 env[1413]: time="2024-12-13T01:53:35.034819492Z" level=info msg="Container to stop \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:35.037547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2-shm.mount: Deactivated successfully.
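The StopContainer entries above show the standard graceful-stop pattern: SIGTERM, wait out the grace period (30 s for one container, 2 s for the other), then SIGKILL. A minimal sketch of the same pattern using the containerd Go client; the kubelet actually drives this through CRI, so this standalone-client version is only an illustration, with error handling kept minimal:

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// stopWithTimeout sends SIGTERM, waits up to the grace period for the
// task to exit, and falls back to SIGKILL, mirroring the logged flow.
func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh:
		return nil // exited within the grace period
	case <-time.After(grace):
		return task.Kill(ctx, syscall.SIGKILL) // grace period expired
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Container ID and 30s grace period taken from the log above.
	if err := stopWithTimeout(ctx, client,
		"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea",
		30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```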
Dec 13 01:53:35.043785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2-rootfs.mount: Deactivated successfully.
Dec 13 01:53:35.052905 systemd[1]: cri-containerd-82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2.scope: Deactivated successfully.
Dec 13 01:53:35.055592 env[1413]: time="2024-12-13T01:53:35.055538469Z" level=info msg="shim disconnected" id=a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2
Dec 13 01:53:35.055751 env[1413]: time="2024-12-13T01:53:35.055698972Z" level=warning msg="cleaning up after shim disconnected" id=a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2 namespace=k8s.io
Dec 13 01:53:35.055751 env[1413]: time="2024-12-13T01:53:35.055720272Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:35.066828 env[1413]: time="2024-12-13T01:53:35.066775873Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4126 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:35.077326 env[1413]: time="2024-12-13T01:53:35.077276564Z" level=info msg="StopContainer for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" returns successfully"
Dec 13 01:53:35.078684 env[1413]: time="2024-12-13T01:53:35.078639889Z" level=info msg="StopPodSandbox for \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\""
Dec 13 01:53:35.078845 env[1413]: time="2024-12-13T01:53:35.078732890Z" level=info msg="Container to stop \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:35.078845 env[1413]: time="2024-12-13T01:53:35.078754491Z" level=info msg="Container to stop \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:35.078845 env[1413]: time="2024-12-13T01:53:35.078779291Z" level=info msg="Container to stop \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:35.078845 env[1413]: time="2024-12-13T01:53:35.078795091Z" level=info msg="Container to stop \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:35.078845 env[1413]: time="2024-12-13T01:53:35.078810792Z" level=info msg="Container to stop \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:35.088018 systemd[1]: cri-containerd-1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6.scope: Deactivated successfully.
Dec 13 01:53:35.098001 env[1413]: time="2024-12-13T01:53:35.097940139Z" level=info msg="shim disconnected" id=82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2
Dec 13 01:53:35.098231 env[1413]: time="2024-12-13T01:53:35.098194244Z" level=warning msg="cleaning up after shim disconnected" id=82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2 namespace=k8s.io
Dec 13 01:53:35.098231 env[1413]: time="2024-12-13T01:53:35.098216344Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:35.110871 env[1413]: time="2024-12-13T01:53:35.110802173Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4162 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:35.111275 env[1413]: time="2024-12-13T01:53:35.111233481Z" level=info msg="TearDown network for sandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" successfully"
Dec 13 01:53:35.111389 env[1413]: time="2024-12-13T01:53:35.111275082Z" level=info msg="StopPodSandbox for \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" returns successfully"
Dec 13 01:53:35.128229 env[1413]: time="2024-12-13T01:53:35.128175389Z" level=info msg="shim disconnected" id=1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6
Dec 13 01:53:35.128519 env[1413]: time="2024-12-13T01:53:35.128492595Z" level=warning msg="cleaning up after shim disconnected" id=1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6 namespace=k8s.io
Dec 13 01:53:35.128669 env[1413]: time="2024-12-13T01:53:35.128649098Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:35.138875 env[1413]: time="2024-12-13T01:53:35.138828483Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:35.139194 env[1413]: time="2024-12-13T01:53:35.139160589Z" level=info msg="TearDown network for sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" successfully"
Dec 13 01:53:35.139287 env[1413]: time="2024-12-13T01:53:35.139195489Z" level=info msg="StopPodSandbox for \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" returns successfully"
Dec 13 01:53:35.148741 kubelet[2490]: I1213 01:53:35.147989 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df36fed7-3d76-4f81-a275-46255556f48c-cilium-config-path\") pod \"df36fed7-3d76-4f81-a275-46255556f48c\" (UID: \"df36fed7-3d76-4f81-a275-46255556f48c\") "
Dec 13 01:53:35.148741 kubelet[2490]: I1213 01:53:35.148031 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9bc9dbf-5015-4510-9213-3412b58b39e0-clustermesh-secrets\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.148741 kubelet[2490]: I1213 01:53:35.148052 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-xtables-lock\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.148741 kubelet[2490]: I1213 01:53:35.148075 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-run\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.148741 kubelet[2490]: I1213 01:53:35.148099 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l6fb\" (UniqueName: \"kubernetes.io/projected/df36fed7-3d76-4f81-a275-46255556f48c-kube-api-access-4l6fb\") pod \"df36fed7-3d76-4f81-a275-46255556f48c\" (UID: \"df36fed7-3d76-4f81-a275-46255556f48c\") "
Dec 13 01:53:35.148741 kubelet[2490]: I1213 01:53:35.148120 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-cgroup\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149394 kubelet[2490]: I1213 01:53:35.148143 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-config-path\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149394 kubelet[2490]: I1213 01:53:35.148161 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-bpf-maps\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149394 kubelet[2490]: I1213 01:53:35.148178 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-hostproc\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149394 kubelet[2490]: I1213 01:53:35.148200 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-hubble-tls\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149394 kubelet[2490]: I1213 01:53:35.148224 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6vj\" (UniqueName: \"kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-kube-api-access-wt6vj\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149394 kubelet[2490]: I1213 01:53:35.148247 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-kernel\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149669 kubelet[2490]: I1213 01:53:35.148271 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cni-path\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149669 kubelet[2490]: I1213 01:53:35.148293 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-lib-modules\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149669 kubelet[2490]: I1213 01:53:35.148314 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-etc-cni-netd\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149669 kubelet[2490]: I1213 01:53:35.148334 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-net\") pod \"d9bc9dbf-5015-4510-9213-3412b58b39e0\" (UID: \"d9bc9dbf-5015-4510-9213-3412b58b39e0\") "
Dec 13 01:53:35.149669 kubelet[2490]: I1213 01:53:35.148396 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.152813 kubelet[2490]: I1213 01:53:35.148704 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.153783 kubelet[2490]: I1213 01:53:35.153756 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bc9dbf-5015-4510-9213-3412b58b39e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:53:35.153984 kubelet[2490]: I1213 01:53:35.153956 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.154088 kubelet[2490]: I1213 01:53:35.154074 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.158700 kubelet[2490]: I1213 01:53:35.154741 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.158700 kubelet[2490]: I1213 01:53:35.157362 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df36fed7-3d76-4f81-a275-46255556f48c-kube-api-access-4l6fb" (OuterVolumeSpecName: "kube-api-access-4l6fb") pod "df36fed7-3d76-4f81-a275-46255556f48c" (UID: "df36fed7-3d76-4f81-a275-46255556f48c"). InnerVolumeSpecName "kube-api-access-4l6fb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:35.158700 kubelet[2490]: I1213 01:53:35.157411 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.158700 kubelet[2490]: I1213 01:53:35.157434 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.159345 kubelet[2490]: I1213 01:53:35.159320 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.159510 kubelet[2490]: I1213 01:53:35.159491 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.159656 kubelet[2490]: I1213 01:53:35.159636 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:35.161922 kubelet[2490]: I1213 01:53:35.161895 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df36fed7-3d76-4f81-a275-46255556f48c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df36fed7-3d76-4f81-a275-46255556f48c" (UID: "df36fed7-3d76-4f81-a275-46255556f48c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:53:35.162523 kubelet[2490]: I1213 01:53:35.162498 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:53:35.162722 kubelet[2490]: I1213 01:53:35.162704 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:35.165789 kubelet[2490]: I1213 01:53:35.165566 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-kube-api-access-wt6vj" (OuterVolumeSpecName: "kube-api-access-wt6vj") pod "d9bc9dbf-5015-4510-9213-3412b58b39e0" (UID: "d9bc9dbf-5015-4510-9213-3412b58b39e0"). InnerVolumeSpecName "kube-api-access-wt6vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.248900 2490 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-xtables-lock\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.248941 2490 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4l6fb\" (UniqueName: \"kubernetes.io/projected/df36fed7-3d76-4f81-a275-46255556f48c-kube-api-access-4l6fb\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.248958 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-run\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.248973 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-cgroup\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.248987 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9bc9dbf-5015-4510-9213-3412b58b39e0-cilium-config-path\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.249001 2490 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wt6vj\" (UniqueName: \"kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-kube-api-access-wt6vj\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.249017 2490 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-bpf-maps\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.249663 kubelet[2490]: I1213 01:53:35.249031 2490 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-hostproc\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249044 2490 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9bc9dbf-5015-4510-9213-3412b58b39e0-hubble-tls\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249057 2490 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-cni-path\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249069 2490 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249082 2490 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-lib-modules\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249097 2490 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-etc-cni-netd\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249110 2490 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9bc9dbf-5015-4510-9213-3412b58b39e0-host-proc-sys-net\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249123 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df36fed7-3d76-4f81-a275-46255556f48c-cilium-config-path\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.250136 kubelet[2490]: I1213 01:53:35.249136 2490 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9bc9dbf-5015-4510-9213-3412b58b39e0-clustermesh-secrets\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:35.933960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6-rootfs.mount: Deactivated successfully.
Dec 13 01:53:35.934110 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6-shm.mount: Deactivated successfully.
Dec 13 01:53:35.934213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2-rootfs.mount: Deactivated successfully.
Dec 13 01:53:35.934302 systemd[1]: var-lib-kubelet-pods-d9bc9dbf\x2d5015\x2d4510\x2d9213\x2d3412b58b39e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwt6vj.mount: Deactivated successfully.
Dec 13 01:53:35.934379 systemd[1]: var-lib-kubelet-pods-d9bc9dbf\x2d5015\x2d4510\x2d9213\x2d3412b58b39e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:53:35.934461 systemd[1]: var-lib-kubelet-pods-d9bc9dbf\x2d5015\x2d4510\x2d9213\x2d3412b58b39e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:53:35.934542 systemd[1]: var-lib-kubelet-pods-df36fed7\x2d3d76\x2d4f81\x2da275\x2d46255556f48c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4l6fb.mount: Deactivated successfully.
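The `var-lib-kubelet-pods-…\x2d….mount` names above are systemd's path escaping at work: '/' separators become '-', and bytes outside the unit-name-safe set become `\xNN`, which is why '-' appears as `\x2d` and '~' as `\x7e`. A sketch of the common case in Go (systemd-escape(1) also handles leading dots and empty paths, which this skips; the exact kept-character set here is an assumption matched against the names in the log):

```go
package main

import "fmt"

// escapePath applies systemd-style path escaping: strip enclosing
// slashes, map '/' to '-', keep [a-zA-Z0-9:_.], and hex-escape the rest.
func escapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	for len(p) > 0 && p[len(p)-1] == '/' {
		p = p[:len(p)-1]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	// Reproduces the wt6vj mount-unit name seen in the journal above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/d9bc9dbf-5015-4510-9213-3412b58b39e0/volumes/kubernetes.io~projected/kube-api-access-wt6vj") + ".mount")
}
```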
Dec 13 01:53:36.005441 kubelet[2490]: I1213 01:53:36.005409 2490 scope.go:117] "RemoveContainer" containerID="a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2"
Dec 13 01:53:36.010153 systemd[1]: Removed slice kubepods-burstable-podd9bc9dbf_5015_4510_9213_3412b58b39e0.slice.
Dec 13 01:53:36.010292 systemd[1]: kubepods-burstable-podd9bc9dbf_5015_4510_9213_3412b58b39e0.slice: Consumed 7.314s CPU time.
Dec 13 01:53:36.011005 env[1413]: time="2024-12-13T01:53:36.010668531Z" level=info msg="RemoveContainer for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\""
Dec 13 01:53:36.017736 systemd[1]: Removed slice kubepods-besteffort-poddf36fed7_3d76_4f81_a275_46255556f48c.slice.
Dec 13 01:53:36.021558 env[1413]: time="2024-12-13T01:53:36.021511227Z" level=info msg="RemoveContainer for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" returns successfully"
Dec 13 01:53:36.021816 kubelet[2490]: I1213 01:53:36.021791 2490 scope.go:117] "RemoveContainer" containerID="7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6"
Dec 13 01:53:36.023303 env[1413]: time="2024-12-13T01:53:36.023267559Z" level=info msg="RemoveContainer for \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\""
Dec 13 01:53:36.033068 env[1413]: time="2024-12-13T01:53:36.033033835Z" level=info msg="RemoveContainer for \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\" returns successfully"
Dec 13 01:53:36.033343 kubelet[2490]: I1213 01:53:36.033308 2490 scope.go:117] "RemoveContainer" containerID="50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4"
Dec 13 01:53:36.035062 env[1413]: time="2024-12-13T01:53:36.035029971Z" level=info msg="RemoveContainer for \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\""
Dec 13 01:53:36.043266 env[1413]: time="2024-12-13T01:53:36.043226519Z" level=info msg="RemoveContainer for \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\" returns successfully"
Dec 13 01:53:36.043684 kubelet[2490]: I1213 01:53:36.043665 2490 scope.go:117] "RemoveContainer" containerID="34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a"
Dec 13 01:53:36.045075 env[1413]: time="2024-12-13T01:53:36.044753847Z" level=info msg="RemoveContainer for \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\""
Dec 13 01:53:36.054987 env[1413]: time="2024-12-13T01:53:36.054951631Z" level=info msg="RemoveContainer for \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\" returns successfully"
Dec 13 01:53:36.055228 kubelet[2490]: I1213 01:53:36.055145 2490 scope.go:117] "RemoveContainer" containerID="c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42"
Dec 13 01:53:36.056315 env[1413]: time="2024-12-13T01:53:36.056286955Z" level=info msg="RemoveContainer for \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\""
Dec 13 01:53:36.064722 env[1413]: time="2024-12-13T01:53:36.064691407Z" level=info msg="RemoveContainer for \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\" returns successfully"
Dec 13 01:53:36.064919 kubelet[2490]: I1213 01:53:36.064899 2490 scope.go:117] "RemoveContainer" containerID="a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2"
Dec 13 01:53:36.065252 env[1413]: time="2024-12-13T01:53:36.065156615Z" level=error msg="ContainerStatus for \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\": not found"
Dec 13 01:53:36.065454 kubelet[2490]: E1213 01:53:36.065424 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\": not found" containerID="a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2"
Dec 13 01:53:36.065545 kubelet[2490]: I1213 01:53:36.065458 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2"} err="failed to get container status \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5458e0bab9746799b2b4feba4f87c1d28c8a87f41ab39076587b5eb892700d2\": not found"
Dec 13 01:53:36.065636 kubelet[2490]: I1213 01:53:36.065551 2490 scope.go:117] "RemoveContainer" containerID="7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6"
Dec 13 01:53:36.065806 env[1413]: time="2024-12-13T01:53:36.065746626Z" level=error msg="ContainerStatus for \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\": not found"
Dec 13 01:53:36.065914 kubelet[2490]: E1213 01:53:36.065889 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\": not found" containerID="7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6"
Dec 13 01:53:36.065977 kubelet[2490]: I1213 01:53:36.065933 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6"} err="failed to get container status \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dd84a452df42ee189beb685698da3f29b245428df497580cb57b5a7240c6cc6\": not found"
Dec 13 01:53:36.065977 kubelet[2490]: I1213 01:53:36.065956 2490 scope.go:117] "RemoveContainer" containerID="50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4"
Dec 13 01:53:36.066240 env[1413]: time="2024-12-13T01:53:36.066193234Z" level=error msg="ContainerStatus for \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\": not found"
Dec 13 01:53:36.066373 kubelet[2490]: E1213 01:53:36.066347 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\": not found" containerID="50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4"
Dec 13 01:53:36.066444 kubelet[2490]: I1213 01:53:36.066383 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4"} err="failed to get container status \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"50f5c034a37ed94624ca521395b16e15246b5072cd6e950e714ca2fd0201b5b4\": not found"
Dec 13 01:53:36.066444 kubelet[2490]: I1213 01:53:36.066404 2490 scope.go:117] "RemoveContainer" containerID="34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a"
Dec 13 01:53:36.066663 env[1413]: time="2024-12-13T01:53:36.066585341Z" level=error msg="ContainerStatus for \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\": not found"
Dec 13 01:53:36.066801 kubelet[2490]: E1213 01:53:36.066775 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\": not found" containerID="34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a"
Dec 13 01:53:36.066874 kubelet[2490]: I1213 01:53:36.066810 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a"} err="failed to get container status \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\": rpc error: code = NotFound desc = an error occurred when try to find container \"34ef3cb3852fe1b91077c5a1b03c95607797b812741f8525cc96986bf1a9285a\": not found"
Dec 13 01:53:36.066874 kubelet[2490]: I1213 01:53:36.066832 2490 scope.go:117] "RemoveContainer" containerID="c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42"
Dec 13 01:53:36.067141 env[1413]: time="2024-12-13T01:53:36.067093850Z" level=error msg="ContainerStatus for \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\": not found"
Dec 13 01:53:36.067334 kubelet[2490]: E1213 01:53:36.067306 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\": not found" containerID="c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42"
Dec 13 01:53:36.067410 kubelet[2490]: I1213 01:53:36.067327 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42"} err="failed to get container status \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\": rpc error: code = NotFound desc = an error occurred when try to find container \"c78291e1d119521f38432852dff608e81bcb8dd6a3091cecf12ae6958d8fca42\": not found"
Dec 13 01:53:36.067410 kubelet[2490]: I1213 01:53:36.067370 2490 scope.go:117] "RemoveContainer" containerID="c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea"
Dec 13 01:53:36.068520 env[1413]: time="2024-12-13T01:53:36.068485276Z" level=info msg="RemoveContainer for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\""
Dec 13 01:53:36.074828 env[1413]: time="2024-12-13T01:53:36.074797790Z" level=info msg="RemoveContainer for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" returns successfully"
Dec 13 01:53:36.075029 kubelet[2490]: I1213 01:53:36.074995 2490 scope.go:117] "RemoveContainer" containerID="c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea"
Dec 13 01:53:36.075319 env[1413]: time="2024-12-13T01:53:36.075264298Z" level=error msg="ContainerStatus for \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\": not found"
Dec 13 01:53:36.075520 kubelet[2490]: E1213 01:53:36.075494 2490 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\": not found" containerID="c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea"
Dec 13 01:53:36.075587 kubelet[2490]: I1213 01:53:36.075519 2490 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea"} err="failed to get container status \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"c76cd6fe2e367b4c1a2b5d9edc78e2e2ddd3da04c2bf96bedf1b9eeac14940ea\": not found"
Dec 13 01:53:36.414457 kubelet[2490]: I1213 01:53:36.414417 2490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" path="/var/lib/kubelet/pods/d9bc9dbf-5015-4510-9213-3412b58b39e0/volumes"
Dec 13 01:53:36.415192 kubelet[2490]: I1213 01:53:36.415166 2490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df36fed7-3d76-4f81-a275-46255556f48c" path="/var/lib/kubelet/pods/df36fed7-3d76-4f81-a275-46255556f48c/volumes"
Dec 13 01:53:36.953040 sshd[4036]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:36.956986 systemd[1]: sshd@21-10.200.8.23:22-10.200.16.10:33808.service: Deactivated successfully.
Dec 13 01:53:36.958243 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:53:36.958753 systemd-logind[1403]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:53:36.959732 systemd-logind[1403]: Removed session 24.
Dec 13 01:53:37.057683 systemd[1]: Started sshd@22-10.200.8.23:22-10.200.16.10:33824.service.
Dec 13 01:53:37.680961 sshd[4202]: Accepted publickey for core from 10.200.16.10 port 33824 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:37.682543 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:37.689273 systemd[1]: Started session-25.scope.
Dec 13 01:53:37.689934 systemd-logind[1403]: New session 25 of user core.
Dec 13 01:53:38.576815 kubelet[2490]: I1213 01:53:38.576763 2490 topology_manager.go:215] "Topology Admit Handler" podUID="beee48c4-4c2e-4b85-b61f-7d77372dc3ec" podNamespace="kube-system" podName="cilium-9cq46"
Dec 13 01:53:38.577340 kubelet[2490]: E1213 01:53:38.576835 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df36fed7-3d76-4f81-a275-46255556f48c" containerName="cilium-operator"
Dec 13 01:53:38.577340 kubelet[2490]: E1213 01:53:38.576849 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" containerName="mount-cgroup"
Dec 13 01:53:38.577340 kubelet[2490]: E1213 01:53:38.576857 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" containerName="apply-sysctl-overwrites"
Dec 13 01:53:38.577340 kubelet[2490]: E1213 01:53:38.576864 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" containerName="clean-cilium-state"
Dec 13 01:53:38.577340 kubelet[2490]: E1213 01:53:38.576872 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" containerName="cilium-agent"
Dec 13 01:53:38.577340 kubelet[2490]: E1213 01:53:38.576882 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" containerName="mount-bpf-fs"
Dec 13 01:53:38.577340 kubelet[2490]: I1213 01:53:38.576915 2490 memory_manager.go:354] "RemoveStaleState removing state" podUID="df36fed7-3d76-4f81-a275-46255556f48c" containerName="cilium-operator"
Dec 13 01:53:38.577340 kubelet[2490]: I1213 01:53:38.576923 2490 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bc9dbf-5015-4510-9213-3412b58b39e0" containerName="cilium-agent"
Dec 13 01:53:38.585259 systemd[1]: Created slice kubepods-burstable-podbeee48c4_4c2e_4b85_b61f_7d77372dc3ec.slice.
Dec 13 01:53:38.635416 sshd[4202]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:38.638356 systemd[1]: sshd@22-10.200.8.23:22-10.200.16.10:33824.service: Deactivated successfully.
Dec 13 01:53:38.639869 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:53:38.640807 systemd-logind[1403]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:53:38.641699 systemd-logind[1403]: Removed session 25.
Dec 13 01:53:38.739589 systemd[1]: Started sshd@23-10.200.8.23:22-10.200.16.10:35172.service.
Dec 13 01:53:38.772687 kubelet[2490]: I1213 01:53:38.772631 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hostproc\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772687 kubelet[2490]: I1213 01:53:38.772688 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-config-path\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772958 kubelet[2490]: I1213 01:53:38.772718 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cni-path\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772958 kubelet[2490]: I1213 01:53:38.772744 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-lib-modules\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772958 kubelet[2490]: I1213 01:53:38.772768 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-ipsec-secrets\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772958 kubelet[2490]: I1213 01:53:38.772795 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-clustermesh-secrets\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772958 kubelet[2490]: I1213 01:53:38.772825 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-etc-cni-netd\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.772958 kubelet[2490]: I1213 01:53:38.772852 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k95x7\" (UniqueName: \"kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-kube-api-access-k95x7\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773327 kubelet[2490]: I1213 01:53:38.772879 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-cgroup\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773327 kubelet[2490]: I1213 01:53:38.772906 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-xtables-lock\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773327 kubelet[2490]: I1213 01:53:38.772933 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-net\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773327 kubelet[2490]: I1213 01:53:38.772962 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-kernel\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773327 kubelet[2490]: I1213 01:53:38.773033 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-run\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773327 kubelet[2490]: I1213 01:53:38.773066 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-bpf-maps\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:38.773515 kubelet[2490]: I1213 01:53:38.773093 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hubble-tls\") pod \"cilium-9cq46\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") " pod="kube-system/cilium-9cq46"
Dec 13 01:53:39.189710 env[1413]: time="2024-12-13T01:53:39.189649779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cq46,Uid:beee48c4-4c2e-4b85-b61f-7d77372dc3ec,Namespace:kube-system,Attempt:0,}"
Dec 13 01:53:39.219126 env[1413]: time="2024-12-13T01:53:39.219051801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:53:39.219126 env[1413]: time="2024-12-13T01:53:39.219089001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:53:39.219361 env[1413]: time="2024-12-13T01:53:39.219324806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:53:39.219650 env[1413]: time="2024-12-13T01:53:39.219555510Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40 pid=4231 runtime=io.containerd.runc.v2
Dec 13 01:53:39.233302 systemd[1]: Started cri-containerd-c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40.scope.
Dec 13 01:53:39.262751 env[1413]: time="2024-12-13T01:53:39.262686675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cq46,Uid:beee48c4-4c2e-4b85-b61f-7d77372dc3ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\""
Dec 13 01:53:39.266980 env[1413]: time="2024-12-13T01:53:39.266943950Z" level=info msg="CreateContainer within sandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:53:39.306766 env[1413]: time="2024-12-13T01:53:39.306718856Z" level=info msg="CreateContainer within sandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\""
Dec 13 01:53:39.307545 env[1413]: time="2024-12-13T01:53:39.307510970Z" level=info msg="StartContainer for \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\""
Dec 13 01:53:39.327185 systemd[1]: Started cri-containerd-bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523.scope.
Dec 13 01:53:39.342938 systemd[1]: cri-containerd-bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523.scope: Deactivated successfully.
Dec 13 01:53:39.366134 sshd[4215]: Accepted publickey for core from 10.200.16.10 port 35172 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:39.367764 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:39.373271 systemd[1]: Started session-26.scope.
Dec 13 01:53:39.373887 systemd-logind[1403]: New session 26 of user core.
Dec 13 01:53:39.419200 env[1413]: time="2024-12-13T01:53:39.419107350Z" level=info msg="shim disconnected" id=bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523
Dec 13 01:53:39.419200 env[1413]: time="2024-12-13T01:53:39.419197552Z" level=warning msg="cleaning up after shim disconnected" id=bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523 namespace=k8s.io
Dec 13 01:53:39.419493 env[1413]: time="2024-12-13T01:53:39.419210452Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:39.427541 env[1413]: time="2024-12-13T01:53:39.427499099Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4290 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T01:53:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 01:53:39.427937 env[1413]: time="2024-12-13T01:53:39.427832605Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Dec 13 01:53:39.429725 env[1413]: time="2024-12-13T01:53:39.429672738Z" level=error msg="Failed to pipe stderr of container \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\"" error="reading from a closed fifo"
Dec 13 01:53:39.429725 env[1413]: time="2024-12-13T01:53:39.429683138Z" level=error msg="Failed to pipe stdout of container \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\"" error="reading from a closed fifo"
Dec 13 01:53:39.434158 env[1413]: time="2024-12-13T01:53:39.433994115Z" level=error msg="StartContainer for \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 01:53:39.434641 kubelet[2490]: E1213 01:53:39.434562 2490 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523"
Dec 13 01:53:39.436354 kubelet[2490]: E1213 01:53:39.435141 2490 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 01:53:39.436354 kubelet[2490]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 01:53:39.436354 kubelet[2490]: rm /hostbin/cilium-mount
Dec 13 01:53:39.436534 kubelet[2490]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k95x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9cq46_kube-system(beee48c4-4c2e-4b85-b61f-7d77372dc3ec): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 01:53:39.436534 kubelet[2490]: E1213 01:53:39.435200 2490 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9cq46" podUID="beee48c4-4c2e-4b85-b61f-7d77372dc3ec"
Dec 13 01:53:39.503339 kubelet[2490]: E1213 01:53:39.503192 2490 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:53:39.858271 kubelet[2490]: I1213 01:53:39.858219 2490 setters.go:580] "Node became not ready" node="ci-3510.3.6-a-1addd118d4" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:53:39Z","lastTransitionTime":"2024-12-13T01:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:53:39.892820 sshd[4215]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:39.895825 systemd-logind[1403]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:53:39.898993 systemd[1]: sshd@23-10.200.8.23:22-10.200.16.10:35172.service: Deactivated successfully.
Dec 13 01:53:39.899934 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:53:39.901396 systemd-logind[1403]: Removed session 26.
Dec 13 01:53:39.993732 systemd[1]: Started sshd@24-10.200.8.23:22-10.200.16.10:35182.service.
Dec 13 01:53:40.024191 env[1413]: time="2024-12-13T01:53:40.024130383Z" level=info msg="StopPodSandbox for \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\""
Dec 13 01:53:40.024480 env[1413]: time="2024-12-13T01:53:40.024439889Z" level=info msg="Container to stop \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:40.030405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40-shm.mount: Deactivated successfully.
Dec 13 01:53:40.036757 systemd[1]: cri-containerd-c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40.scope: Deactivated successfully.
Dec 13 01:53:40.066309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40-rootfs.mount: Deactivated successfully.
Dec 13 01:53:40.080987 env[1413]: time="2024-12-13T01:53:40.080918785Z" level=info msg="shim disconnected" id=c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40
Dec 13 01:53:40.080987 env[1413]: time="2024-12-13T01:53:40.080983086Z" level=warning msg="cleaning up after shim disconnected" id=c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40 namespace=k8s.io
Dec 13 01:53:40.081274 env[1413]: time="2024-12-13T01:53:40.080995486Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:40.089362 env[1413]: time="2024-12-13T01:53:40.089321133Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4331 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:40.089709 env[1413]: time="2024-12-13T01:53:40.089671339Z" level=info msg="TearDown network for sandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" successfully"
Dec 13 01:53:40.089814 env[1413]: time="2024-12-13T01:53:40.089706240Z" level=info msg="StopPodSandbox for \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" returns successfully"
Dec 13 01:53:40.280947 kubelet[2490]: I1213 01:53:40.280781 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-net\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.280947 kubelet[2490]: I1213 01:53:40.280862 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-bpf-maps\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.280947 kubelet[2490]: I1213 01:53:40.280911 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-ipsec-secrets\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.280947 kubelet[2490]: I1213 01:53:40.280941 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-clustermesh-secrets\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.280987 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-run\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281009 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-lib-modules\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281033 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-cgroup\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281071 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-xtables-lock\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281098 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-kernel\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281138 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cni-path\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281199 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k95x7\" (UniqueName: \"kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-kube-api-access-k95x7\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281250 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hubble-tls\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281274 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hostproc\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281322 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-config-path\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281376 kubelet[2490]: I1213 01:53:40.281352 2490 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-etc-cni-netd\") pod \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\" (UID: \"beee48c4-4c2e-4b85-b61f-7d77372dc3ec\") "
Dec 13 01:53:40.281985 kubelet[2490]: I1213 01:53:40.281475 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.281985 kubelet[2490]: I1213 01:53:40.281522 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.282200 kubelet[2490]: I1213 01:53:40.282170 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.284645 kubelet[2490]: I1213 01:53:40.284589 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cni-path" (OuterVolumeSpecName: "cni-path") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.285462 kubelet[2490]: I1213 01:53:40.285431 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.285891 kubelet[2490]: I1213 01:53:40.285637 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.286294 kubelet[2490]: I1213 01:53:40.285656 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.286294 kubelet[2490]: I1213 01:53:40.285681 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.286512 kubelet[2490]: I1213 01:53:40.285692 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.286512 kubelet[2490]: I1213 01:53:40.285709 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hostproc" (OuterVolumeSpecName: "hostproc") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:40.288762 kubelet[2490]: I1213 01:53:40.288694 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:53:40.292117 systemd[1]: var-lib-kubelet-pods-beee48c4\x2d4c2e\x2d4b85\x2db61f\x2d7d77372dc3ec-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:53:40.296729 systemd[1]: var-lib-kubelet-pods-beee48c4\x2d4c2e\x2d4b85\x2db61f\x2d7d77372dc3ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk95x7.mount: Deactivated successfully.
Dec 13 01:53:40.298371 kubelet[2490]: I1213 01:53:40.298344 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:53:40.298609 kubelet[2490]: I1213 01:53:40.298575 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-kube-api-access-k95x7" (OuterVolumeSpecName: "kube-api-access-k95x7") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "kube-api-access-k95x7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:40.298855 kubelet[2490]: I1213 01:53:40.298833 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:53:40.300851 kubelet[2490]: I1213 01:53:40.300804 2490 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "beee48c4-4c2e-4b85-b61f-7d77372dc3ec" (UID: "beee48c4-4c2e-4b85-b61f-7d77372dc3ec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:40.382708 kubelet[2490]: I1213 01:53:40.382570 2490 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k95x7\" (UniqueName: \"kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-kube-api-access-k95x7\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.382708 kubelet[2490]: I1213 01:53:40.382688 2490 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hubble-tls\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382729 2490 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-hostproc\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382764 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-config-path\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382798 2490 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-etc-cni-netd\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382829 2490 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-net\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382868 2490 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-bpf-maps\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382904 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382940 2490 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-clustermesh-secrets\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.382979 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-run\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.383014 2490 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-lib-modules\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.383048 2490 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cilium-cgroup\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383147 kubelet[2490]: I1213 01:53:40.383115 2490 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-xtables-lock\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383746 kubelet[2490]: I1213 01:53:40.383296 2490 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.383746 kubelet[2490]: I1213 01:53:40.383336 2490 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beee48c4-4c2e-4b85-b61f-7d77372dc3ec-cni-path\") on node \"ci-3510.3.6-a-1addd118d4\" DevicePath \"\""
Dec 13 01:53:40.417475 systemd[1]: Removed slice kubepods-burstable-podbeee48c4_4c2e_4b85_b61f_7d77372dc3ec.slice.
Dec 13 01:53:40.618234 sshd[4311]: Accepted publickey for core from 10.200.16.10 port 35182 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:40.620004 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:40.625648 systemd[1]: Started session-27.scope.
Dec 13 01:53:40.626104 systemd-logind[1403]: New session 27 of user core.
Dec 13 01:53:40.885481 systemd[1]: var-lib-kubelet-pods-beee48c4\x2d4c2e\x2d4b85\x2db61f\x2d7d77372dc3ec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:53:40.885622 systemd[1]: var-lib-kubelet-pods-beee48c4\x2d4c2e\x2d4b85\x2db61f\x2d7d77372dc3ec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:53:41.026532 kubelet[2490]: I1213 01:53:41.026488 2490 scope.go:117] "RemoveContainer" containerID="bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523"
Dec 13 01:53:41.030583 env[1413]: time="2024-12-13T01:53:41.030537932Z" level=info msg="RemoveContainer for \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\""
Dec 13 01:53:41.040872 env[1413]: time="2024-12-13T01:53:41.040782112Z" level=info msg="RemoveContainer for \"bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523\" returns successfully"
Dec 13 01:53:41.092657 kubelet[2490]: I1213 01:53:41.092615 2490 topology_manager.go:215] "Topology Admit Handler" podUID="41a1c08f-ef0f-467a-8681-c6d7bd1c56c3" podNamespace="kube-system" podName="cilium-m8jv4"
Dec 13 01:53:41.092965 kubelet[2490]: E1213 01:53:41.092940 2490 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="beee48c4-4c2e-4b85-b61f-7d77372dc3ec" containerName="mount-cgroup"
Dec 13 01:53:41.093108 kubelet[2490]: I1213 01:53:41.093094 2490 memory_manager.go:354] "RemoveStaleState removing state" podUID="beee48c4-4c2e-4b85-b61f-7d77372dc3ec" containerName="mount-cgroup"
Dec 13 01:53:41.100904 systemd[1]: Created slice kubepods-burstable-pod41a1c08f_ef0f_467a_8681_c6d7bd1c56c3.slice.
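The var-lib-kubelet-pods-....mount unit names above use systemd's path escaping: "/" becomes "-", a literal "-" becomes "\x2d", and "~" becomes "\x7e". A small Go sketch (equivalent in spirit to systemd-escape --unescape, not systemd's own code) that maps one of those unit names back to the kubelet volume path it guards:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's path escaping for a .mount unit name:
// "-" separates path components and "\xNN" encodes a literal byte.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			n, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(n))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	// Prints /var/lib/kubelet/pods/beee48c4-.../volumes/kubernetes.io~secret/cilium-ipsec-secrets
	fmt.Println(unescapeUnit(`var-lib-kubelet-pods-beee48c4\x2d4c2e\x2d4b85\x2db61f\x2d7d77372dc3ec-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount`))
}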
Dec 13 01:53:41.187903 kubelet[2490]: I1213 01:53:41.187785 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-xtables-lock\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.187903 kubelet[2490]: I1213 01:53:41.187825 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-host-proc-sys-net\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.187903 kubelet[2490]: I1213 01:53:41.187850 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-hostproc\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.187903 kubelet[2490]: I1213 01:53:41.187871 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-etc-cni-netd\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188443 kubelet[2490]: I1213 01:53:41.188284 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-hubble-tls\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188443 kubelet[2490]: I1213 01:53:41.188324 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-cni-path\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188443 kubelet[2490]: I1213 01:53:41.188346 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-cilium-ipsec-secrets\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188443 kubelet[2490]: I1213 01:53:41.188370 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-cilium-run\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188443 kubelet[2490]: I1213 01:53:41.188396 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpdrr\" (UniqueName: \"kubernetes.io/projected/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-kube-api-access-kpdrr\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188443 kubelet[2490]: I1213 01:53:41.188424 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-lib-modules\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188712 kubelet[2490]: I1213 01:53:41.188445 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-clustermesh-secrets\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188712 kubelet[2490]: I1213 01:53:41.188474 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-cilium-config-path\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188712 kubelet[2490]: I1213 01:53:41.188499 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-host-proc-sys-kernel\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188712 kubelet[2490]: I1213 01:53:41.188521 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-bpf-maps\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.188712 kubelet[2490]: I1213 01:53:41.188545 2490 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41a1c08f-ef0f-467a-8681-c6d7bd1c56c3-cilium-cgroup\") pod \"cilium-m8jv4\" (UID: \"41a1c08f-ef0f-467a-8681-c6d7bd1c56c3\") " pod="kube-system/cilium-m8jv4"
Dec 13 01:53:41.404920 env[1413]: time="2024-12-13T01:53:41.404869897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8jv4,Uid:41a1c08f-ef0f-467a-8681-c6d7bd1c56c3,Namespace:kube-system,Attempt:0,}"
Dec 13 01:53:41.439021 env[1413]: time="2024-12-13T01:53:41.438885093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:53:41.439188 env[1413]: time="2024-12-13T01:53:41.438922994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:53:41.439188 env[1413]: time="2024-12-13T01:53:41.438951494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:53:41.439928 env[1413]: time="2024-12-13T01:53:41.439883911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1 pid=4367 runtime=io.containerd.runc.v2
Dec 13 01:53:41.451988 systemd[1]: Started cri-containerd-c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1.scope.
Dec 13 01:53:41.478473 env[1413]: time="2024-12-13T01:53:41.478420686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8jv4,Uid:41a1c08f-ef0f-467a-8681-c6d7bd1c56c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\""
Dec 13 01:53:41.483016 env[1413]: time="2024-12-13T01:53:41.482978466Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:53:41.511716 env[1413]: time="2024-12-13T01:53:41.511672869Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3\""
Dec 13 01:53:41.513315 env[1413]: time="2024-12-13T01:53:41.512315081Z" level=info msg="StartContainer for \"6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3\""
Dec 13 01:53:41.529425 systemd[1]: Started cri-containerd-6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3.scope.
Dec 13 01:53:41.560619 env[1413]: time="2024-12-13T01:53:41.560562027Z" level=info msg="StartContainer for \"6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3\" returns successfully"
Dec 13 01:53:41.566664 systemd[1]: cri-containerd-6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3.scope: Deactivated successfully.
Dec 13 01:53:41.618267 env[1413]: time="2024-12-13T01:53:41.618206038Z" level=info msg="shim disconnected" id=6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3
Dec 13 01:53:41.618267 env[1413]: time="2024-12-13T01:53:41.618269739Z" level=warning msg="cleaning up after shim disconnected" id=6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3 namespace=k8s.io
Dec 13 01:53:41.618581 env[1413]: time="2024-12-13T01:53:41.618281639Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:41.626980 env[1413]: time="2024-12-13T01:53:41.626931691Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4452 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:42.034065 env[1413]: time="2024-12-13T01:53:42.033991726Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:53:42.063955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518787374.mount: Deactivated successfully.
Dec 13 01:53:42.072053 env[1413]: time="2024-12-13T01:53:42.071998088Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112\""
Dec 13 01:53:42.072757 env[1413]: time="2024-12-13T01:53:42.072718901Z" level=info msg="StartContainer for \"737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112\""
Dec 13 01:53:42.094154 systemd[1]: Started cri-containerd-737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112.scope.
Dec 13 01:53:42.126049 env[1413]: time="2024-12-13T01:53:42.125889428Z" level=info msg="StartContainer for \"737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112\" returns successfully"
Dec 13 01:53:42.130915 systemd[1]: cri-containerd-737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112.scope: Deactivated successfully.
Dec 13 01:53:42.159310 env[1413]: time="2024-12-13T01:53:42.159255909Z" level=info msg="shim disconnected" id=737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112
Dec 13 01:53:42.159582 env[1413]: time="2024-12-13T01:53:42.159313610Z" level=warning msg="cleaning up after shim disconnected" id=737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112 namespace=k8s.io
Dec 13 01:53:42.159582 env[1413]: time="2024-12-13T01:53:42.159326011Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:42.166766 env[1413]: time="2024-12-13T01:53:42.166730740Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4513 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:42.414008 kubelet[2490]: I1213 01:53:42.413964 2490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beee48c4-4c2e-4b85-b61f-7d77372dc3ec" path="/var/lib/kubelet/pods/beee48c4-4c2e-4b85-b61f-7d77372dc3ec/volumes"
Dec 13 01:53:42.524916 kubelet[2490]: W1213 01:53:42.524860 2490 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeee48c4_4c2e_4b85_b61f_7d77372dc3ec.slice/cri-containerd-bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523.scope WatchSource:0}: container "bf56e478a6e92a1ad96674114dbd0cc54b2e2e334b28408de0ec7305d5da5523" in namespace "k8s.io": not found
Dec 13 01:53:42.885464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112-rootfs.mount: Deactivated successfully.
Dec 13 01:53:43.041261 env[1413]: time="2024-12-13T01:53:43.041206582Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:53:43.071324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039828291.mount: Deactivated successfully.
Dec 13 01:53:43.083002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880994186.mount: Deactivated successfully.
Dec 13 01:53:43.094444 env[1413]: time="2024-12-13T01:53:43.094393804Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7\""
Dec 13 01:53:43.096198 env[1413]: time="2024-12-13T01:53:43.096162335Z" level=info msg="StartContainer for \"ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7\""
Dec 13 01:53:43.116340 systemd[1]: Started cri-containerd-ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7.scope.
Dec 13 01:53:43.160267 env[1413]: time="2024-12-13T01:53:43.160144844Z" level=info msg="StartContainer for \"ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7\" returns successfully"
Dec 13 01:53:43.178376 systemd[1]: cri-containerd-ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7.scope: Deactivated successfully.
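The "Cleaned up orphaned pod volumes dir" record above is the kubelet removing /var/lib/kubelet/pods/beee48c4-.../volumes once the pod object is gone. A throwaway sketch for listing which pod UIDs still hold state under that directory (run as root on the node); it only reads the directory layout shown in the log, nothing kubelet-specific:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Each entry is a pod UID, e.g. beee48c4-4c2e-4b85-b61f-7d77372dc3ec.
	entries, err := os.ReadDir("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}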
Dec 13 01:53:43.225865 env[1413]: time="2024-12-13T01:53:43.225807782Z" level=info msg="shim disconnected" id=ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7
Dec 13 01:53:43.226262 env[1413]: time="2024-12-13T01:53:43.226234789Z" level=warning msg="cleaning up after shim disconnected" id=ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7 namespace=k8s.io
Dec 13 01:53:43.226369 env[1413]: time="2024-12-13T01:53:43.226350491Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:43.239721 env[1413]: time="2024-12-13T01:53:43.239679222Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4570 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:44.044580 env[1413]: time="2024-12-13T01:53:44.044528970Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:53:44.071403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530472139.mount: Deactivated successfully.
Dec 13 01:53:44.085088 env[1413]: time="2024-12-13T01:53:44.085040368Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f\""
Dec 13 01:53:44.085789 env[1413]: time="2024-12-13T01:53:44.085755481Z" level=info msg="StartContainer for \"f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f\""
Dec 13 01:53:44.108628 systemd[1]: Started cri-containerd-f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f.scope.
Dec 13 01:53:44.135786 systemd[1]: cri-containerd-f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f.scope: Deactivated successfully.
Dec 13 01:53:44.139170 env[1413]: time="2024-12-13T01:53:44.139122900Z" level=info msg="StartContainer for \"f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f\" returns successfully"
Dec 13 01:53:44.168186 env[1413]: time="2024-12-13T01:53:44.168122300Z" level=info msg="shim disconnected" id=f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f
Dec 13 01:53:44.168186 env[1413]: time="2024-12-13T01:53:44.168182401Z" level=warning msg="cleaning up after shim disconnected" id=f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f namespace=k8s.io
Dec 13 01:53:44.168186 env[1413]: time="2024-12-13T01:53:44.168194901Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:44.175874 env[1413]: time="2024-12-13T01:53:44.175836833Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4628 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:44.504747 kubelet[2490]: E1213 01:53:44.504689 2490 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:53:44.885659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f-rootfs.mount: Deactivated successfully.
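Each init container so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) follows the same lifecycle in these records: StartContainer returns, the scope deactivates, and containerd logs "shim disconnected" as it reaps the task. A throwaway Go sketch, not part of containerd, for listing those container IDs from a journal excerpt piped on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		// Match containerd's "shim disconnected" records and print the id= field.
		if strings.Contains(line, `msg="shim disconnected"`) {
			if i := strings.Index(line, "id="); i >= 0 {
				fmt.Println(strings.Fields(line[i+3:])[0])
			}
		}
	}
}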
Dec 13 01:53:45.049674 env[1413]: time="2024-12-13T01:53:45.049620889Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:53:45.077839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833730919.mount: Deactivated successfully.
Dec 13 01:53:45.089552 env[1413]: time="2024-12-13T01:53:45.089505573Z" level=info msg="CreateContainer within sandbox \"c0def376c1740f5b50968abbd6263f6665b6164c531fed08cec892488e582ab1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13\""
Dec 13 01:53:45.091192 env[1413]: time="2024-12-13T01:53:45.090204885Z" level=info msg="StartContainer for \"c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13\""
Dec 13 01:53:45.111565 systemd[1]: Started cri-containerd-c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13.scope.
Dec 13 01:53:45.148559 env[1413]: time="2024-12-13T01:53:45.148446683Z" level=info msg="StartContainer for \"c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13\" returns successfully"
Dec 13 01:53:45.558633 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:53:45.639621 kubelet[2490]: W1213 01:53:45.637823 2490 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a1c08f_ef0f_467a_8681_c6d7bd1c56c3.slice/cri-containerd-6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3.scope WatchSource:0}: task 6b57577016c5183fbf3137448ba6f172864202191fd8a6af4aa42f25f0e588f3 not found: not found
Dec 13 01:53:47.414256 systemd[1]: run-containerd-runc-k8s.io-c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13-runc.xjt8Yg.mount: Deactivated successfully.
Dec 13 01:53:48.335187 systemd-networkd[1572]: lxc_health: Link UP
Dec 13 01:53:48.356624 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:53:48.358779 systemd-networkd[1572]: lxc_health: Gained carrier
Dec 13 01:53:48.749613 kubelet[2490]: W1213 01:53:48.749539 2490 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a1c08f_ef0f_467a_8681_c6d7bd1c56c3.slice/cri-containerd-737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112.scope WatchSource:0}: task 737eb6513b8e95d23937601eeac18bc9e8f61469e766c8198818bac143229112 not found: not found
Dec 13 01:53:49.433621 kubelet[2490]: I1213 01:53:49.433528 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m8jv4" podStartSLOduration=8.433486902 podStartE2EDuration="8.433486902s" podCreationTimestamp="2024-12-13 01:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:53:46.076440282 +0000 UTC m=+231.789816174" watchObservedRunningTime="2024-12-13 01:53:49.433486902 +0000 UTC m=+235.146862794"
Dec 13 01:53:49.594647 systemd[1]: run-containerd-runc-k8s.io-c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13-runc.wRxKrC.mount: Deactivated successfully.
Dec 13 01:53:49.843815 systemd-networkd[1572]: lxc_health: Gained IPv6LL
Dec 13 01:53:51.812270 systemd[1]: run-containerd-runc-k8s.io-c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13-runc.UpYhke.mount: Deactivated successfully.
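The podStartSLOduration=8.433486902 reported above is simply the gap between the pod's creation timestamp (01:53:41) and the observed running time (01:53:49.433486902), both taken from the same record. A quick check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 01:53:41 +0000 UTC")
	running, _ := time.Parse(layout, "2024-12-13 01:53:49.433486902 +0000 UTC")
	fmt.Println(running.Sub(created)) // 8.433486902s
}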
Dec 13 01:53:51.866902 kubelet[2490]: W1213 01:53:51.866849 2490 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a1c08f_ef0f_467a_8681_c6d7bd1c56c3.slice/cri-containerd-ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7.scope WatchSource:0}: task ef384fa294b66dcc8db87fdbfe50494ff1b37bbfc5d4ee7f610b5c97996532e7 not found: not found
Dec 13 01:53:54.013635 systemd[1]: run-containerd-runc-k8s.io-c8a9b93d681f5ec063edc799f25a77d1024fbbdeb257390591cf42fc6af7ee13-runc.AD4CvP.mount: Deactivated successfully.
Dec 13 01:53:54.170877 sshd[4311]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:54.174496 systemd[1]: sshd@24-10.200.8.23:22-10.200.16.10:35182.service: Deactivated successfully.
Dec 13 01:53:54.175407 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:53:54.176463 systemd-logind[1403]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:53:54.177369 systemd-logind[1403]: Removed session 27.
Dec 13 01:53:54.414929 env[1413]: time="2024-12-13T01:53:54.414855768Z" level=info msg="StopPodSandbox for \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\""
Dec 13 01:53:54.415477 env[1413]: time="2024-12-13T01:53:54.415018771Z" level=info msg="TearDown network for sandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" successfully"
Dec 13 01:53:54.415477 env[1413]: time="2024-12-13T01:53:54.415076772Z" level=info msg="StopPodSandbox for \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" returns successfully"
Dec 13 01:53:54.415641 env[1413]: time="2024-12-13T01:53:54.415584280Z" level=info msg="RemovePodSandbox for \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\""
Dec 13 01:53:54.415716 env[1413]: time="2024-12-13T01:53:54.415652282Z" level=info msg="Forcibly stopping sandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\""
Dec 13 01:53:54.415808 env[1413]: time="2024-12-13T01:53:54.415773283Z" level=info msg="TearDown network for sandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" successfully"
Dec 13 01:53:54.427884 env[1413]: time="2024-12-13T01:53:54.427836781Z" level=info msg="RemovePodSandbox \"c59c34c9d7a5d50b09507eb799a819be895b3d5382c720cf7fc09a914431ae40\" returns successfully"
Dec 13 01:53:54.428345 env[1413]: time="2024-12-13T01:53:54.428311788Z" level=info msg="StopPodSandbox for \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\""
Dec 13 01:53:54.428461 env[1413]: time="2024-12-13T01:53:54.428399990Z" level=info msg="TearDown network for sandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" successfully"
Dec 13 01:53:54.428461 env[1413]: time="2024-12-13T01:53:54.428442990Z" level=info msg="StopPodSandbox for \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" returns successfully"
Dec 13 01:53:54.428781 env[1413]: time="2024-12-13T01:53:54.428754296Z" level=info msg="RemovePodSandbox for \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\""
Dec 13 01:53:54.428884 env[1413]: time="2024-12-13T01:53:54.428782196Z" level=info msg="Forcibly stopping sandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\""
Dec 13 01:53:54.428884 env[1413]: time="2024-12-13T01:53:54.428859997Z" level=info msg="TearDown network for sandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" successfully"
Dec 13 01:53:54.435885 env[1413]: time="2024-12-13T01:53:54.435848311Z" level=info msg="RemovePodSandbox \"82296a29701596f0f68514c7f3875ae37c097a51ef1b17dfd5c98bcc0ca354b2\" returns successfully"
Dec 13 01:53:54.436254 env[1413]: time="2024-12-13T01:53:54.436219617Z" level=info msg="StopPodSandbox for \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\""
Dec 13 01:53:54.436376 env[1413]: time="2024-12-13T01:53:54.436308019Z" level=info msg="TearDown network for sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" successfully"
Dec 13 01:53:54.436448 env[1413]: time="2024-12-13T01:53:54.436429221Z" level=info msg="StopPodSandbox for \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" returns successfully"
Dec 13 01:53:54.436748 env[1413]: time="2024-12-13T01:53:54.436722626Z" level=info msg="RemovePodSandbox for \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\""
Dec 13 01:53:54.436847 env[1413]: time="2024-12-13T01:53:54.436750726Z" level=info msg="Forcibly stopping sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\""
Dec 13 01:53:54.436847 env[1413]: time="2024-12-13T01:53:54.436826827Z" level=info msg="TearDown network for sandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" successfully"
Dec 13 01:53:54.443234 env[1413]: time="2024-12-13T01:53:54.443206732Z" level=info msg="RemovePodSandbox \"1dc010509df7d36c45e5ca6a055585afd7bdfd0cdeb00b90836a4f96975141f6\" returns successfully"
Dec 13 01:53:54.982353 kubelet[2490]: W1213 01:53:54.982300 2490 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41a1c08f_ef0f_467a_8681_c6d7bd1c56c3.slice/cri-containerd-f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f.scope WatchSource:0}: task f0a0722a8b10d57ea1d4da4ba5f69ba64f488abae2d06b0488df36c80c49db1f not found: not found