Dec 13 02:03:03.026380 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:03:03.026410 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:03:03.026424 kernel: BIOS-provided physical RAM map:
Dec 13 02:03:03.026434 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 02:03:03.026444 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 02:03:03.026454 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 02:03:03.026481 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 02:03:03.026491 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 02:03:03.026501 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 02:03:03.026511 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 02:03:03.026520 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 02:03:03.026530 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 02:03:03.026540 kernel: NX (Execute Disable) protection: active
Dec 13 02:03:03.026550 kernel: efi: EFI v2.70 by Microsoft
Dec 13 02:03:03.026565 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Dec 13 02:03:03.026576 kernel: random: crng init done
Dec 13 02:03:03.026587 kernel: SMBIOS 3.1.0 present.
Dec 13 02:03:03.026597 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 02:03:03.026608 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 02:03:03.026619 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 02:03:03.026629 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 02:03:03.026639 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 02:03:03.026652 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 02:03:03.026663 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 02:03:03.026673 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 02:03:03.026684 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 02:03:03.026695 kernel: tsc: Detected 2593.907 MHz processor
Dec 13 02:03:03.026707 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:03:03.026717 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:03:03.026728 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 02:03:03.026739 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:03:03.026750 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 02:03:03.026762 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 02:03:03.026773 kernel: Using GB pages for direct mapping
Dec 13 02:03:03.026784 kernel: Secure boot disabled
Dec 13 02:03:03.026795 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:03:03.026805 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 02:03:03.026817 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026828 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026839 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 02:03:03.026856 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 02:03:03.026867 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026879 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026890 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026902 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026914 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026928 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026940 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:03:03.026951 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 02:03:03.026963 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 02:03:03.026975 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 02:03:03.026986 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 02:03:03.026998 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 02:03:03.027009 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 02:03:03.027023 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 02:03:03.027035 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 02:03:03.027046 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 02:03:03.027059 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 02:03:03.027070 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:03:03.027082 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:03:03.027094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 02:03:03.027105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 02:03:03.027117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 02:03:03.027130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 02:03:03.027142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 02:03:03.027154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 02:03:03.027165 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 02:03:03.027177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 02:03:03.027189 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 02:03:03.027201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 02:03:03.027212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 02:03:03.027223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 02:03:03.027237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 02:03:03.027249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 02:03:03.027262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 02:03:03.027273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 02:03:03.027299 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 02:03:03.027314 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 02:03:03.027327 kernel: Zone ranges:
Dec 13 02:03:03.027339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:03:03.027351 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 02:03:03.027366 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 02:03:03.027379 kernel: Movable zone start for each node
Dec 13 02:03:03.027391 kernel: Early memory node ranges
Dec 13 02:03:03.027404 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 02:03:03.027416 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 02:03:03.027429 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 02:03:03.027440 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 02:03:03.027453 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 02:03:03.048142 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:03:03.048165 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 02:03:03.048178 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 02:03:03.048190 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 02:03:03.048200 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 02:03:03.048211 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:03:03.048223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:03:03.048235 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:03:03.048247 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 02:03:03.048259 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:03:03.048273 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 02:03:03.048285 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 02:03:03.048297 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:03:03.048310 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:03:03.048321 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:03:03.048333 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:03:03.048344 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:03:03.048353 kernel: Hyper-V: PV spinlocks enabled
Dec 13 02:03:03.048365 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:03:03.048379 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 02:03:03.048392 kernel: Policy zone: Normal
Dec 13 02:03:03.048406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:03:03.048420 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:03:03.048433 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 02:03:03.048446 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 02:03:03.048470 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:03:03.048484 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Dec 13 02:03:03.048500 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:03:03.048513 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:03:03.048535 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:03:03.048551 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:03:03.048565 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:03:03.048578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:03:03.048591 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:03:03.048604 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:03:03.048617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:03:03.048631 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:03:03.048645 kernel: Using NULL legacy PIC
Dec 13 02:03:03.048660 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 02:03:03.048674 kernel: Console: colour dummy device 80x25
Dec 13 02:03:03.048687 kernel: printk: console [tty1] enabled
Dec 13 02:03:03.048700 kernel: printk: console [ttyS0] enabled
Dec 13 02:03:03.048713 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 02:03:03.048729 kernel: ACPI: Core revision 20210730
Dec 13 02:03:03.048743 kernel: Failed to register legacy timer interrupt
Dec 13 02:03:03.048756 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:03:03.048769 kernel: Hyper-V: Using IPI hypercalls
Dec 13 02:03:03.048782 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Dec 13 02:03:03.048796 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:03:03.048809 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:03:03.048822 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:03:03.048835 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:03:03.048848 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:03:03.048863 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:03:03.048877 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:03:03.048891 kernel: RETBleed: Vulnerable
Dec 13 02:03:03.048904 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:03:03.048917 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:03:03.048929 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:03:03.048941 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:03:03.048954 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:03:03.048967 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:03:03.048980 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:03:03.048996 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:03:03.049009 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:03:03.049022 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:03:03.049036 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:03:03.049048 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 02:03:03.049062 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 02:03:03.049075 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 02:03:03.049087 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 02:03:03.049100 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:03:03.049113 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:03:03.049126 kernel: LSM: Security Framework initializing
Dec 13 02:03:03.049138 kernel: SELinux: Initializing.
Dec 13 02:03:03.049155 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:03:03.049168 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:03:03.049180 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:03:03.049193 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:03:03.049207 kernel: signal: max sigframe size: 3632
Dec 13 02:03:03.049219 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:03:03.049232 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:03:03.049246 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:03:03.049258 kernel: x86: Booting SMP configuration:
Dec 13 02:03:03.049272 kernel: .... node #0, CPUs: #1
Dec 13 02:03:03.049288 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 02:03:03.049302 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:03:03.049316 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:03:03.049329 kernel: smpboot: Max logical packages: 1
Dec 13 02:03:03.049342 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 02:03:03.049355 kernel: devtmpfs: initialized
Dec 13 02:03:03.049368 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:03:03.049382 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 02:03:03.049397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:03:03.049411 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:03:03.049424 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:03:03.049437 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:03:03.049450 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:03:03.055531 kernel: audit: type=2000 audit(1734055382.025:1): state=initialized audit_enabled=0 res=1
Dec 13 02:03:03.055552 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:03:03.055567 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:03:03.055581 kernel: cpuidle: using governor menu
Dec 13 02:03:03.055599 kernel: ACPI: bus type PCI registered
Dec 13 02:03:03.055613 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:03:03.055627 kernel: dca service started, version 1.12.1
Dec 13 02:03:03.055642 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:03:03.055655 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:03:03.055669 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:03:03.055683 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:03:03.055697 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:03:03.055711 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:03:03.055727 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:03:03.055741 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:03:03.055755 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:03:03.055768 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:03:03.055782 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:03:03.055796 kernel: ACPI: Interpreter enabled
Dec 13 02:03:03.055809 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:03:03.055823 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:03:03.055837 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:03:03.055852 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 02:03:03.055866 kernel: iommu: Default domain type: Translated
Dec 13 02:03:03.055880 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:03:03.055893 kernel: vgaarb: loaded
Dec 13 02:03:03.055907 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:03:03.055919 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 02:03:03.055932 kernel: PTP clock support registered
Dec 13 02:03:03.055945 kernel: Registered efivars operations
Dec 13 02:03:03.055959 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:03:03.055972 kernel: PCI: System does not support PCI
Dec 13 02:03:03.055988 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 02:03:03.056001 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:03:03.056014 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:03:03.056029 kernel: pnp: PnP ACPI init
Dec 13 02:03:03.056042 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 02:03:03.056056 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:03:03.056070 kernel: NET: Registered PF_INET protocol family
Dec 13 02:03:03.056083 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:03:03.056100 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 02:03:03.056114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:03:03.056127 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:03:03.056141 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 02:03:03.056155 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 02:03:03.056168 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:03:03.056182 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:03:03.056196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:03:03.056209 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:03:03.056224 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:03:03.056238 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 02:03:03.056251 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 02:03:03.056265 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:03:03.056279 kernel: Initialise system trusted keyrings
Dec 13 02:03:03.056290 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 02:03:03.056304 kernel: Key type asymmetric registered
Dec 13 02:03:03.056317 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:03:03.056330 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:03:03.056346 kernel: io scheduler mq-deadline registered
Dec 13 02:03:03.056360 kernel: io scheduler kyber registered
Dec 13 02:03:03.056373 kernel: io scheduler bfq registered
Dec 13 02:03:03.056387 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:03:03.056401 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:03:03.056415 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:03:03.056429 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 02:03:03.056443 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 02:03:03.056632 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 02:03:03.056763 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T02:03:02 UTC (1734055382)
Dec 13 02:03:03.056870 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 02:03:03.056904 kernel: fail to initialize ptp_kvm
Dec 13 02:03:03.056918 kernel: intel_pstate: CPU model not supported
Dec 13 02:03:03.056932 kernel: efifb: probing for efifb
Dec 13 02:03:03.056945 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 02:03:03.056958 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 02:03:03.056970 kernel: efifb: scrolling: redraw
Dec 13 02:03:03.056988 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 02:03:03.057003 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 02:03:03.057017 kernel: fb0: EFI VGA frame buffer device
Dec 13 02:03:03.057032 kernel: pstore: Registered efi as persistent store backend
Dec 13 02:03:03.057045 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:03:03.057059 kernel: Segment Routing with IPv6
Dec 13 02:03:03.057073 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:03:03.057087 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:03:03.057102 kernel: Key type dns_resolver registered
Dec 13 02:03:03.057118 kernel: IPI shorthand broadcast: enabled
Dec 13 02:03:03.057133 kernel: sched_clock: Marking stable (789647300, 20248600)->(993736400, -183840500)
Dec 13 02:03:03.057147 kernel: registered taskstats version 1
Dec 13 02:03:03.057161 kernel: Loading compiled-in X.509 certificates
Dec 13 02:03:03.057175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:03:03.057188 kernel: Key type .fscrypt registered
Dec 13 02:03:03.057201 kernel: Key type fscrypt-provisioning registered
Dec 13 02:03:03.057214 kernel: pstore: Using crash dump compression: deflate
Dec 13 02:03:03.057230 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:03:03.057241 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:03:03.057251 kernel: ima: No architecture policies found
Dec 13 02:03:03.057264 kernel: clk: Disabling unused clocks
Dec 13 02:03:03.057276 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:03:03.057288 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:03:03.057300 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:03:03.057313 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:03:03.057327 kernel: Run /init as init process
Dec 13 02:03:03.057339 kernel: with arguments:
Dec 13 02:03:03.057356 kernel: /init
Dec 13 02:03:03.057369 kernel: with environment:
Dec 13 02:03:03.057381 kernel: HOME=/
Dec 13 02:03:03.057393 kernel: TERM=linux
Dec 13 02:03:03.057406 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:03:03.057422 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:03:03.057439 systemd[1]: Detected virtualization microsoft.
Dec 13 02:03:03.057457 systemd[1]: Detected architecture x86-64.
Dec 13 02:03:03.060499 systemd[1]: Running in initrd.
Dec 13 02:03:03.060511 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:03:03.060522 systemd[1]: Hostname set to <localhost>.
Dec 13 02:03:03.060531 systemd[1]: Initializing machine ID from random generator.
Dec 13 02:03:03.060539 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:03:03.060550 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:03:03.060558 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:03:03.060566 systemd[1]: Reached target paths.target.
Dec 13 02:03:03.060579 systemd[1]: Reached target slices.target.
Dec 13 02:03:03.060587 systemd[1]: Reached target swap.target.
Dec 13 02:03:03.060597 systemd[1]: Reached target timers.target.
Dec 13 02:03:03.060605 systemd[1]: Listening on iscsid.socket.
Dec 13 02:03:03.060617 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:03:03.060626 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:03:03.060636 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:03:03.060650 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:03:03.060660 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:03:03.060671 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:03:03.060682 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:03:03.060692 systemd[1]: Reached target sockets.target.
Dec 13 02:03:03.060703 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:03:03.060714 systemd[1]: Finished network-cleanup.service.
Dec 13 02:03:03.060726 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:03:03.060740 systemd[1]: Starting systemd-journald.service...
Dec 13 02:03:03.060756 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:03:03.060768 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:03:03.060779 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:03:03.060792 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:03:03.060805 kernel: audit: type=1130 audit(1734055383.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.060819 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:03:03.060836 systemd-journald[183]: Journal started
Dec 13 02:03:03.060902 systemd-journald[183]: Runtime Journal (/run/log/journal/b58f36f301644958b29c76ca54cce0bb) is 8.0M, max 159.0M, 151.0M free.
Dec 13 02:03:03.060937 systemd[1]: Started systemd-resolved.service.
Dec 13 02:03:03.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.024010 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 02:03:03.077888 kernel: audit: type=1130 audit(1734055383.060:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.053181 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 02:03:03.053192 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:03:03.053247 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:03:03.056971 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 02:03:03.116826 systemd[1]: Started systemd-journald.service.
Dec 13 02:03:03.116884 kernel: audit: type=1130 audit(1734055383.100:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.127556 kernel: audit: type=1130 audit(1734055383.104:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.116965 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:03:03.132534 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:03:03.160311 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:03:03.160346 kernel: audit: type=1130 audit(1734055383.131:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.163966 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 02:03:03.173566 kernel: Bridge firewalling registered
Dec 13 02:03:03.164078 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:03:03.166812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:03:03.175526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:03:03.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.194476 kernel: audit: type=1130 audit(1734055383.181:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.206748 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:03:03.228204 kernel: SCSI subsystem initialized
Dec 13 02:03:03.228239 kernel: audit: type=1130 audit(1734055383.208:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.226174 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:03:03.238023 dracut-cmdline[202]: dracut-dracut-053
Dec 13 02:03:03.240758 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:03:03.271577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:03:03.271619 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:03:03.276911 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:03:03.280844 systemd-modules-load[184]: Inserted module 'dm_multipath'
Dec 13 02:03:03.283733 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:03:03.289010 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:03:03.304791 kernel: audit: type=1130 audit(1734055383.287:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.305710 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:03:03.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.321479 kernel: audit: type=1130 audit(1734055383.309:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.325480 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:03:03.344481 kernel: iscsi: registered transport (tcp)
Dec 13 02:03:03.370954 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:03:03.371018 kernel: QLogic iSCSI HBA Driver
Dec 13 02:03:03.399847 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:03:03.402995 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:03:03.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.453489 kernel: raid6: avx512x4 gen() 18323 MB/s
Dec 13 02:03:03.472475 kernel: raid6: avx512x4 xor() 7802 MB/s
Dec 13 02:03:03.491473 kernel: raid6: avx512x2 gen() 18177 MB/s
Dec 13 02:03:03.511478 kernel: raid6: avx512x2 xor() 30136 MB/s
Dec 13 02:03:03.531480 kernel: raid6: avx512x1 gen() 18272 MB/s
Dec 13 02:03:03.551475 kernel: raid6: avx512x1 xor() 27372 MB/s
Dec 13 02:03:03.572473 kernel: raid6: avx2x4 gen() 18131 MB/s
Dec 13 02:03:03.592472 kernel: raid6: avx2x4 xor() 7447 MB/s
Dec 13 02:03:03.612471 kernel: raid6: avx2x2 gen() 18249 MB/s
Dec 13 02:03:03.633474 kernel: raid6: avx2x2 xor() 22266 MB/s
Dec 13 02:03:03.653471 kernel: raid6: avx2x1 gen() 13986 MB/s
Dec 13 02:03:03.673471 kernel: raid6: avx2x1 xor() 19508 MB/s
Dec 13 02:03:03.693472 kernel: raid6: sse2x4 gen() 11750 MB/s
Dec 13 02:03:03.712471 kernel: raid6: sse2x4 xor() 7310 MB/s
Dec 13 02:03:03.732471 kernel: raid6: sse2x2 gen() 12972 MB/s
Dec 13 02:03:03.753472 kernel: raid6: sse2x2 xor() 7513 MB/s
Dec 13 02:03:03.773469 kernel: raid6: sse2x1 gen() 11693 MB/s
Dec 13 02:03:03.797313 kernel: raid6: sse2x1 xor() 5930 MB/s
Dec 13 02:03:03.797332 kernel: raid6: using algorithm avx512x4 gen() 18323 MB/s
Dec 13 02:03:03.797343 kernel: raid6: .... xor() 7802 MB/s, rmw enabled
Dec 13 02:03:03.800683 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 02:03:03.819484 kernel: xor: automatically using best checksumming function avx
Dec 13 02:03:03.915487 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:03:03.923431 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:03:03.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.927000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:03:03.927000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:03:03.928289 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:03:03.941954 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Dec 13 02:03:03.946585 systemd[1]: Started systemd-udevd.service.
Dec 13 02:03:03.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:03.954799 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:03:03.971879 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Dec 13 02:03:04.002027 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:03:04.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:04.007721 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:03:04.044210 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:03:04.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:04.089480 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:03:04.119554 kernel: hv_vmbus: Vmbus version:5.2
Dec 13 02:03:04.119607 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:03:04.126692 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:03:04.140476 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 02:03:04.156503 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 02:03:04.163355 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 02:03:04.173351 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 02:03:04.181451 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 02:03:04.181492 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 02:03:04.191451 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 02:03:04.191498 kernel: scsi host1: storvsc_host_t
Dec 13 02:03:04.191534 kernel: scsi host0: storvsc_host_t
Dec 13 02:03:04.200082 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 02:03:04.200254 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 02:03:04.212482 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 02:03:04.244147 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 02:03:04.252617 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 02:03:04.252637 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 02:03:04.270768 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 02:03:04.270953 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 02:03:04.271120 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 02:03:04.271284 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 02:03:04.271449 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 02:03:04.271632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:03:04.271651 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 02:03:04.384250 kernel: hv_netvsc 7c1e5235-ea32-7c1e-5235-ea327c1e5235 eth0: VF slot 1 added
Dec 13 02:03:04.397898 kernel: hv_vmbus: registering driver hv_pci
Dec 13 02:03:04.397946 kernel: hv_pci dd0c9cc2-3d40-4ba0-9332-a421e08b2a06: PCI VMBus probing: Using version 0x10004
Dec 13 02:03:04.472590 kernel: hv_pci dd0c9cc2-3d40-4ba0-9332-a421e08b2a06: PCI host bridge to bus 3d40:00
Dec 13 02:03:04.472779 kernel: pci_bus 3d40:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Dec 13 02:03:04.472950 kernel: pci_bus 3d40:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 02:03:04.473104 kernel: pci 3d40:00:02.0: [15b3:1016] type 00 class 0x020000
Dec 13 02:03:04.473272 kernel: pci 3d40:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 02:03:04.473425 kernel: pci 3d40:00:02.0: enabling Extended Tags
Dec 13 02:03:04.473603 kernel: pci 3d40:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3d40:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 02:03:04.473753 kernel: pci_bus 3d40:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 02:03:04.473895 kernel: pci 3d40:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 02:03:04.565484 kernel: mlx5_core 3d40:00:02.0: firmware version: 14.30.5000
Dec 13 02:03:04.818253 kernel: mlx5_core 3d40:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 02:03:04.818445 kernel: mlx5_core 3d40:00:02.0: Supported tc offload range - chains: 1, prios: 1
Dec 13 02:03:04.818590 kernel: mlx5_core 3d40:00:02.0: mlx5e_tc_post_act_init:40:(pid 519): firmware level support is missing
Dec 13 02:03:04.818689 kernel: hv_netvsc 7c1e5235-ea32-7c1e-5235-ea327c1e5235 eth0: VF registering: eth1
Dec 13 02:03:04.818781 kernel: mlx5_core 3d40:00:02.0 eth1: joined to eth0
Dec 13 02:03:04.808978 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:03:04.827498 kernel: mlx5_core 3d40:00:02.0 enP15680s1: renamed from eth1
Dec 13 02:03:04.905490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449)
Dec 13 02:03:04.920564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:03:05.098898 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:03:05.101820 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:03:05.102736 systemd[1]: Starting disk-uuid.service...
Dec 13 02:03:05.121819 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:03:06.120487 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:03:06.121212 disk-uuid[557]: The operation has completed successfully.
Dec 13 02:03:06.188090 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:03:06.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.188192 systemd[1]: Finished disk-uuid.service.
Dec 13 02:03:06.203065 systemd[1]: Starting verity-setup.service...
Dec 13 02:03:06.237483 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:03:06.466837 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:03:06.471565 systemd[1]: Finished verity-setup.service.
Dec 13 02:03:06.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.476108 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:03:06.553185 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:03:06.553093 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:03:06.555063 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:03:06.555829 systemd[1]: Starting ignition-setup.service...
Dec 13 02:03:06.563847 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:03:06.579732 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:03:06.579772 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:03:06.579787 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:03:06.628455 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:03:06.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.630000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:03:06.631913 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:03:06.657743 systemd-networkd[775]: lo: Link UP
Dec 13 02:03:06.657752 systemd-networkd[775]: lo: Gained carrier
Dec 13 02:03:06.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.658659 systemd-networkd[775]: Enumeration completed
Dec 13 02:03:06.658729 systemd[1]: Started systemd-networkd.service.
Dec 13 02:03:06.661676 systemd[1]: Reached target network.target.
Dec 13 02:03:06.663813 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:03:06.667381 systemd[1]: Starting iscsiuio.service...
Dec 13 02:03:06.682484 systemd[1]: Started iscsiuio.service.
Dec 13 02:03:06.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.685821 systemd[1]: Starting iscsid.service...
Dec 13 02:03:06.689166 iscsid[783]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:03:06.689166 iscsid[783]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:03:06.689166 iscsid[783]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:03:06.689166 iscsid[783]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:03:06.689166 iscsid[783]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:03:06.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.720764 iscsid[783]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:03:06.694168 systemd[1]: Started iscsid.service.
Dec 13 02:03:06.730381 kernel: mlx5_core 3d40:00:02.0 enP15680s1: Link up
Dec 13 02:03:06.709153 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:03:06.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.710128 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:03:06.722223 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:03:06.730504 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:03:06.734852 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:03:06.738966 systemd[1]: Reached target remote-fs.target.
Dec 13 02:03:06.743634 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:03:06.755119 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:03:06.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:06.773588 kernel: hv_netvsc 7c1e5235-ea32-7c1e-5235-ea327c1e5235 eth0: Data path switched to VF: enP15680s1
Dec 13 02:03:06.773801 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:03:06.774071 systemd-networkd[775]: enP15680s1: Link UP
Dec 13 02:03:06.774326 systemd-networkd[775]: eth0: Link UP
Dec 13 02:03:06.774802 systemd-networkd[775]: eth0: Gained carrier
Dec 13 02:03:06.781924 systemd-networkd[775]: enP15680s1: Gained carrier
Dec 13 02:03:06.803522 systemd-networkd[775]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 02:03:07.051214 systemd[1]: Finished ignition-setup.service.
Dec 13 02:03:07.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:07.056264 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:03:08.563598 systemd-networkd[775]: eth0: Gained IPv6LL
Dec 13 02:03:11.300058 ignition[802]: Ignition 2.14.0
Dec 13 02:03:11.300077 ignition[802]: Stage: fetch-offline
Dec 13 02:03:11.300167 ignition[802]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:11.300219 ignition[802]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:11.346514 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:11.406338 ignition[802]: parsed url from cmdline: ""
Dec 13 02:03:11.406473 ignition[802]: no config URL provided
Dec 13 02:03:11.406497 ignition[802]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:03:11.406518 ignition[802]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:03:11.406528 ignition[802]: failed to fetch config: resource requires networking
Dec 13 02:03:11.406910 ignition[802]: Ignition finished successfully
Dec 13 02:03:11.420856 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:03:11.430648 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 02:03:11.430688 kernel: audit: type=1130 audit(1734055391.425:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.426572 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:03:11.435711 ignition[808]: Ignition 2.14.0
Dec 13 02:03:11.435718 ignition[808]: Stage: fetch
Dec 13 02:03:11.435827 ignition[808]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:11.435851 ignition[808]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:11.439211 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:11.441801 ignition[808]: parsed url from cmdline: ""
Dec 13 02:03:11.441809 ignition[808]: no config URL provided
Dec 13 02:03:11.441816 ignition[808]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:03:11.441828 ignition[808]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:03:11.441865 ignition[808]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 02:03:11.528796 ignition[808]: GET result: OK
Dec 13 02:03:11.528831 ignition[808]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Dec 13 02:03:11.806806 ignition[808]: opening config device: "/dev/sr0"
Dec 13 02:03:11.807225 ignition[808]: getting drive status for "/dev/sr0"
Dec 13 02:03:11.807298 ignition[808]: drive status: OK
Dec 13 02:03:11.807335 ignition[808]: mounting config device
Dec 13 02:03:11.807379 ignition[808]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure2253921752"
Dec 13 02:03:11.830481 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/12/14 00:00 (1000)
Dec 13 02:03:11.830685 ignition[808]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure2253921752"
Dec 13 02:03:11.831628 ignition[808]: checking for config drive
Dec 13 02:03:11.832626 systemd[1]: tmp-ignition\x2dazure2253921752.mount: Deactivated successfully.
Dec 13 02:03:11.831916 ignition[808]: reading config
Dec 13 02:03:11.832269 ignition[808]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure2253921752"
Dec 13 02:03:11.833827 ignition[808]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure2253921752"
Dec 13 02:03:11.840146 unknown[808]: fetched base config from "system"
Dec 13 02:03:11.833842 ignition[808]: config has been read from custom data
Dec 13 02:03:11.840154 unknown[808]: fetched base config from "system"
Dec 13 02:03:11.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.833956 ignition[808]: parsing config with SHA512: 322a3e93017fd735f67534e41e3f9585cd966e2b1106a5ea82adc1762899c6fdda1a0e8ced00bf525da1435b43f5ae833dc65eae13720ace57658379c9189eaf
Dec 13 02:03:11.840160 unknown[808]: fetched user config from "azure"
Dec 13 02:03:11.869162 kernel: audit: type=1130 audit(1734055391.849:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.840824 ignition[808]: fetch: fetch complete
Dec 13 02:03:11.847835 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:03:11.840831 ignition[808]: fetch: fetch passed
Dec 13 02:03:11.850931 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:03:11.840868 ignition[808]: Ignition finished successfully
Dec 13 02:03:11.889509 kernel: audit: type=1130 audit(1734055391.884:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.873033 ignition[816]: Ignition 2.14.0
Dec 13 02:03:11.880683 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:03:11.873039 ignition[816]: Stage: kargs
Dec 13 02:03:11.885736 systemd[1]: Starting ignition-disks.service...
Dec 13 02:03:11.873200 ignition[816]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:11.873234 ignition[816]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:11.877518 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:11.879914 ignition[816]: kargs: kargs passed
Dec 13 02:03:11.879953 ignition[816]: Ignition finished successfully
Dec 13 02:03:11.916280 ignition[822]: Ignition 2.14.0
Dec 13 02:03:11.916290 ignition[822]: Stage: disks
Dec 13 02:03:11.916413 ignition[822]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:11.916445 ignition[822]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:11.925311 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:11.930006 ignition[822]: disks: disks passed
Dec 13 02:03:11.930062 ignition[822]: Ignition finished successfully
Dec 13 02:03:11.950291 kernel: audit: type=1130 audit(1734055391.934:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:11.931097 systemd[1]: Finished ignition-disks.service.
Dec 13 02:03:11.934567 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:03:11.950283 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:03:11.952365 systemd[1]: Reached target local-fs.target.
Dec 13 02:03:11.954256 systemd[1]: Reached target sysinit.target.
Dec 13 02:03:11.958044 systemd[1]: Reached target basic.target.
Dec 13 02:03:11.960674 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:03:12.056014 systemd-fsck[830]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks
Dec 13 02:03:12.060132 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:03:12.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:12.065021 systemd[1]: Mounting sysroot.mount...
Dec 13 02:03:12.080424 kernel: audit: type=1130 audit(1734055392.062:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:12.089477 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:03:12.090097 systemd[1]: Mounted sysroot.mount.
Dec 13 02:03:12.093711 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:03:12.127726 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:03:12.133887 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 02:03:12.139387 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:03:12.139437 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:03:12.148623 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:03:12.200190 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:03:12.206270 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:03:12.222492 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (840)
Dec 13 02:03:12.226951 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:03:12.240428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:03:12.240476 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:03:12.240490 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:03:12.237740 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:03:12.244981 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:03:12.281900 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:03:12.286535 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:03:13.023969 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:03:13.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:13.030180 systemd[1]: Starting ignition-mount.service...
Dec 13 02:03:13.047088 kernel: audit: type=1130 audit(1734055393.028:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:13.045253 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:03:13.051908 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:03:13.052108 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:03:13.071891 ignition[907]: INFO : Ignition 2.14.0
Dec 13 02:03:13.074101 ignition[907]: INFO : Stage: mount
Dec 13 02:03:13.076110 ignition[907]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:13.079359 ignition[907]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:13.088724 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:03:13.104467 kernel: audit: type=1130 audit(1734055393.090:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:13.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:13.104529 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:13.104529 ignition[907]: INFO : mount: mount passed
Dec 13 02:03:13.104529 ignition[907]: INFO : Ignition finished successfully
Dec 13 02:03:13.123551 kernel: audit: type=1130 audit(1734055393.109:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:13.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:13.100716 systemd[1]: Finished ignition-mount.service.
Dec 13 02:03:14.525370 coreos-metadata[839]: Dec 13 02:03:14.525 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 02:03:14.542834 coreos-metadata[839]: Dec 13 02:03:14.542 INFO Fetch successful
Dec 13 02:03:14.577815 coreos-metadata[839]: Dec 13 02:03:14.577 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 02:03:14.589489 coreos-metadata[839]: Dec 13 02:03:14.589 INFO Fetch successful
Dec 13 02:03:14.606195 coreos-metadata[839]: Dec 13 02:03:14.606 INFO wrote hostname ci-3510.3.6-a-eca73107d2 to /sysroot/etc/hostname
Dec 13 02:03:14.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:14.608370 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 02:03:14.629751 kernel: audit: type=1130 audit(1734055394.612:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:14.614732 systemd[1]: Starting ignition-files.service...
Dec 13 02:03:14.633023 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:03:14.660594 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (919)
Dec 13 02:03:14.660636 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:03:14.669223 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:03:14.669248 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:03:14.678109 systemd[1]: Mounted sysroot-usr-share-oem.mount.
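The flatcar-metadata-hostname step is essentially one more IMDS call followed by a file write into the target root. A rough Python equivalent, under the same assumptions as the earlier sketch; the endpoint and destination path are taken from the log:

    import urllib.request

    IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        name = resp.read().decode().strip()

    # The log shows the result written beneath /sysroot before the pivot.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(name + "\n")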
Dec 13 02:03:14.692090 ignition[938]: INFO : Ignition 2.14.0
Dec 13 02:03:14.692090 ignition[938]: INFO : Stage: files
Dec 13 02:03:14.695860 ignition[938]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:14.695860 ignition[938]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:14.708662 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:14.734389 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:03:14.737826 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:03:14.737826 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:03:14.828818 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:03:14.832645 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:03:14.836088 unknown[938]: wrote ssh authorized keys file for user: core
Dec 13 02:03:14.838814 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:03:14.857439 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:03:14.862217 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:03:15.192543 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 02:03:15.352124 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:03:15.357788 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:03:15.357788 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 02:03:15.866613 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 02:03:16.007510 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 02:03:16.013169 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:03:16.085480 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (938)
Dec 13 02:03:16.029803 systemd[1]: mnt-oem1237041965.mount: Deactivated successfully.
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1237041965"
Dec 13 02:03:16.088224 ignition[938]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1237041965": device or resource busy
Dec 13 02:03:16.088224 ignition[938]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1237041965", trying btrfs: device or resource busy
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1237041965"
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1237041965"
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1237041965"
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1237041965"
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3920410488"
Dec 13 02:03:16.088224 ignition[938]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3920410488": device or resource busy
Dec 13 02:03:16.088224 ignition[938]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3920410488", trying btrfs: device or resource busy
Dec 13 02:03:16.088224 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3920410488"
Dec 13 02:03:16.050707 systemd[1]: mnt-oem3920410488.mount: Deactivated successfully.
Dec 13 02:03:16.159842 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3920410488"
Dec 13 02:03:16.159842 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3920410488"
Dec 13 02:03:16.159842 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3920410488"
Dec 13 02:03:16.159842 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 02:03:16.159842 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:03:16.159842 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 02:03:16.506566 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Dec 13 02:03:16.856408 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:03:16.856408 ignition[938]: INFO : files: op(14): [started] processing unit "waagent.service"
Dec 13 02:03:16.856408 ignition[938]: INFO : files: op(14): [finished] processing unit "waagent.service"
Dec 13 02:03:16.856408 ignition[938]: INFO : files: op(15): [started] processing unit "nvidia.service"
Dec 13 02:03:16.856408 ignition[938]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Dec 13 02:03:16.856408 ignition[938]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:03:16.867247 ignition[938]: INFO : files: files passed
Dec 13 02:03:16.867247 ignition[938]: INFO : Ignition finished successfully
Dec 13 02:03:16.864749 systemd[1]: Finished ignition-files.service.
Dec 13 02:03:16.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.927469 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:03:16.938253 kernel: audit: type=1130 audit(1734055396.921:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.938230 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:03:16.943491 systemd[1]: Starting ignition-quench.service...
Dec 13 02:03:16.946355 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:03:16.946446 systemd[1]: Finished ignition-quench.service.
Dec 13 02:03:16.974944 kernel: audit: type=1130 audit(1734055396.952:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.974971 kernel: audit: type=1131 audit(1734055396.952:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.975078 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:03:16.953369 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:03:16.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:16.983699 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:03:17.000014 kernel: audit: type=1130 audit(1734055396.983:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.000804 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:03:17.014458 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:03:17.014595 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:03:17.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.021016 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:03:17.046578 kernel: audit: type=1130 audit(1734055397.020:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.046609 kernel: audit: type=1131 audit(1734055397.020:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.042767 systemd[1]: Reached target initrd.target.
Dec 13 02:03:17.046505 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:03:17.047299 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:03:17.061813 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:03:17.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.066820 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:03:17.078478 kernel: audit: type=1130 audit(1734055397.065:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.087246 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:03:17.091689 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:03:17.096128 systemd[1]: Stopped target timers.target.
Dec 13 02:03:17.197349 kernel: audit: type=1131 audit(1734055397.095:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.197380 kernel: audit: type=1131 audit(1734055397.101:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.197391 kernel: audit: type=1131 audit(1734055397.101:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.096368 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:03:17.096488 systemd[1]: Stopped dracut-pre-pivot.service.
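A note on the CRITICAL/ERROR pair in the files stage above: it is expected behavior, not a failure. Ignition attempts the OEM partition as ext4 first and, when that attempt fails, retries as btrfs and carries on. A loose sketch of that retry loop in Python, shelling out to mount(8); the device label comes from the log, while the temporary mountpoint name is arbitrary:

    import subprocess, tempfile

    DEV = "/dev/disk/by-label/OEM"
    mnt = tempfile.mkdtemp(prefix="oem")  # stand-in for /mnt/oemXXXXXXXXXX

    for fstype in ("ext4", "btrfs"):
        # The first iteration reproduces the "[failed] ... device or
        # resource busy" line above; the second succeeds.
        if subprocess.run(["mount", "-t", fstype, DEV, mnt]).returncode == 0:
            break
    else:
        raise SystemExit("could not mount the OEM partition")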
Dec 13 02:03:17.201850 ignition[976]: INFO : Ignition 2.14.0
Dec 13 02:03:17.201850 ignition[976]: INFO : Stage: umount
Dec 13 02:03:17.201850 ignition[976]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:03:17.201850 ignition[976]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 02:03:17.201850 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 02:03:17.201850 ignition[976]: INFO : umount: umount passed
Dec 13 02:03:17.201850 ignition[976]: INFO : Ignition finished successfully
Dec 13 02:03:17.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.096853 systemd[1]: Stopped target initrd.target.
Dec 13 02:03:17.097151 systemd[1]: Stopped target basic.target.
Dec 13 02:03:17.097709 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:03:17.098143 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:03:17.098548 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:03:17.098968 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:03:17.099371 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:03:17.099796 systemd[1]: Stopped target sysinit.target.
Dec 13 02:03:17.100190 systemd[1]: Stopped target local-fs.target.
Dec 13 02:03:17.100594 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:03:17.100981 systemd[1]: Stopped target swap.target.
Dec 13 02:03:17.101360 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:03:17.101451 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:03:17.101845 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:03:17.102161 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:03:17.102245 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:03:17.102658 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:03:17.102747 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:03:17.103004 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:03:17.103086 systemd[1]: Stopped ignition-files.service.
Dec 13 02:03:17.103406 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 02:03:17.103503 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 02:03:17.116407 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:03:17.119822 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:03:17.119966 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:03:17.120082 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:03:17.121428 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:03:17.122031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:03:17.122170 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:03:17.122533 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:03:17.122627 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:03:17.133146 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:03:17.135593 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:03:17.156663 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:03:17.156752 systemd[1]: Stopped ignition-mount.service.
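For reading the audit records interleaved through this teardown: the stamp in audit(1734055397.020:42) is epoch seconds plus a per-boot record serial, so it can be cross-checked against the syslog timestamps:

    from datetime import datetime, timezone

    # 1734055397.020 is the epoch part of audit(1734055397.020:42) above;
    # the trailing :42 is the record serial, not part of the time.
    print(datetime.fromtimestamp(1734055397.020, tz=timezone.utc))
    # -> 2024-12-13 02:03:17.020000+00:00, matching "Dec 13 02:03:17" here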
Dec 13 02:03:17.157528 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:03:17.157619 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:03:17.157777 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:03:17.157861 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:03:17.158211 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:03:17.158292 systemd[1]: Stopped ignition-fetch.service.
Dec 13 02:03:17.160356 systemd[1]: Stopped target network.target.
Dec 13 02:03:17.160735 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:03:17.160778 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:03:17.161158 systemd[1]: Stopped target paths.target.
Dec 13 02:03:17.162038 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:03:17.180753 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:03:17.180826 systemd[1]: Stopped target slices.target.
Dec 13 02:03:17.181240 systemd[1]: Stopped target sockets.target.
Dec 13 02:03:17.181677 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:03:17.181703 systemd[1]: Closed iscsid.socket.
Dec 13 02:03:17.182059 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:03:17.182091 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:03:17.182494 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:03:17.182529 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:03:17.182999 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:03:17.183569 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:03:17.183930 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:03:17.184015 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:03:17.201942 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:03:17.210636 systemd-networkd[775]: eth0: DHCPv6 lease lost
Dec 13 02:03:17.213641 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:03:17.225797 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:03:17.231564 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:03:17.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.342000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:03:17.339722 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:03:17.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.349000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:03:17.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.340233 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:03:17.340275 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:03:17.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.343537 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:03:17.345258 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:03:17.345320 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:03:17.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.347485 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:03:17.347538 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:03:17.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.349855 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:03:17.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.349904 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:03:17.352058 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:03:17.355720 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:03:17.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.364523 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:03:17.415379 kernel: hv_netvsc 7c1e5235-ea32-7c1e-5235-ea327c1e5235 eth0: Data path switched from VF: enP15680s1
Dec 13 02:03:17.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.364694 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:03:17.369124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:03:17.369163 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:03:17.372987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:03:17.373027 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:03:17.377516 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:03:17.377571 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:03:17.381708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:03:17.381753 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:03:17.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:17.388033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:03:17.388081 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:03:17.390797 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:03:17.402159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:03:17.402211 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:03:17.404733 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:03:17.404818 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:03:17.434104 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:03:17.434195 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:03:18.047624 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:03:18.047762 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:03:18.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:18.054635 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:03:18.059070 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:03:18.059142 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:03:18.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:18.064041 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:03:18.077360 systemd[1]: Switching root.
Dec 13 02:03:18.104330 iscsid[783]: iscsid shutting down.
Dec 13 02:03:18.106380 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Dec 13 02:03:18.106441 systemd-journald[183]: Journal stopped
Dec 13 02:03:40.674331 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:03:40.674372 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:03:40.674392 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:03:40.674407 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:03:40.674422 kernel: SELinux: policy capability open_perms=1
Dec 13 02:03:40.674437 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:03:40.674455 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:03:40.674482 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:03:40.674501 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:03:40.674514 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:03:40.674527 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:03:40.674543 systemd[1]: Successfully loaded SELinux policy in 338.878ms.
Dec 13 02:03:40.674559 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.698ms.
Dec 13 02:03:40.674575 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:03:40.674598 systemd[1]: Detected virtualization microsoft.
Dec 13 02:03:40.674616 systemd[1]: Detected architecture x86-64.
Dec 13 02:03:40.674633 systemd[1]: Detected first boot.
Dec 13 02:03:40.674652 systemd[1]: Hostname set to <ci-3510.3.6-a-eca73107d2>.
Dec 13 02:03:40.674669 systemd[1]: Initializing machine ID from random generator.
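Timing note: the initrd journal stops at 02:03:18.106 and the first messages from the new journal are stamped 02:03:40.674, so roughly 22.6 seconds of wall time elapse across the switch-root and first-boot setup while logging is buffered; of that, the SELinux policy load itself reports 338.878 ms and the relabel 25.698 ms.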
Dec 13 02:03:40.674689 kernel: kauditd_printk_skb: 33 callbacks suppressed
Dec 13 02:03:40.674712 kernel: audit: type=1400 audit(1734055402.956:81): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:03:40.674731 kernel: audit: type=1400 audit(1734055402.975:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:03:40.674750 kernel: audit: type=1400 audit(1734055402.975:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:03:40.674766 kernel: audit: type=1334 audit(1734055402.999:84): prog-id=10 op=LOAD
Dec 13 02:03:40.674783 kernel: audit: type=1334 audit(1734055402.999:85): prog-id=10 op=UNLOAD
Dec 13 02:03:40.674801 kernel: audit: type=1334 audit(1734055403.004:86): prog-id=11 op=LOAD
Dec 13 02:03:40.674819 kernel: audit: type=1334 audit(1734055403.004:87): prog-id=11 op=UNLOAD
Dec 13 02:03:40.674837 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:03:40.674855 kernel: audit: type=1400 audit(1734055405.305:88): avc: denied { associate } for pid=1009 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:03:40.674873 kernel: audit: type=1300 audit(1734055405.305:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=992 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:03:40.674891 kernel: audit: type=1327 audit(1734055405.305:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:03:40.674909 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:03:40.674930 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:03:40.674949 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:03:40.674970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:03:40.674988 kernel: kauditd_printk_skb: 6 callbacks suppressed
Dec 13 02:03:40.675005 kernel: audit: type=1334 audit(1734055419.989:90): prog-id=12 op=LOAD
Dec 13 02:03:40.675021 kernel: audit: type=1334 audit(1734055419.989:91): prog-id=3 op=UNLOAD
Dec 13 02:03:40.675037 kernel: audit: type=1334 audit(1734055419.995:92): prog-id=13 op=LOAD
Dec 13 02:03:40.675057 kernel: audit: type=1334 audit(1734055419.999:93): prog-id=14 op=LOAD
Dec 13 02:03:40.675077 kernel: audit: type=1334 audit(1734055419.999:94): prog-id=4 op=UNLOAD
Dec 13 02:03:40.675098 kernel: audit: type=1334 audit(1734055419.999:95): prog-id=5 op=UNLOAD
Dec 13 02:03:40.675115 kernel: audit: type=1334 audit(1734055420.004:96): prog-id=15 op=LOAD
Dec 13 02:03:40.675133 kernel: audit: type=1334 audit(1734055420.004:97): prog-id=12 op=UNLOAD
Dec 13 02:03:40.675150 kernel: audit: type=1334 audit(1734055420.023:98): prog-id=16 op=LOAD
Dec 13 02:03:40.675168 kernel: audit: type=1334 audit(1734055420.027:99): prog-id=17 op=LOAD
Dec 13 02:03:40.675185 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:03:40.675203 systemd[1]: Stopped iscsid.service.
Dec 13 02:03:40.675227 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:03:40.675246 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:03:40.675264 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:03:40.675283 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:03:40.675302 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:03:40.675321 systemd[1]: Created slice system-getty.slice.
Dec 13 02:03:40.675341 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:03:40.675359 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:03:40.675380 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:03:40.675398 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:03:40.675417 systemd[1]: Created slice user.slice.
Dec 13 02:03:40.675436 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:03:40.675455 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:03:40.675494 systemd[1]: Set up automount boot.automount.
Dec 13 02:03:40.675512 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:03:40.675531 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:03:40.675549 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:03:40.675572 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:03:40.675590 systemd[1]: Reached target integritysetup.target.
Dec 13 02:03:40.675609 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:03:40.675627 systemd[1]: Reached target remote-fs.target.
Dec 13 02:03:40.675646 systemd[1]: Reached target slices.target.
Dec 13 02:03:40.675665 systemd[1]: Reached target swap.target.
Dec 13 02:03:40.675685 systemd[1]: Reached target torcx.target.
Dec 13 02:03:40.675707 systemd[1]: Reached target veritysetup.target.
Dec 13 02:03:40.675726 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:03:40.675745 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:03:40.675764 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:03:40.675784 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:03:40.675806 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:03:40.675826 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:03:40.675846 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:03:40.675864 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:03:40.675883 systemd[1]: Mounting media.mount...
Dec 13 02:03:40.675902 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:03:40.675922 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:03:40.675942 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:03:40.675961 systemd[1]: Mounting tmp.mount...
Dec 13 02:03:40.675983 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:03:40.676003 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:03:40.676022 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:03:40.676041 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:03:40.676061 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:03:40.676081 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:03:40.676099 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:03:40.676119 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:03:40.676137 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:03:40.676160 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:03:40.676180 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:03:40.676201 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:03:40.676218 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:03:40.676239 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:03:40.676257 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:03:40.676277 systemd[1]: Starting systemd-journald.service...
Dec 13 02:03:40.676296 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:03:40.676316 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:03:40.676336 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:03:40.676355 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:03:40.676374 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:03:40.676393 systemd[1]: Stopped verity-setup.service.
Dec 13 02:03:40.676413 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:03:40.676432 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:03:40.676451 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:03:40.676498 systemd[1]: Mounted media.mount.
Dec 13 02:03:40.676521 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:03:40.676541 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:03:40.676560 systemd[1]: Mounted tmp.mount.
Dec 13 02:03:40.676579 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:03:40.676597 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:03:40.676617 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:03:40.676644 kernel: loop: module loaded
Dec 13 02:03:40.676665 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:03:40.676684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:03:40.676705 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:03:40.676724 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:03:40.676746 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:03:40.676767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:03:40.676787 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:03:40.676806 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:03:40.676825 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:03:40.676846 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:03:40.676865 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:03:40.676885 systemd[1]: Reached target network-pre.target.
Dec 13 02:03:40.676905 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:03:40.676932 systemd-journald[1090]: Journal started
Dec 13 02:03:40.677002 systemd-journald[1090]: Runtime Journal (/run/log/journal/1abd12d39eb8424aadd5369c204622a1) is 8.0M, max 159.0M, 151.0M free.
Dec 13 02:03:21.905000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:03:22.956000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:03:22.975000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:03:22.975000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:03:22.999000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:03:22.999000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:03:23.004000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:03:23.004000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:03:25.305000 audit[1009]: AVC avc: denied { associate } for pid=1009 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:03:25.305000 audit[1009]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=992 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:03:25.305000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:03:25.313000 audit[1009]: AVC avc: denied { associate } for pid=1009 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:03:25.313000 audit[1009]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=992 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:03:25.313000 audit: CWD cwd="/"
Dec 13 02:03:25.313000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:03:25.313000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:03:25.313000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:03:39.989000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:03:39.989000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:03:39.995000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:03:39.999000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:03:39.999000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:03:39.999000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:03:40.004000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:03:40.004000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:03:40.023000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:03:40.027000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:03:40.027000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:03:40.027000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:03:40.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.049000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:03:40.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.484000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:03:40.485000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:03:40.485000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:03:40.485000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:03:40.485000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:03:40.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:03:40.670000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:03:40.670000 audit[1090]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdaca34c10 a2=4000 a3=7ffdaca34cac items=0 ppid=1 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:03:40.670000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:03:39.988819 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:03:25.255706 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:03:40.029065 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:03:25.256501 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:03:25.256527 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:03:25.256566 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:03:25.256577 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:03:25.256624 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:03:25.256638 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:03:25.256857 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:03:25.256915 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:03:25.256931 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:03:25.290563 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:03:25.290615 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:03:25.290637 /usr/lib/systemd/system-generators/torcx-generator[1009]:
time="2024-12-13T02:03:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:03:25.290651 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:03:25.290676 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:03:25.290690 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:03:38.372359 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:03:38.372711 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:03:38.373163 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:03:38.373612 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:03:38.373680 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:03:38.373745 /usr/lib/systemd/system-generators/torcx-generator[1009]: time="2024-12-13T02:03:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:03:40.686482 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:03:40.706719 kernel: fuse: init (API version 7.34) Dec 13 02:03:40.718854 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:03:40.726501 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:03:40.733114 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:03:40.739487 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:03:40.745919 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:03:40.754104 systemd[1]: Started systemd-journald.service. 
Dec 13 02:03:40.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.755275 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:03:40.755424 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:03:40.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.758013 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:03:40.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.760443 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:03:40.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.762698 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:03:40.764754 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:03:40.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.767138 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:03:40.770595 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:03:40.773872 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:03:40.777240 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:03:40.780253 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:03:40.784276 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:03:40.792569 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:03:40.823028 systemd-journald[1090]: Time spent on flushing to /var/log/journal/1abd12d39eb8424aadd5369c204622a1 is 14.921ms for 1184 entries. Dec 13 02:03:40.823028 systemd-journald[1090]: System Journal (/var/log/journal/1abd12d39eb8424aadd5369c204622a1) is 8.0M, max 2.6G, 2.6G free. Dec 13 02:03:40.946157 systemd-journald[1090]: Received client request to flush runtime journal. Dec 13 02:03:40.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:40.838448 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:03:40.947277 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:03:40.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:03:41.897549 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:03:41.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:42.764063 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:03:42.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:42.766000 audit: BPF prog-id=21 op=LOAD Dec 13 02:03:42.766000 audit: BPF prog-id=22 op=LOAD Dec 13 02:03:42.766000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:03:42.766000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:03:42.767984 systemd[1]: Starting systemd-udevd.service... Dec 13 02:03:42.786776 systemd-udevd[1135]: Using default interface naming scheme 'v252'. Dec 13 02:03:43.667262 systemd[1]: Started systemd-udevd.service. Dec 13 02:03:43.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:43.679000 audit: BPF prog-id=23 op=LOAD Dec 13 02:03:43.681325 systemd[1]: Starting systemd-networkd.service... Dec 13 02:03:43.712179 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:03:43.788551 (udev-worker)[1155]: could not read from '/sys/module/pcc_cpufreq/initstate': No such device Dec 13 02:03:43.800478 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:03:43.826000 audit[1152]: AVC avc: denied { confidentiality } for pid=1152 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:03:43.838314 kernel: hv_vmbus: registering driver hv_balloon Dec 13 02:03:43.838380 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 02:03:43.826000 audit[1152]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ad877546f0 a1=f884 a2=7fd02e9b2bc5 a3=5 items=12 ppid=1135 pid=1152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:03:43.826000 audit: CWD cwd="/" Dec 13 02:03:43.826000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=1 name=(null) inode=15601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=2 name=(null) inode=15601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=3 name=(null) inode=15602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=4 name=(null) inode=15601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=5 name=(null) inode=15603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=6 name=(null) inode=15601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=7 name=(null) inode=15604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=8 name=(null) inode=15601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=9 name=(null) inode=15605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=10 name=(null) inode=15601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PATH item=11 name=(null) inode=15606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:03:43.826000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:03:43.855364 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 02:03:43.855519 kernel: hv_vmbus: registering driver hv_utils Dec 13 02:03:43.865334 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 02:03:43.865388 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 02:03:43.865416 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 02:03:44.523873 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 02:03:44.523957 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 02:03:44.523991 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 02:03:44.527815 kernel: Console: switching to colour dummy device 80x25 Dec 13 02:03:44.529838 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 02:03:44.645000 audit: BPF prog-id=24 op=LOAD Dec 13 02:03:44.645000 audit: BPF prog-id=25 op=LOAD Dec 13 02:03:44.645000 audit: BPF prog-id=26 op=LOAD Dec 13 02:03:44.648131 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:03:44.748204 systemd[1]: Started systemd-userdbd.service. Dec 13 02:03:44.754367 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1152) Dec 13 02:03:44.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:44.777416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:03:44.829378 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 02:03:44.935727 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:03:44.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:03:44.939588 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:03:45.443752 systemd-networkd[1153]: lo: Link UP Dec 13 02:03:45.443762 systemd-networkd[1153]: lo: Gained carrier Dec 13 02:03:45.444335 systemd-networkd[1153]: Enumeration completed Dec 13 02:03:45.444534 systemd[1]: Started systemd-networkd.service. Dec 13 02:03:45.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:45.448275 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:03:45.497555 systemd-networkd[1153]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:03:45.552371 kernel: mlx5_core 3d40:00:02.0 enP15680s1: Link up Dec 13 02:03:45.576226 systemd-networkd[1153]: enP15680s1: Link UP Dec 13 02:03:45.576400 kernel: hv_netvsc 7c1e5235-ea32-7c1e-5235-ea327c1e5235 eth0: Data path switched to VF: enP15680s1 Dec 13 02:03:45.576958 systemd-networkd[1153]: eth0: Link UP Dec 13 02:03:45.577070 systemd-networkd[1153]: eth0: Gained carrier Dec 13 02:03:45.582107 systemd-networkd[1153]: enP15680s1: Gained carrier Dec 13 02:03:45.602464 systemd-networkd[1153]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 02:03:45.788927 lvm[1210]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:03:45.819869 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:03:45.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:45.822656 systemd[1]: Reached target cryptsetup.target. Dec 13 02:03:45.825655 kernel: kauditd_printk_skb: 72 callbacks suppressed Dec 13 02:03:45.825733 kernel: audit: type=1130 audit(1734055425.821:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:45.840312 systemd[1]: Starting lvm2-activation.service... Dec 13 02:03:45.845723 lvm[1213]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:03:45.871977 systemd[1]: Finished lvm2-activation.service. Dec 13 02:03:45.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:45.874865 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:03:45.886367 kernel: audit: type=1130 audit(1734055425.873:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:45.888008 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:03:45.888045 systemd[1]: Reached target local-fs.target. Dec 13 02:03:45.890153 systemd[1]: Reached target machines.target. Dec 13 02:03:45.893306 systemd[1]: Starting ldconfig.service... Dec 13 02:03:45.895960 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:03:45.896055 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:03:45.897190 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:03:45.900152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:03:45.903978 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:03:45.907597 systemd[1]: Starting systemd-sysext.service... Dec 13 02:03:45.994680 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1215 (bootctl) Dec 13 02:03:45.996564 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:03:46.000291 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:03:46.036962 kernel: audit: type=1130 audit(1734055426.001:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.083780 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:03:46.084417 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:03:46.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.090660 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:03:46.098370 kernel: audit: type=1130 audit(1734055426.085:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.182367 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:03:46.182595 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:03:46.195376 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 02:03:46.362380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:03:46.380381 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 02:03:46.390070 (sd-sysext)[1227]: Using extensions 'kubernetes'. Dec 13 02:03:46.390513 (sd-sysext)[1227]: Merged extensions into '/usr'. Dec 13 02:03:46.405868 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:03:46.407286 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:03:46.407697 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:03:46.412441 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:03:46.414614 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:03:46.418234 systemd[1]: Starting modprobe@loop.service... Dec 13 02:03:46.418429 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:03:46.418579 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:03:46.456258 kernel: audit: type=1130 audit(1734055426.419:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.456327 kernel: audit: type=1131 audit(1734055426.430:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.456418 kernel: audit: type=1130 audit(1734055426.430:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.418715 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:03:46.419650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:03:46.420524 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:03:46.431576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:03:46.431732 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:03:46.432267 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:03:46.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.458437 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:03:46.470300 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:03:46.470555 systemd[1]: Finished modprobe@loop.service. Dec 13 02:03:46.471238 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:03:46.473824 kernel: audit: type=1131 audit(1734055426.430:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.474593 systemd[1]: Finished systemd-sysext.service. 
Dec 13 02:03:46.498309 kernel: audit: type=1130 audit(1734055426.469:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.498386 kernel: audit: type=1131 audit(1734055426.469:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.500007 systemd[1]: Starting ensure-sysext.service... Dec 13 02:03:46.503292 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:03:46.510056 systemd[1]: Reloading. Dec 13 02:03:46.561740 /usr/lib/systemd/system-generators/torcx-generator[1254]: time="2024-12-13T02:03:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:03:46.561778 /usr/lib/systemd/system-generators/torcx-generator[1254]: time="2024-12-13T02:03:46Z" level=info msg="torcx already run" Dec 13 02:03:46.661329 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:03:46.661357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:03:46.677412 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:03:46.743000 audit: BPF prog-id=27 op=LOAD Dec 13 02:03:46.743000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:03:46.743000 audit: BPF prog-id=28 op=LOAD Dec 13 02:03:46.743000 audit: BPF prog-id=29 op=LOAD Dec 13 02:03:46.743000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:03:46.743000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:03:46.744000 audit: BPF prog-id=30 op=LOAD Dec 13 02:03:46.744000 audit: BPF prog-id=31 op=LOAD Dec 13 02:03:46.744000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:03:46.744000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:03:46.745000 audit: BPF prog-id=32 op=LOAD Dec 13 02:03:46.745000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:03:46.746000 audit: BPF prog-id=33 op=LOAD Dec 13 02:03:46.746000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:03:46.746000 audit: BPF prog-id=34 op=LOAD Dec 13 02:03:46.746000 audit: BPF prog-id=35 op=LOAD Dec 13 02:03:46.747000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:03:46.747000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:03:46.761416 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:03:46.761707 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 02:03:46.763151 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:03:46.766555 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:03:46.769958 systemd[1]: Starting modprobe@loop.service... Dec 13 02:03:46.771969 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:03:46.772154 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:03:46.772294 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:03:46.773345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:03:46.773594 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:03:46.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.776515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:03:46.776658 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:03:46.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.779933 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:03:46.780093 systemd[1]: Finished modprobe@loop.service. Dec 13 02:03:46.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.788168 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:03:46.788548 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:03:46.790000 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:03:46.793327 systemd[1]: Starting modprobe@drm.service... Dec 13 02:03:46.796552 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:03:46.800656 systemd[1]: Starting modprobe@loop.service... Dec 13 02:03:46.802694 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:03:46.802894 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:03:46.803096 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:03:46.804296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:03:46.804486 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:03:46.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.807342 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:03:46.807505 systemd[1]: Finished modprobe@drm.service. Dec 13 02:03:46.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.810175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:03:46.810314 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:03:46.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.813324 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:03:46.813501 systemd[1]: Finished modprobe@loop.service. Dec 13 02:03:46.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.816406 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:03:46.816542 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:03:46.817852 systemd[1]: Finished ensure-sysext.service. Dec 13 02:03:46.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:46.879369 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:03:47.071406 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 02:03:47.287131 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:03:47.477655 systemd-networkd[1153]: eth0: Gained IPv6LL Dec 13 02:03:47.483279 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:03:47.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:47.515314 systemd-fsck[1222]: fsck.fat 4.2 (2021-01-31) Dec 13 02:03:47.515314 systemd-fsck[1222]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:03:47.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:47.519175 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:03:47.524112 systemd[1]: Mounting boot.mount... Dec 13 02:03:47.547428 systemd[1]: Mounted boot.mount. Dec 13 02:03:47.562938 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:03:47.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.615066 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:03:48.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.619158 systemd[1]: Starting audit-rules.service... Dec 13 02:03:48.622284 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:03:48.625914 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:03:48.628000 audit: BPF prog-id=36 op=LOAD Dec 13 02:03:48.630756 systemd[1]: Starting systemd-resolved.service... Dec 13 02:03:48.634000 audit: BPF prog-id=37 op=LOAD Dec 13 02:03:48.637074 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:03:48.640424 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:03:48.671000 audit[1334]: SYSTEM_BOOT pid=1334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.678029 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:03:48.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.733919 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:03:48.736626 systemd[1]: Reached target time-set.target. Dec 13 02:03:48.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.755606 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 02:03:48.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.758310 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:03:48.797543 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:03:48.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:48.863103 systemd-resolved[1332]: Positive Trust Anchors: Dec 13 02:03:48.863159 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:03:48.863230 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:03:49.099369 systemd-resolved[1332]: Using system hostname 'ci-3510.3.6-a-eca73107d2'. Dec 13 02:03:49.101324 systemd[1]: Started systemd-resolved.service. Dec 13 02:03:49.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:03:49.103925 systemd[1]: Reached target network.target. Dec 13 02:03:49.106134 systemd[1]: Reached target network-online.target. Dec 13 02:03:49.108655 systemd[1]: Reached target nss-lookup.target. Dec 13 02:03:49.151000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:03:49.151000 audit[1350]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffeae6d790 a2=420 a3=0 items=0 ppid=1329 pid=1350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:03:49.151000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:03:49.153662 augenrules[1350]: No rules Dec 13 02:03:49.154221 systemd[1]: Finished audit-rules.service. Dec 13 02:03:49.159922 systemd-timesyncd[1333]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Dec 13 02:03:49.160077 systemd-timesyncd[1333]: Initial clock synchronization to Fri 2024-12-13 02:03:49.170728 UTC. Dec 13 02:03:58.491566 ldconfig[1214]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:03:58.501194 systemd[1]: Finished ldconfig.service. Dec 13 02:03:58.505229 systemd[1]: Starting systemd-update-done.service... Dec 13 02:03:58.525904 systemd[1]: Finished systemd-update-done.service. Dec 13 02:03:58.528186 systemd[1]: Reached target sysinit.target. Dec 13 02:03:58.530295 systemd[1]: Started motdgen.path. 
Dec 13 02:03:58.532057 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:03:58.534899 systemd[1]: Started logrotate.timer. Dec 13 02:03:58.536682 systemd[1]: Started mdadm.timer. Dec 13 02:03:58.538338 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:03:58.540395 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:03:58.540431 systemd[1]: Reached target paths.target. Dec 13 02:03:58.542329 systemd[1]: Reached target timers.target. Dec 13 02:03:58.544496 systemd[1]: Listening on dbus.socket. Dec 13 02:03:58.547377 systemd[1]: Starting docker.socket... Dec 13 02:03:58.551452 systemd[1]: Listening on sshd.socket. Dec 13 02:03:58.553341 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:03:58.553755 systemd[1]: Listening on docker.socket. Dec 13 02:03:58.555623 systemd[1]: Reached target sockets.target. Dec 13 02:03:58.557552 systemd[1]: Reached target basic.target. Dec 13 02:03:58.559405 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:03:58.559436 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:03:58.560395 systemd[1]: Starting containerd.service... Dec 13 02:03:58.563604 systemd[1]: Starting dbus.service... Dec 13 02:03:58.566845 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:03:58.570012 systemd[1]: Starting extend-filesystems.service... Dec 13 02:03:58.572722 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:03:58.589974 systemd[1]: Starting kubelet.service... Dec 13 02:03:58.592927 systemd[1]: Starting motdgen.service... Dec 13 02:03:58.595979 systemd[1]: Started nvidia.service. Dec 13 02:03:58.600028 systemd[1]: Starting prepare-helm.service... Dec 13 02:03:58.603718 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:03:58.607462 systemd[1]: Starting sshd-keygen.service... Dec 13 02:03:58.611919 systemd[1]: Starting systemd-logind.service... Dec 13 02:03:58.614266 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:03:58.614430 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:03:58.614970 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:03:58.616115 systemd[1]: Starting update-engine.service... Dec 13 02:03:58.620531 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:03:58.626679 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:03:58.626913 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:03:58.668271 jq[1378]: true Dec 13 02:03:58.671670 jq[1360]: false Dec 13 02:03:58.672239 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:03:58.672405 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Dec 13 02:03:58.698055 jq[1383]: true Dec 13 02:03:58.724887 extend-filesystems[1361]: Found loop1 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda1 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda2 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda3 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found usr Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda4 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda6 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda7 Dec 13 02:03:58.724887 extend-filesystems[1361]: Found sda9 Dec 13 02:03:58.724887 extend-filesystems[1361]: Checking size of /dev/sda9 Dec 13 02:03:58.744742 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:03:58.744932 systemd[1]: Finished motdgen.service. Dec 13 02:03:58.776482 env[1405]: time="2024-12-13T02:03:58.776442922Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:03:58.799590 env[1405]: time="2024-12-13T02:03:58.798835906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:03:58.799590 env[1405]: time="2024-12-13T02:03:58.798973963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:03:58.800840 env[1405]: time="2024-12-13T02:03:58.800778703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:03:58.800840 env[1405]: time="2024-12-13T02:03:58.800815918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801105 env[1405]: time="2024-12-13T02:03:58.801073024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801172 env[1405]: time="2024-12-13T02:03:58.801106137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801172 env[1405]: time="2024-12-13T02:03:58.801122544Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:03:58.801172 env[1405]: time="2024-12-13T02:03:58.801134649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801284 env[1405]: time="2024-12-13T02:03:58.801229588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801544 env[1405]: time="2024-12-13T02:03:58.801516906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801731 env[1405]: time="2024-12-13T02:03:58.801703782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:03:58.801731 env[1405]: time="2024-12-13T02:03:58.801725691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:03:58.801820 env[1405]: time="2024-12-13T02:03:58.801792419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:03:58.801820 env[1405]: time="2024-12-13T02:03:58.801807825Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818404332Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818438546Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818456453Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818492468Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818511275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818538987Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818558295Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818576502Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818594109Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818611917Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818628524Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818644830Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818752174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:03:58.820366 env[1405]: time="2024-12-13T02:03:58.818835208Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819102018Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819130129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819147536Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819194256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819210762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819227069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819242375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819258582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819275089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819290395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819308202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819326710Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819467268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819486175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.820905 env[1405]: time="2024-12-13T02:03:58.819501982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:03:58.821428 env[1405]: time="2024-12-13T02:03:58.819517288Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:03:58.821428 env[1405]: time="2024-12-13T02:03:58.819537096Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:03:58.821428 env[1405]: time="2024-12-13T02:03:58.819552002Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:03:58.821428 env[1405]: time="2024-12-13T02:03:58.819573911Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:03:58.821428 env[1405]: time="2024-12-13T02:03:58.819610726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:03:58.821618 env[1405]: time="2024-12-13T02:03:58.819854626Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:03:58.821618 env[1405]: time="2024-12-13T02:03:58.819924955Z" level=info msg="Connect containerd service" Dec 13 02:03:58.821618 env[1405]: time="2024-12-13T02:03:58.819961870Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.821930578Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822081140Z" level=info msg="Start subscribing containerd event" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822126958Z" level=info msg="Start recovering state" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822240205Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822243906Z" level=info msg="Start event monitor" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822285523Z" level=info msg="Start snapshots syncer" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822313635Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822324639Z" level=info msg="Start streaming server" Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822292326Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:03:58.876669 env[1405]: time="2024-12-13T02:03:58.822531024Z" level=info msg="containerd successfully booted in 0.046723s" Dec 13 02:03:58.822602 systemd[1]: Started containerd.service. Dec 13 02:03:58.846813 systemd-logind[1373]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:03:58.847422 systemd-logind[1373]: New seat seat0. Dec 13 02:03:58.903261 extend-filesystems[1361]: Old size kept for /dev/sda9 Dec 13 02:03:58.903261 extend-filesystems[1361]: Found sr0 Dec 13 02:03:58.908255 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:03:58.908469 systemd[1]: Finished extend-filesystems.service. Dec 13 02:03:58.929070 bash[1401]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:03:58.929830 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:03:58.939193 tar[1381]: linux-amd64/helm Dec 13 02:03:59.044320 dbus-daemon[1359]: [system] SELinux support is enabled Dec 13 02:03:59.044523 systemd[1]: Started dbus.service. Dec 13 02:03:59.049515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:03:59.049552 systemd[1]: Reached target system-config.target. Dec 13 02:03:59.051898 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:03:59.051926 systemd[1]: Reached target user-config.target. Dec 13 02:03:59.058052 systemd[1]: Started systemd-logind.service. Dec 13 02:03:59.061253 dbus-daemon[1359]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:03:59.165186 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:03:59.569719 tar[1381]: linux-amd64/LICENSE Dec 13 02:03:59.569970 tar[1381]: linux-amd64/README.md Dec 13 02:03:59.576836 systemd[1]: Finished prepare-helm.service. Dec 13 02:03:59.996434 systemd[1]: Started kubelet.service. Dec 13 02:04:00.055247 sshd_keygen[1377]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:04:00.071068 update_engine[1375]: I1213 02:04:00.070258 1375 main.cc:92] Flatcar Update Engine starting Dec 13 02:04:00.084818 systemd[1]: Finished sshd-keygen.service. Dec 13 02:04:00.089314 systemd[1]: Starting issuegen.service... Dec 13 02:04:00.093581 systemd[1]: Started waagent.service. Dec 13 02:04:00.099670 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:04:00.099841 systemd[1]: Finished issuegen.service. Dec 13 02:04:00.103279 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:04:00.112496 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:04:00.116654 systemd[1]: Started getty@tty1.service. Dec 13 02:04:00.120127 systemd[1]: Started serial-getty@ttyS0.service. 
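Annotation: the containerd startup above ends with "failed to load cni during init ... no network config found in /etc/cni/net.d", which is the expected first-boot state: no pod network add-on has installed a CNI config yet. As an illustration only (the file name, network name, and subnet below are assumed values, not taken from this host), a minimal bridge conflist that would satisfy the CRI plugin's config loader could be generated like this:

    import json, os

    # Illustrative CNI conflist; name, bridge, and subnet are assumed values.
    conf = {
        "cniVersion": "0.3.1",
        "name": "examplenet",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }
    os.makedirs("/etc/cni/net.d", exist_ok=True)
    with open("/etc/cni/net.d/10-examplenet.conflist", "w") as f:
        json.dump(conf, f, indent=2)

In practice a network add-on (flannel, Calico, etc.) writes this file, and the "cni network conf syncer" started above picks it up without a containerd restart.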
Dec 13 02:04:00.122753 systemd[1]: Reached target getty.target. Dec 13 02:04:00.131041 systemd[1]: Started update-engine.service. Dec 13 02:04:00.132447 update_engine[1375]: I1213 02:04:00.131263 1375 update_check_scheduler.cc:74] Next update check in 7m29s Dec 13 02:04:00.134966 systemd[1]: Started locksmithd.service. Dec 13 02:04:00.137083 systemd[1]: Reached target multi-user.target. Dec 13 02:04:00.140586 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:04:00.152132 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:04:00.152301 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:04:00.154978 systemd[1]: Startup finished in 1.084s (firmware) + 35.307s (loader) + 945ms (kernel) + 18.450s (initrd) + 38.354s (userspace) = 1min 34.142s. Dec 13 02:04:00.632248 login[1478]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:04:00.635373 login[1479]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:04:00.659521 kubelet[1464]: E1213 02:04:00.659385 1464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:00.662914 systemd[1]: Created slice user-500.slice. Dec 13 02:04:00.664230 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:04:00.665965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:00.666120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:00.666383 systemd[1]: kubelet.service: Consumed 1.000s CPU time. Dec 13 02:04:00.673101 systemd-logind[1373]: New session 2 of user core. Dec 13 02:04:00.676203 systemd-logind[1373]: New session 1 of user core. Dec 13 02:04:00.680787 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:04:00.682605 systemd[1]: Starting user@500.service... Dec 13 02:04:00.700611 (systemd)[1490]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:00.871647 systemd[1490]: Queued start job for default target default.target. Dec 13 02:04:00.872198 systemd[1490]: Reached target paths.target. Dec 13 02:04:00.872227 systemd[1490]: Reached target sockets.target. Dec 13 02:04:00.872245 systemd[1490]: Reached target timers.target. Dec 13 02:04:00.872259 systemd[1490]: Reached target basic.target. Dec 13 02:04:00.872388 systemd[1]: Started user@500.service. Dec 13 02:04:00.873569 systemd[1]: Started session-1.scope. Dec 13 02:04:00.874364 systemd[1]: Started session-2.scope. Dec 13 02:04:00.875265 systemd[1490]: Reached target default.target. Dec 13 02:04:00.875501 systemd[1490]: Startup finished in 168ms. 
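Annotation: the kubelet failure above (run.go:72, "open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: on a kubeadm-managed node that file is only written during kubeadm init/join, so systemd keeps restarting the unit until then (the rising restart counter is visible in later entries). A minimal sketch of such a file, with assumed field values for illustration; cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config dumped above:

    import os

    # Minimal KubeletConfiguration sketch; kubeadm writes the real file on join.
    KUBELET_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )
    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(KUBELET_CONFIG)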
Dec 13 02:04:01.988168 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:04:09.799380 waagent[1473]: 2024-12-13T02:04:09.799257Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 02:04:09.812797 waagent[1473]: 2024-12-13T02:04:09.801590Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 02:04:09.812797 waagent[1473]: 2024-12-13T02:04:09.802548Z INFO Daemon Daemon Python: 3.9.16 Dec 13 02:04:09.812797 waagent[1473]: 2024-12-13T02:04:09.803994Z INFO Daemon Daemon Run daemon Dec 13 02:04:09.812797 waagent[1473]: 2024-12-13T02:04:09.805345Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 02:04:09.817096 waagent[1473]: 2024-12-13T02:04:09.816972Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 02:04:09.824892 waagent[1473]: 2024-12-13T02:04:09.824785Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 02:04:09.829606 waagent[1473]: 2024-12-13T02:04:09.829542Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 02:04:09.832054 waagent[1473]: 2024-12-13T02:04:09.831994Z INFO Daemon Daemon Using waagent for provisioning Dec 13 02:04:09.835094 waagent[1473]: 2024-12-13T02:04:09.835030Z INFO Daemon Daemon Activate resource disk Dec 13 02:04:09.837392 waagent[1473]: 2024-12-13T02:04:09.837320Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 02:04:09.847362 waagent[1473]: 2024-12-13T02:04:09.847287Z INFO Daemon Daemon Found device: None Dec 13 02:04:09.850054 waagent[1473]: 2024-12-13T02:04:09.849989Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 02:04:09.854031 waagent[1473]: 2024-12-13T02:04:09.853971Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 02:04:09.859836 waagent[1473]: 2024-12-13T02:04:09.859773Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 02:04:09.862740 waagent[1473]: 2024-12-13T02:04:09.862679Z INFO Daemon Daemon Running default provisioning handler Dec 13 02:04:09.873289 waagent[1473]: 2024-12-13T02:04:09.873169Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 02:04:09.881321 waagent[1473]: 2024-12-13T02:04:09.881217Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 02:04:09.885841 waagent[1473]: 2024-12-13T02:04:09.885778Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 02:04:09.888374 waagent[1473]: 2024-12-13T02:04:09.888306Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 02:04:09.906819 waagent[1473]: 2024-12-13T02:04:09.906695Z INFO Daemon Daemon Successfully mounted dvd Dec 13 02:04:09.986101 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Dec 13 02:04:10.031781 waagent[1473]: 2024-12-13T02:04:10.031622Z INFO Daemon Daemon Detect protocol endpoint Dec 13 02:04:10.045852 waagent[1473]: 2024-12-13T02:04:10.033159Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 02:04:10.045852 waagent[1473]: 2024-12-13T02:04:10.034111Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 13 02:04:10.045852 waagent[1473]: 2024-12-13T02:04:10.034878Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 02:04:10.045852 waagent[1473]: 2024-12-13T02:04:10.035875Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 02:04:10.045852 waagent[1473]: 2024-12-13T02:04:10.036481Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 02:04:10.270473 waagent[1473]: 2024-12-13T02:04:10.270400Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 02:04:10.274430 waagent[1473]: 2024-12-13T02:04:10.274381Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 02:04:10.277219 waagent[1473]: 2024-12-13T02:04:10.277155Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 02:04:10.714034 waagent[1473]: 2024-12-13T02:04:10.713881Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 02:04:10.723399 waagent[1473]: 2024-12-13T02:04:10.723314Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 02:04:10.728212 waagent[1473]: 2024-12-13T02:04:10.724637Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 02:04:10.802422 waagent[1473]: 2024-12-13T02:04:10.802286Z INFO Daemon Daemon Found private key matching thumbprint 59009FC169E43EF0252B1B420D97645A30F8A81C Dec 13 02:04:10.807614 waagent[1473]: 2024-12-13T02:04:10.807541Z INFO Daemon Daemon Certificate with thumbprint 3FF3A49C9BFB42E80BC2163D18BD47C61998D366 has no matching private key. Dec 13 02:04:10.812776 waagent[1473]: 2024-12-13T02:04:10.812707Z INFO Daemon Daemon Fetch goal state completed Dec 13 02:04:10.836449 waagent[1473]: 2024-12-13T02:04:10.836392Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: e3fbea42-ba0e-43b6-b3cb-d72680ba852b New eTag: 13116986215607582184] Dec 13 02:04:10.842368 waagent[1473]: 2024-12-13T02:04:10.842293Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 02:04:10.854793 waagent[1473]: 2024-12-13T02:04:10.854733Z INFO Daemon Daemon Starting provisioning Dec 13 02:04:10.857408 waagent[1473]: 2024-12-13T02:04:10.857332Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 02:04:10.859748 waagent[1473]: 2024-12-13T02:04:10.859690Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-eca73107d2] Dec 13 02:04:10.905179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:04:10.905519 systemd[1]: Stopped kubelet.service. Dec 13 02:04:10.905585 systemd[1]: kubelet.service: Consumed 1.000s CPU time. Dec 13 02:04:10.907475 systemd[1]: Starting kubelet.service... Dec 13 02:04:10.922476 waagent[1473]: 2024-12-13T02:04:10.922313Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-eca73107d2] Dec 13 02:04:10.926555 waagent[1473]: 2024-12-13T02:04:10.926455Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 02:04:10.930308 waagent[1473]: 2024-12-13T02:04:10.930224Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 02:04:10.952001 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 02:04:10.952269 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 02:04:10.952382 systemd[1]: Stopping systemd-networkd-wait-online.service... 
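Annotation: the "Detect protocol endpoint" sequence above reduces to confirming a route to 168.63.129.16 (the fixed Azure wireserver address) and then asking it which wire protocol versions it supports, which is where the 2012-11-30 / 2015-04-05 negotiation logged just above comes from. A minimal sketch of that probe (error handling and the DHCP fallback path omitted):

    import urllib.request

    WIRESERVER = "168.63.129.16"  # fixed Azure wireserver address, as logged above

    # Ask the wireserver which wire protocol versions it supports.
    with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions", timeout=10) as resp:
        print(resp.read().decode())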
Dec 13 02:04:10.952736 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:04:10.957407 systemd-networkd[1153]: eth0: DHCPv6 lease lost Dec 13 02:04:10.959063 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:04:10.959264 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:04:10.962808 systemd[1]: Starting systemd-networkd.service... Dec 13 02:04:10.998756 systemd-networkd[1532]: enP15680s1: Link UP Dec 13 02:04:10.999086 systemd-networkd[1532]: enP15680s1: Gained carrier Dec 13 02:04:11.001222 systemd-networkd[1532]: eth0: Link UP Dec 13 02:04:11.001359 systemd-networkd[1532]: eth0: Gained carrier Dec 13 02:04:11.002045 systemd-networkd[1532]: lo: Link UP Dec 13 02:04:11.002148 systemd-networkd[1532]: lo: Gained carrier Dec 13 02:04:11.002720 systemd-networkd[1532]: eth0: Gained IPv6LL Dec 13 02:04:11.003470 systemd-networkd[1532]: Enumeration completed Dec 13 02:04:11.003678 systemd[1]: Started systemd-networkd.service. Dec 13 02:04:11.005627 systemd-networkd[1532]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:04:11.006781 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:04:11.008401 waagent[1473]: 2024-12-13T02:04:11.007910Z INFO Daemon Daemon Create user account if not exists Dec 13 02:04:11.013025 waagent[1473]: 2024-12-13T02:04:11.012941Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 02:04:11.016642 waagent[1473]: 2024-12-13T02:04:11.016563Z INFO Daemon Daemon Configure sudoer Dec 13 02:04:11.061281 systemd[1]: Started kubelet.service. Dec 13 02:04:11.068450 systemd-networkd[1532]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 02:04:11.070963 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:04:11.240833 waagent[1473]: 2024-12-13T02:04:11.240691Z INFO Daemon Daemon Configure sshd Dec 13 02:04:11.243675 waagent[1473]: 2024-12-13T02:04:11.243579Z INFO Daemon Daemon Deploy ssh public key. Dec 13 02:04:11.581055 kubelet[1537]: E1213 02:04:11.580994 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:11.583902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:11.584069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:11.619330 waagent[1473]: 2024-12-13T02:04:11.619205Z INFO Daemon Daemon Decode custom data Dec 13 02:04:11.623455 waagent[1473]: 2024-12-13T02:04:11.620750Z INFO Daemon Daemon Save custom data Dec 13 02:04:12.676482 waagent[1473]: 2024-12-13T02:04:12.676382Z INFO Daemon Daemon Provisioning complete Dec 13 02:04:12.692515 waagent[1473]: 2024-12-13T02:04:12.692447Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 02:04:12.696041 waagent[1473]: 2024-12-13T02:04:12.695971Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 13 02:04:12.701511 waagent[1473]: 2024-12-13T02:04:12.701439Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 02:04:12.973980 waagent[1549]: 2024-12-13T02:04:12.972327Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 02:04:12.973980 waagent[1549]: 2024-12-13T02:04:12.973270Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 02:04:12.973980 waagent[1549]: 2024-12-13T02:04:12.973445Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 02:04:12.985743 waagent[1549]: 2024-12-13T02:04:12.985669Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 02:04:12.985904 waagent[1549]: 2024-12-13T02:04:12.985849Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 02:04:13.045400 waagent[1549]: 2024-12-13T02:04:13.045270Z INFO ExtHandler ExtHandler Found private key matching thumbprint 59009FC169E43EF0252B1B420D97645A30F8A81C Dec 13 02:04:13.045620 waagent[1549]: 2024-12-13T02:04:13.045562Z INFO ExtHandler ExtHandler Certificate with thumbprint 3FF3A49C9BFB42E80BC2163D18BD47C61998D366 has no matching private key. Dec 13 02:04:13.045853 waagent[1549]: 2024-12-13T02:04:13.045802Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 02:04:13.059026 waagent[1549]: 2024-12-13T02:04:13.058966Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 63690b7b-20af-4ba3-9f47-d7bbb41fe28e New eTag: 13116986215607582184] Dec 13 02:04:13.059546 waagent[1549]: 2024-12-13T02:04:13.059488Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 02:04:13.215407 waagent[1549]: 2024-12-13T02:04:13.215245Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 02:04:13.224244 waagent[1549]: 2024-12-13T02:04:13.224107Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1549 Dec 13 02:04:13.227653 waagent[1549]: 2024-12-13T02:04:13.227586Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 02:04:13.228910 waagent[1549]: 2024-12-13T02:04:13.228851Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 02:04:13.405308 waagent[1549]: 2024-12-13T02:04:13.405241Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 02:04:13.405776 waagent[1549]: 2024-12-13T02:04:13.405708Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 02:04:13.413932 waagent[1549]: 2024-12-13T02:04:13.413878Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 02:04:13.414408 waagent[1549]: 2024-12-13T02:04:13.414331Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 02:04:13.415466 waagent[1549]: 2024-12-13T02:04:13.415403Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 02:04:13.416735 waagent[1549]: 2024-12-13T02:04:13.416675Z INFO ExtHandler ExtHandler Starting env monitor service. 
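Annotation: the "Unable to setup the persistent firewall rules: [Errno 30] Read-only file system" error above follows from /lib resolving into the read-only /usr partition on this image: the agent tries to install waagent-network-setup.service under /lib/systemd/system, which can never succeed here. Local units belong under the writable /etc/systemd/system instead; a rough sketch of that workaround (the unit body is a placeholder for illustration, not the agent's real unit):

    import subprocess

    # Placeholder unit body; the real waagent-network-setup.service differs.
    UNIT = (
        "[Unit]\n"
        "Description=waagent network setup (illustrative placeholder)\n"
        "\n"
        "[Service]\n"
        "Type=oneshot\n"
        "ExecStart=/usr/bin/true\n"
        "\n"
        "[Install]\n"
        "WantedBy=multi-user.target\n"
    )
    with open("/etc/systemd/system/waagent-network-setup.service", "w") as f:
        f.write(UNIT)
    subprocess.run(["systemctl", "daemon-reload"], check=True)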
Dec 13 02:04:13.417364 waagent[1549]: 2024-12-13T02:04:13.417298Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 02:04:13.417942 waagent[1549]: 2024-12-13T02:04:13.417880Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 02:04:13.418391 waagent[1549]: 2024-12-13T02:04:13.418301Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 02:04:13.418591 waagent[1549]: 2024-12-13T02:04:13.418540Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 02:04:13.418819 waagent[1549]: 2024-12-13T02:04:13.418768Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 02:04:13.419061 waagent[1549]: 2024-12-13T02:04:13.419010Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 02:04:13.419707 waagent[1549]: 2024-12-13T02:04:13.419651Z INFO EnvHandler ExtHandler Configure routes Dec 13 02:04:13.420059 waagent[1549]: 2024-12-13T02:04:13.420006Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 02:04:13.420223 waagent[1549]: 2024-12-13T02:04:13.420170Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 02:04:13.420561 waagent[1549]: 2024-12-13T02:04:13.420510Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 02:04:13.420650 waagent[1549]: 2024-12-13T02:04:13.420599Z INFO EnvHandler ExtHandler Gateway:None Dec 13 02:04:13.420939 waagent[1549]: 2024-12-13T02:04:13.420889Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 02:04:13.421100 waagent[1549]: 2024-12-13T02:04:13.421034Z INFO EnvHandler ExtHandler Routes:None Dec 13 02:04:13.421664 waagent[1549]: 2024-12-13T02:04:13.421608Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 02:04:13.425935 waagent[1549]: 2024-12-13T02:04:13.425824Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 02:04:13.425935 waagent[1549]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 02:04:13.425935 waagent[1549]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 02:04:13.425935 waagent[1549]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 02:04:13.425935 waagent[1549]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 02:04:13.425935 waagent[1549]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 02:04:13.425935 waagent[1549]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 02:04:13.445966 waagent[1549]: 2024-12-13T02:04:13.445903Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 02:04:13.447069 waagent[1549]: 2024-12-13T02:04:13.447014Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 02:04:13.448275 waagent[1549]: 2024-12-13T02:04:13.448228Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 02:04:13.487287 waagent[1549]: 2024-12-13T02:04:13.487159Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
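Annotation: the routing table dumped above is raw /proc/net/route, where Destination and Gateway are little-endian hex IPv4 addresses: 0108C80A is 10.200.8.1 (the default gateway), 10813FA8 is 168.63.129.16 (the wireserver route), and FEA9FEA9 is 169.254.169.254 (the instance metadata endpoint). A small decoder:

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex
        return ".".join(str(b) for b in reversed(bytes.fromhex(h)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            iface, dest, gw = line.split()[:3]
            print(iface, hex_to_ip(dest), "via", hex_to_ip(gw))

    print(hex_to_ip("10813FA8"))  # -> 168.63.129.16, matching the route above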
Dec 13 02:04:13.591832 waagent[1549]: 2024-12-13T02:04:13.591749Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1532' Dec 13 02:04:13.739379 waagent[1549]: 2024-12-13T02:04:13.739180Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 02:04:13.739379 waagent[1549]: Executing ['ip', '-a', '-o', 'link']: Dec 13 02:04:13.739379 waagent[1549]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 02:04:13.739379 waagent[1549]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:ea:32 brd ff:ff:ff:ff:ff:ff Dec 13 02:04:13.739379 waagent[1549]: 3: enP15680s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:ea:32 brd ff:ff:ff:ff:ff:ff\ altname enP15680p0s2 Dec 13 02:04:13.739379 waagent[1549]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 02:04:13.739379 waagent[1549]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 02:04:13.739379 waagent[1549]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 02:04:13.739379 waagent[1549]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 02:04:13.739379 waagent[1549]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 02:04:13.739379 waagent[1549]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:ea32/64 scope link \ valid_lft forever preferred_lft forever Dec 13 02:04:13.780974 waagent[1549]: 2024-12-13T02:04:13.780909Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 02:04:14.705964 waagent[1473]: 2024-12-13T02:04:14.705790Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 02:04:14.713885 waagent[1473]: 2024-12-13T02:04:14.713820Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 02:04:15.749374 waagent[1581]: 2024-12-13T02:04:15.749268Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 02:04:15.750092 waagent[1581]: 2024-12-13T02:04:15.750020Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 02:04:15.750240 waagent[1581]: 2024-12-13T02:04:15.750185Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 02:04:15.750416 waagent[1581]: 2024-12-13T02:04:15.750367Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 02:04:15.759969 waagent[1581]: 2024-12-13T02:04:15.759873Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 02:04:15.760335 waagent[1581]: 2024-12-13T02:04:15.760280Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 02:04:15.760507 waagent[1581]: 2024-12-13T02:04:15.760459Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 02:04:15.772592 waagent[1581]: 2024-12-13T02:04:15.772520Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 02:04:15.784814 waagent[1581]: 2024-12-13T02:04:15.784753Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 02:04:15.785780 waagent[1581]: 2024-12-13T02:04:15.785718Z INFO ExtHandler Dec 13 
02:04:15.785932 waagent[1581]: 2024-12-13T02:04:15.785880Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2fb4e82e-6df1-4784-9235-498466797b38 eTag: 13116986215607582184 source: Fabric] Dec 13 02:04:15.786656 waagent[1581]: 2024-12-13T02:04:15.786598Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 02:04:15.787742 waagent[1581]: 2024-12-13T02:04:15.787680Z INFO ExtHandler Dec 13 02:04:15.787877 waagent[1581]: 2024-12-13T02:04:15.787826Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 02:04:15.794906 waagent[1581]: 2024-12-13T02:04:15.794854Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 02:04:15.795329 waagent[1581]: 2024-12-13T02:04:15.795280Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 02:04:15.814614 waagent[1581]: 2024-12-13T02:04:15.814553Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Dec 13 02:04:15.878237 waagent[1581]: 2024-12-13T02:04:15.878122Z INFO ExtHandler Downloaded certificate {'thumbprint': '59009FC169E43EF0252B1B420D97645A30F8A81C', 'hasPrivateKey': True} Dec 13 02:04:15.879183 waagent[1581]: 2024-12-13T02:04:15.879121Z INFO ExtHandler Downloaded certificate {'thumbprint': '3FF3A49C9BFB42E80BC2163D18BD47C61998D366', 'hasPrivateKey': False} Dec 13 02:04:15.880155 waagent[1581]: 2024-12-13T02:04:15.880095Z INFO ExtHandler Fetch goal state completed Dec 13 02:04:15.901343 waagent[1581]: 2024-12-13T02:04:15.901250Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 02:04:15.912289 waagent[1581]: 2024-12-13T02:04:15.912207Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1581 Dec 13 02:04:15.915285 waagent[1581]: 2024-12-13T02:04:15.915222Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 02:04:15.916224 waagent[1581]: 2024-12-13T02:04:15.916164Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 02:04:15.916525 waagent[1581]: 2024-12-13T02:04:15.916469Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 02:04:15.918481 waagent[1581]: 2024-12-13T02:04:15.918423Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 02:04:15.922938 waagent[1581]: 2024-12-13T02:04:15.922883Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 02:04:15.923284 waagent[1581]: 2024-12-13T02:04:15.923228Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 02:04:15.930959 waagent[1581]: 2024-12-13T02:04:15.930904Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 02:04:15.931421 waagent[1581]: 2024-12-13T02:04:15.931362Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 02:04:15.937138 waagent[1581]: 2024-12-13T02:04:15.937046Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Dec 13 02:04:15.938153 waagent[1581]: 2024-12-13T02:04:15.938087Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 02:04:15.939569 waagent[1581]: 2024-12-13T02:04:15.939509Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 02:04:15.940016 waagent[1581]: 2024-12-13T02:04:15.939961Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 02:04:15.940172 waagent[1581]: 2024-12-13T02:04:15.940124Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 02:04:15.940738 waagent[1581]: 2024-12-13T02:04:15.940682Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 02:04:15.941025 waagent[1581]: 2024-12-13T02:04:15.940957Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 02:04:15.941025 waagent[1581]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 02:04:15.941025 waagent[1581]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 02:04:15.941025 waagent[1581]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 02:04:15.941025 waagent[1581]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 02:04:15.941025 waagent[1581]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 02:04:15.941025 waagent[1581]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 02:04:15.943132 waagent[1581]: 2024-12-13T02:04:15.943040Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 02:04:15.944523 waagent[1581]: 2024-12-13T02:04:15.944454Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 02:04:15.944875 waagent[1581]: 2024-12-13T02:04:15.944820Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 02:04:15.945159 waagent[1581]: 2024-12-13T02:04:15.945105Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 02:04:15.948333 waagent[1581]: 2024-12-13T02:04:15.948209Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 02:04:15.948564 waagent[1581]: 2024-12-13T02:04:15.948480Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 02:04:15.948675 waagent[1581]: 2024-12-13T02:04:15.948620Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 02:04:15.949529 waagent[1581]: 2024-12-13T02:04:15.949454Z INFO EnvHandler ExtHandler Configure routes Dec 13 02:04:15.949829 waagent[1581]: 2024-12-13T02:04:15.949753Z INFO EnvHandler ExtHandler Gateway:None Dec 13 02:04:15.952565 waagent[1581]: 2024-12-13T02:04:15.952334Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 02:04:15.954052 waagent[1581]: 2024-12-13T02:04:15.953995Z INFO EnvHandler ExtHandler Routes:None Dec 13 02:04:15.955153 waagent[1581]: 2024-12-13T02:04:15.955098Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 02:04:15.955153 waagent[1581]: Executing ['ip', '-a', '-o', 'link']: Dec 13 02:04:15.955153 waagent[1581]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 02:04:15.955153 waagent[1581]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:ea:32 brd ff:ff:ff:ff:ff:ff Dec 13 02:04:15.955153 waagent[1581]: 3: enP15680s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:ea:32 brd ff:ff:ff:ff:ff:ff\ altname enP15680p0s2 Dec 13 02:04:15.955153 waagent[1581]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 02:04:15.955153 waagent[1581]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 02:04:15.955153 waagent[1581]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 02:04:15.955153 waagent[1581]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 02:04:15.955153 waagent[1581]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 02:04:15.955153 waagent[1581]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:ea32/64 scope link \ valid_lft forever preferred_lft forever Dec 13 02:04:15.974197 waagent[1581]: 2024-12-13T02:04:15.974111Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 02:04:15.997745 waagent[1581]: 2024-12-13T02:04:15.996661Z INFO ExtHandler ExtHandler Dec 13 02:04:16.004765 waagent[1581]: 2024-12-13T02:04:16.004481Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 08a7d105-8d10-44b1-b2e2-1187560f760a correlation d3503169-f52b-4963-b668-5fe562ff56d9 created: 2024-12-13T02:02:14.074585Z] Dec 13 02:04:16.013893 waagent[1581]: 2024-12-13T02:04:16.013829Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 02:04:16.015801 waagent[1581]: 2024-12-13T02:04:16.015743Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 19 ms] Dec 13 02:04:16.036488 waagent[1581]: 2024-12-13T02:04:16.036338Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 02:04:16.054237 waagent[1581]: 2024-12-13T02:04:16.054185Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5726B0BD-6659-45ED-828C-AE9CE9F780C3;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 02:04:16.187583 waagent[1581]: 2024-12-13T02:04:16.187469Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 02:04:16.187583 waagent[1581]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 02:04:16.187583 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 02:04:16.187583 waagent[1581]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 02:04:16.187583 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 02:04:16.187583 waagent[1581]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 02:04:16.187583 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 02:04:16.187583 waagent[1581]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 02:04:16.187583 waagent[1581]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 02:04:16.187583 waagent[1581]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 02:04:16.194693 waagent[1581]: 2024-12-13T02:04:16.194594Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 02:04:16.194693 waagent[1581]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 02:04:16.194693 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 02:04:16.194693 waagent[1581]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 02:04:16.194693 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 02:04:16.194693 waagent[1581]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 02:04:16.194693 waagent[1581]: pkts bytes target prot opt in out source destination Dec 13 02:04:16.194693 waagent[1581]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 02:04:16.194693 waagent[1581]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 02:04:16.194693 waagent[1581]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 02:04:16.195232 waagent[1581]: 2024-12-13T02:04:16.195179Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 02:04:21.655541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:04:21.655861 systemd[1]: Stopped kubelet.service. Dec 13 02:04:21.657840 systemd[1]: Starting kubelet.service... Dec 13 02:04:21.982614 systemd[1]: Started kubelet.service. Dec 13 02:04:22.325624 kubelet[1636]: E1213 02:04:22.325513 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:22.327303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:22.327474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:32.405090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:04:32.405445 systemd[1]: Stopped kubelet.service. Dec 13 02:04:32.407515 systemd[1]: Starting kubelet.service... Dec 13 02:04:32.579687 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 13 02:04:32.735976 systemd[1]: Started kubelet.service. Dec 13 02:04:32.773584 kubelet[1645]: E1213 02:04:32.773555 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:32.775029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:32.775185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:42.905110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:04:42.905466 systemd[1]: Stopped kubelet.service. Dec 13 02:04:42.907514 systemd[1]: Starting kubelet.service... Dec 13 02:04:43.092942 systemd[1]: Started kubelet.service. Dec 13 02:04:43.572316 kubelet[1654]: E1213 02:04:43.572263 1654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:43.573913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:43.574071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:04:45.439049 update_engine[1375]: I1213 02:04:45.438961 1375 update_attempter.cc:509] Updating boot flags... Dec 13 02:04:51.787870 systemd[1]: Created slice system-sshd.slice. Dec 13 02:04:51.789862 systemd[1]: Started sshd@0-10.200.8.15:22-10.200.16.10:40292.service. Dec 13 02:04:52.647179 sshd[1726]: Accepted publickey for core from 10.200.16.10 port 40292 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:04:52.648900 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:52.654187 systemd[1]: Started session-3.scope. Dec 13 02:04:52.654659 systemd-logind[1373]: New session 3 of user core. Dec 13 02:04:53.190521 systemd[1]: Started sshd@1-10.200.8.15:22-10.200.16.10:40294.service. Dec 13 02:04:53.655226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 02:04:53.655543 systemd[1]: Stopped kubelet.service. Dec 13 02:04:53.657242 systemd[1]: Starting kubelet.service... Dec 13 02:04:53.816487 sshd[1731]: Accepted publickey for core from 10.200.16.10 port 40294 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:04:53.818115 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:53.822978 systemd[1]: Started session-4.scope. Dec 13 02:04:53.823489 systemd-logind[1373]: New session 4 of user core. Dec 13 02:04:53.880736 systemd[1]: Started kubelet.service. Dec 13 02:04:53.917822 kubelet[1738]: E1213 02:04:53.917698 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:04:53.919442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:04:53.919604 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
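Annotation: the three OUTPUT rules in the "Created firewall rules for the Azure Fabric" dump earlier allow DNS (tcp/53) to the wireserver, allow root-owned traffic to it (owner UID match 0), and drop any other new or invalid connection toward 168.63.129.16, so only the agent and root-owned tooling can reach the fabric endpoint. A rough reconstruction as iptables invocations (the default filter table is assumed here; waagent's exact commands may differ):

    import subprocess

    WIRESERVER = "168.63.129.16"
    # Approximation of the rules in the EnvHandler dump; table assumed, not confirmed.
    rules = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w"] + rule, check=True)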
Dec 13 02:04:54.261801 sshd[1731]: pam_unix(sshd:session): session closed for user core Dec 13 02:04:54.265412 systemd[1]: sshd@1-10.200.8.15:22-10.200.16.10:40294.service: Deactivated successfully. Dec 13 02:04:54.266491 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:04:54.267247 systemd-logind[1373]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:04:54.268198 systemd-logind[1373]: Removed session 4. Dec 13 02:04:54.366483 systemd[1]: Started sshd@2-10.200.8.15:22-10.200.16.10:40302.service. Dec 13 02:04:54.991214 sshd[1746]: Accepted publickey for core from 10.200.16.10 port 40302 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:04:54.992886 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:54.998428 systemd-logind[1373]: New session 5 of user core. Dec 13 02:04:54.998693 systemd[1]: Started session-5.scope. Dec 13 02:04:55.431998 sshd[1746]: pam_unix(sshd:session): session closed for user core Dec 13 02:04:55.435211 systemd[1]: sshd@2-10.200.8.15:22-10.200.16.10:40302.service: Deactivated successfully. Dec 13 02:04:55.436213 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:04:55.436971 systemd-logind[1373]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:04:55.437885 systemd-logind[1373]: Removed session 5. Dec 13 02:04:55.535718 systemd[1]: Started sshd@3-10.200.8.15:22-10.200.16.10:40308.service. Dec 13 02:04:56.323454 sshd[1752]: Accepted publickey for core from 10.200.16.10 port 40308 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:04:56.325084 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:56.330449 systemd-logind[1373]: New session 6 of user core. Dec 13 02:04:56.330887 systemd[1]: Started session-6.scope. Dec 13 02:04:56.769122 sshd[1752]: pam_unix(sshd:session): session closed for user core Dec 13 02:04:56.772335 systemd[1]: sshd@3-10.200.8.15:22-10.200.16.10:40308.service: Deactivated successfully. Dec 13 02:04:56.773330 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:04:56.774091 systemd-logind[1373]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:04:56.775010 systemd-logind[1373]: Removed session 6. Dec 13 02:04:56.873622 systemd[1]: Started sshd@4-10.200.8.15:22-10.200.16.10:40314.service. Dec 13 02:04:57.498422 sshd[1758]: Accepted publickey for core from 10.200.16.10 port 40314 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:04:57.500092 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:57.505456 systemd[1]: Started session-7.scope. Dec 13 02:04:57.505906 systemd-logind[1373]: New session 7 of user core. Dec 13 02:04:58.107519 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:04:58.107893 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:04:58.146562 systemd[1]: Starting docker.service... 
Dec 13 02:04:58.199672 env[1771]: time="2024-12-13T02:04:58.199626075Z" level=info msg="Starting up" Dec 13 02:04:58.203591 env[1771]: time="2024-12-13T02:04:58.202411725Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:04:58.203591 env[1771]: time="2024-12-13T02:04:58.202438420Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:04:58.203591 env[1771]: time="2024-12-13T02:04:58.202466616Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:04:58.203591 env[1771]: time="2024-12-13T02:04:58.202481213Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:04:58.204749 env[1771]: time="2024-12-13T02:04:58.204719851Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:04:58.204749 env[1771]: time="2024-12-13T02:04:58.204742947Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:04:58.204902 env[1771]: time="2024-12-13T02:04:58.204759644Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:04:58.204902 env[1771]: time="2024-12-13T02:04:58.204770143Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:04:58.211258 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1178720459-merged.mount: Deactivated successfully. Dec 13 02:04:58.304675 env[1771]: time="2024-12-13T02:04:58.304629676Z" level=info msg="Loading containers: start." Dec 13 02:04:58.493382 kernel: Initializing XFRM netlink socket Dec 13 02:04:58.537061 env[1771]: time="2024-12-13T02:04:58.537024354Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:04:58.644800 systemd-networkd[1532]: docker0: Link UP Dec 13 02:04:58.693496 env[1771]: time="2024-12-13T02:04:58.693460629Z" level=info msg="Loading containers: done." Dec 13 02:04:58.704673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3963695591-merged.mount: Deactivated successfully. Dec 13 02:04:58.712821 env[1771]: time="2024-12-13T02:04:58.712789100Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:04:58.712990 env[1771]: time="2024-12-13T02:04:58.712967171Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:04:58.713093 env[1771]: time="2024-12-13T02:04:58.713070654Z" level=info msg="Daemon has completed initialization" Dec 13 02:04:58.749450 systemd[1]: Started docker.service. Dec 13 02:04:58.757598 env[1771]: time="2024-12-13T02:04:58.757552553Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:05:00.137228 env[1405]: time="2024-12-13T02:05:00.137183300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 02:05:00.880331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183559006.mount: Deactivated successfully. 
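Annotation: docker's note above that "Daemon option --bip can be used to set a preferred IP address" refers to the docker0 bridge address. The persistent equivalent is the bip key in /etc/docker/daemon.json; a minimal sketch (172.18.0.1/16 is an arbitrary example value, not this host's setting):

    import json, os

    # Example subnet only; choose one that does not collide with existing networks.
    os.makedirs("/etc/docker", exist_ok=True)
    with open("/etc/docker/daemon.json", "w") as f:
        json.dump({"bip": "172.18.0.1/16"}, f, indent=2)
    # Restart the daemon afterwards (e.g. systemctl restart docker) to apply.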
Dec 13 02:05:03.071580 env[1405]: time="2024-12-13T02:05:03.071461364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:03.080583 env[1405]: time="2024-12-13T02:05:03.080488191Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:03.085807 env[1405]: time="2024-12-13T02:05:03.085776545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:03.090650 env[1405]: time="2024-12-13T02:05:03.090570269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:03.091434 env[1405]: time="2024-12-13T02:05:03.091404051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 02:05:03.093248 env[1405]: time="2024-12-13T02:05:03.093185700Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 02:05:04.155038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 02:05:04.155370 systemd[1]: Stopped kubelet.service. Dec 13 02:05:04.157143 systemd[1]: Starting kubelet.service... Dec 13 02:05:04.283703 systemd[1]: Started kubelet.service. Dec 13 02:05:04.964570 kubelet[1891]: E1213 02:05:04.964521 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:05:04.966112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:05:04.966275 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 02:05:05.741601 env[1405]: time="2024-12-13T02:05:05.741548240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:05.749844 env[1405]: time="2024-12-13T02:05:05.749801637Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:05.754600 env[1405]: time="2024-12-13T02:05:05.754566000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:05.759706 env[1405]: time="2024-12-13T02:05:05.759671918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:05.760328 env[1405]: time="2024-12-13T02:05:05.760294035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 02:05:05.761057 env[1405]: time="2024-12-13T02:05:05.761024237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 02:05:07.435990 env[1405]: time="2024-12-13T02:05:07.435930575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:07.444102 env[1405]: time="2024-12-13T02:05:07.444061745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:07.448553 env[1405]: time="2024-12-13T02:05:07.448523680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:07.451580 env[1405]: time="2024-12-13T02:05:07.451543998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:07.452271 env[1405]: time="2024-12-13T02:05:07.452234610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 02:05:07.452883 env[1405]: time="2024-12-13T02:05:07.452852432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 02:05:08.560182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041986925.mount: Deactivated successfully.
Dec 13 02:05:09.189759 env[1405]: time="2024-12-13T02:05:09.189705124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:09.195555 env[1405]: time="2024-12-13T02:05:09.195515026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:09.199743 env[1405]: time="2024-12-13T02:05:09.199706823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:09.203442 env[1405]: time="2024-12-13T02:05:09.203405679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:09.203836 env[1405]: time="2024-12-13T02:05:09.203803631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 02:05:09.204325 env[1405]: time="2024-12-13T02:05:09.204293672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 02:05:09.802611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687709072.mount: Deactivated successfully.
Dec 13 02:05:11.153335 env[1405]: time="2024-12-13T02:05:11.153275871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.163540 env[1405]: time="2024-12-13T02:05:11.163499806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.172280 env[1405]: time="2024-12-13T02:05:11.172246010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.180512 env[1405]: time="2024-12-13T02:05:11.180478072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.181275 env[1405]: time="2024-12-13T02:05:11.181243185Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 02:05:11.182278 env[1405]: time="2024-12-13T02:05:11.182248370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 02:05:11.697365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903099443.mount: Deactivated successfully.
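Each PullImage above fans out into ImageCreate/ImageUpdate events for the tag, the config blob, and the manifest digest before the RPC returns the image reference. A hedged sketch of an equivalent pull through the containerd Go client (the socket path and the k8s.io namespace are the usual CRI defaults, assumed here rather than read from this log):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // Comparable to the "returns image reference" lines in the log.
        fmt.Println("pulled:", img.Name(), img.Target().Digest)
    }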
Dec 13 02:05:11.729725 env[1405]: time="2024-12-13T02:05:11.729675894Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.737545 env[1405]: time="2024-12-13T02:05:11.737502002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.742725 env[1405]: time="2024-12-13T02:05:11.742692811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.746780 env[1405]: time="2024-12-13T02:05:11.746744249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:11.747312 env[1405]: time="2024-12-13T02:05:11.747278488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 02:05:11.747872 env[1405]: time="2024-12-13T02:05:11.747847024Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 02:05:12.294010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617327276.mount: Deactivated successfully.
Dec 13 02:05:15.155098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 02:05:15.155379 systemd[1]: Stopped kubelet.service.
Dec 13 02:05:15.157280 systemd[1]: Starting kubelet.service...
Dec 13 02:05:15.275746 systemd[1]: Started kubelet.service.
Dec 13 02:05:15.811830 kubelet[1901]: E1213 02:05:15.811773 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:05:15.813503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:05:15.813666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:05:15.951938 env[1405]: time="2024-12-13T02:05:15.951537310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:16.039257 env[1405]: time="2024-12-13T02:05:16.039205799Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:16.047073 env[1405]: time="2024-12-13T02:05:16.047033315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:16.105105 env[1405]: time="2024-12-13T02:05:16.104582951Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:16.105685 env[1405]: time="2024-12-13T02:05:16.105653043Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 02:05:19.256637 systemd[1]: Stopped kubelet.service.
Dec 13 02:05:19.259571 systemd[1]: Starting kubelet.service...
Dec 13 02:05:19.294739 systemd[1]: Reloading.
Dec 13 02:05:19.380861 /usr/lib/systemd/system-generators/torcx-generator[1949]: time="2024-12-13T02:05:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:05:19.380904 /usr/lib/systemd/system-generators/torcx-generator[1949]: time="2024-12-13T02:05:19Z" level=info msg="torcx already run"
Dec 13 02:05:19.500494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:05:19.500514 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:05:19.517015 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:05:19.612556 systemd[1]: Started kubelet.service.
Dec 13 02:05:19.615171 systemd[1]: Stopping kubelet.service...
Dec 13 02:05:19.615970 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:05:19.616157 systemd[1]: Stopped kubelet.service.
Dec 13 02:05:19.617770 systemd[1]: Starting kubelet.service...
Dec 13 02:05:19.896447 systemd[1]: Started kubelet.service.
Dec 13 02:05:20.543789 kubelet[2019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:05:20.543789 kubelet[2019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:05:20.543789 kubelet[2019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:05:20.552475 kubelet[2019]: I1213 02:05:20.552423 2019 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:05:21.361290 kubelet[2019]: I1213 02:05:21.361250 2019 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 02:05:21.361562 kubelet[2019]: I1213 02:05:21.361540 2019 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:05:21.361836 kubelet[2019]: I1213 02:05:21.361815 2019 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 02:05:21.401697 kubelet[2019]: I1213 02:05:21.401663 2019 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:05:21.402050 kubelet[2019]: E1213 02:05:21.402004 2019 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:21.408182 kubelet[2019]: E1213 02:05:21.408150 2019 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 02:05:21.408182 kubelet[2019]: I1213 02:05:21.408178 2019 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 02:05:21.412669 kubelet[2019]: I1213 02:05:21.412640 2019 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:05:21.414126 kubelet[2019]: I1213 02:05:21.414102 2019 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 02:05:21.414323 kubelet[2019]: I1213 02:05:21.414289 2019 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:05:21.414525 kubelet[2019]: I1213 02:05:21.414321 2019 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-eca73107d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 02:05:21.414672 kubelet[2019]: I1213 02:05:21.414541 2019 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:05:21.414672 kubelet[2019]: I1213 02:05:21.414555 2019 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 02:05:21.414763 kubelet[2019]: I1213 02:05:21.414673 2019 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:05:21.420181 kubelet[2019]: I1213 02:05:21.420155 2019 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 02:05:21.420266 kubelet[2019]: I1213 02:05:21.420187 2019 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:05:21.420266 kubelet[2019]: I1213 02:05:21.420227 2019 kubelet.go:314] "Adding apiserver pod source"
Dec 13 02:05:21.420266 kubelet[2019]: I1213 02:05:21.420245 2019 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:05:21.427423 kubelet[2019]: W1213 02:05:21.427373 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-eca73107d2&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:21.427573 kubelet[2019]: E1213 02:05:21.427553 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-eca73107d2&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:21.431403 kubelet[2019]: W1213 02:05:21.431327 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:21.431537 kubelet[2019]: E1213 02:05:21.431419 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:21.431965 kubelet[2019]: I1213 02:05:21.431942 2019 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 02:05:21.439453 kubelet[2019]: I1213 02:05:21.439430 2019 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:05:21.440261 kubelet[2019]: W1213 02:05:21.440235 2019 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:05:21.441115 kubelet[2019]: I1213 02:05:21.441091 2019 server.go:1269] "Started kubelet"
Dec 13 02:05:21.449988 kubelet[2019]: E1213 02:05:21.448655 2019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-eca73107d2.18109a4f304cc7ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-eca73107d2,UID:ci-3510.3.6-a-eca73107d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-eca73107d2,},FirstTimestamp:2024-12-13 02:05:21.441073098 +0000 UTC m=+1.539509631,LastTimestamp:2024-12-13 02:05:21.441073098 +0000 UTC m=+1.539509631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-eca73107d2,}"
Dec 13 02:05:21.451627 kubelet[2019]: E1213 02:05:21.451611 2019 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:05:21.451839 kubelet[2019]: I1213 02:05:21.451804 2019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:05:21.452205 kubelet[2019]: I1213 02:05:21.452193 2019 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:05:21.452354 kubelet[2019]: I1213 02:05:21.452327 2019 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:05:21.453395 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 02:05:21.453552 kubelet[2019]: I1213 02:05:21.453534 2019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:05:21.453678 kubelet[2019]: I1213 02:05:21.453666 2019 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 02:05:21.455324 kubelet[2019]: I1213 02:05:21.455305 2019 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 02:05:21.458794 kubelet[2019]: I1213 02:05:21.458777 2019 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 02:05:21.459028 kubelet[2019]: I1213 02:05:21.459012 2019 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 02:05:21.459169 kubelet[2019]: I1213 02:05:21.459158 2019 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 02:05:21.459977 kubelet[2019]: I1213 02:05:21.459959 2019 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:05:21.460137 kubelet[2019]: E1213 02:05:21.460115 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:21.460221 kubelet[2019]: I1213 02:05:21.460128 2019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:05:21.460808 kubelet[2019]: W1213 02:05:21.460766 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:21.460956 kubelet[2019]: E1213 02:05:21.460935 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:21.462290 kubelet[2019]: I1213 02:05:21.462271 2019 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:05:21.468290 kubelet[2019]: E1213 02:05:21.468243 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-eca73107d2?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="200ms"
Dec 13 02:05:21.505077 kubelet[2019]: I1213 02:05:21.505041 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:05:21.507233 kubelet[2019]: I1213 02:05:21.507207 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:05:21.507439 kubelet[2019]: I1213 02:05:21.507415 2019 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:05:21.508718 kubelet[2019]: I1213 02:05:21.507564 2019 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 02:05:21.509969 kubelet[2019]: E1213 02:05:21.508974 2019 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 02:05:21.510416 kubelet[2019]: W1213 02:05:21.510390 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:21.510512 kubelet[2019]: E1213 02:05:21.510433 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:21.511663 kubelet[2019]: I1213 02:05:21.511644 2019 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:05:21.511866 kubelet[2019]: I1213 02:05:21.511854 2019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:05:21.511980 kubelet[2019]: I1213 02:05:21.511970 2019 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:05:21.517909 kubelet[2019]: I1213 02:05:21.517896 2019 policy_none.go:49] "None policy: Start"
Dec 13 02:05:21.518579 kubelet[2019]: I1213 02:05:21.518555 2019 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:05:21.518667 kubelet[2019]: I1213 02:05:21.518593 2019 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:05:21.526663 systemd[1]: Created slice kubepods.slice.
Dec 13 02:05:21.530844 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 02:05:21.533826 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 02:05:21.539231 kubelet[2019]: I1213 02:05:21.539205 2019 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:05:21.539380 kubelet[2019]: I1213 02:05:21.539345 2019 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 02:05:21.539446 kubelet[2019]: I1213 02:05:21.539382 2019 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 02:05:21.539958 kubelet[2019]: I1213 02:05:21.539865 2019 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:05:21.542434 kubelet[2019]: E1213 02:05:21.542288 2019 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:21.619080 systemd[1]: Created slice kubepods-burstable-podf1fb68618dd614a3883ce695044e9af4.slice.
Dec 13 02:05:21.630442 systemd[1]: Created slice kubepods-burstable-pod9dc95b62ac582db14111c5b7ca0a4164.slice.
Dec 13 02:05:21.639958 systemd[1]: Created slice kubepods-burstable-pod1f6ad63bbef44179c5041205ac9c1a2b.slice.
Dec 13 02:05:21.641742 kubelet[2019]: I1213 02:05:21.641571 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.642075 kubelet[2019]: E1213 02:05:21.641991 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.660654 kubelet[2019]: I1213 02:05:21.660623 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.660830 kubelet[2019]: I1213 02:05:21.660804 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.660928 kubelet[2019]: I1213 02:05:21.660840 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.660928 kubelet[2019]: I1213 02:05:21.660869 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f6ad63bbef44179c5041205ac9c1a2b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-eca73107d2\" (UID: \"1f6ad63bbef44179c5041205ac9c1a2b\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.660928 kubelet[2019]: I1213 02:05:21.660893 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1fb68618dd614a3883ce695044e9af4-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-eca73107d2\" (UID: \"f1fb68618dd614a3883ce695044e9af4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.661109 kubelet[2019]: I1213 02:05:21.660924 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1fb68618dd614a3883ce695044e9af4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-eca73107d2\" (UID: \"f1fb68618dd614a3883ce695044e9af4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.661109 kubelet[2019]: I1213 02:05:21.660952 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.661109 kubelet[2019]: I1213 02:05:21.660980 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1fb68618dd614a3883ce695044e9af4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-eca73107d2\" (UID: \"f1fb68618dd614a3883ce695044e9af4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.661109 kubelet[2019]: I1213 02:05:21.661005 2019 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.669622 kubelet[2019]: E1213 02:05:21.669539 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-eca73107d2?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="400ms"
Dec 13 02:05:21.844486 kubelet[2019]: I1213 02:05:21.844445 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.845085 kubelet[2019]: E1213 02:05:21.845042 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:21.930497 env[1405]: time="2024-12-13T02:05:21.930451228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-eca73107d2,Uid:f1fb68618dd614a3883ce695044e9af4,Namespace:kube-system,Attempt:0,}"
Dec 13 02:05:21.938308 env[1405]: time="2024-12-13T02:05:21.938273536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-eca73107d2,Uid:9dc95b62ac582db14111c5b7ca0a4164,Namespace:kube-system,Attempt:0,}"
Dec 13 02:05:21.943197 env[1405]: time="2024-12-13T02:05:21.943160604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-eca73107d2,Uid:1f6ad63bbef44179c5041205ac9c1a2b,Namespace:kube-system,Attempt:0,}"
Dec 13 02:05:22.070183 kubelet[2019]: E1213 02:05:22.070128 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-eca73107d2?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="800ms"
Dec 13 02:05:23.143059 kubelet[2019]: I1213 02:05:22.247548 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:23.143059 kubelet[2019]: E1213 02:05:22.247935 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:23.143059 kubelet[2019]: W1213 02:05:22.505657 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:23.143059 kubelet[2019]: E1213 02:05:22.505731 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:23.143059 kubelet[2019]: W1213 02:05:22.564683 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-eca73107d2&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:23.143059 kubelet[2019]: E1213 02:05:22.564752 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-eca73107d2&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:23.143059 kubelet[2019]: W1213 02:05:22.622679 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:23.143923 kubelet[2019]: E1213 02:05:22.622750 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:23.143923 kubelet[2019]: E1213 02:05:22.871060 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-eca73107d2?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="1.6s"
Dec 13 02:05:23.143923 kubelet[2019]: W1213 02:05:22.997313 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:23.143923 kubelet[2019]: E1213 02:05:22.997411 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:23.143923 kubelet[2019]: I1213 02:05:23.050459 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:23.143923 kubelet[2019]: E1213 02:05:23.050795 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:23.574613 kubelet[2019]: E1213 02:05:23.574567 2019 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:24.277725 kubelet[2019]: W1213 02:05:24.277680 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:24.278134 kubelet[2019]: E1213 02:05:24.277737 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:24.471729 kubelet[2019]: E1213 02:05:24.471673 2019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-eca73107d2?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="3.2s"
Dec 13 02:05:24.551477 kubelet[2019]: E1213 02:05:24.551263 2019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-eca73107d2.18109a4f304cc7ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-eca73107d2,UID:ci-3510.3.6-a-eca73107d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-eca73107d2,},FirstTimestamp:2024-12-13 02:05:21.441073098 +0000 UTC m=+1.539509631,LastTimestamp:2024-12-13 02:05:21.441073098 +0000 UTC m=+1.539509631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-eca73107d2,}"
Dec 13 02:05:24.556632 kubelet[2019]: W1213 02:05:24.556601 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-eca73107d2&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:24.556743 kubelet[2019]: E1213 02:05:24.556647 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-eca73107d2&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:24.653012 kubelet[2019]: I1213 02:05:24.652969 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:24.653369 kubelet[2019]: E1213 02:05:24.653328 2019 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:25.296787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496207008.mount: Deactivated successfully.
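The lease controller's retry interval above doubles on each failure: 200ms, 400ms, 800ms, 1.6s, 3.2s. That is plain exponential backoff; a worked sketch of the doubling (the cap below is an assumed value for illustration, not taken from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Intervals observed in the log: 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s.
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap for the sketch

        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: retrying lease in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }

The backoff stops mattering once the API server answers; the log below shows exactly that transition once the static pods start.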
Dec 13 02:05:25.326631 env[1405]: time="2024-12-13T02:05:25.326584139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.329800 env[1405]: time="2024-12-13T02:05:25.329765884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.340256 env[1405]: time="2024-12-13T02:05:25.340224044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.344892 env[1405]: time="2024-12-13T02:05:25.344854473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.349764 env[1405]: time="2024-12-13T02:05:25.349727282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.354563 env[1405]: time="2024-12-13T02:05:25.354531096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.358376 env[1405]: time="2024-12-13T02:05:25.358328691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.362441 env[1405]: time="2024-12-13T02:05:25.362409963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.368743 env[1405]: time="2024-12-13T02:05:25.368709658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.371621 env[1405]: time="2024-12-13T02:05:25.371592026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.386835 env[1405]: time="2024-12-13T02:05:25.386793806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.401290 env[1405]: time="2024-12-13T02:05:25.401244446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:05:25.454729 kubelet[2019]: W1213 02:05:25.454636 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:25.454729 kubelet[2019]: E1213 02:05:25.454688 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:25.472162 env[1405]: time="2024-12-13T02:05:25.472093859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:05:25.472372 env[1405]: time="2024-12-13T02:05:25.472135056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:05:25.472372 env[1405]: time="2024-12-13T02:05:25.472149054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:25.472372 env[1405]: time="2024-12-13T02:05:25.472272145Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4d0a050256f492f3e11ec22f28942e086932648312dd81e460d14b046061a0b pid=2057 runtime=io.containerd.runc.v2
Dec 13 02:05:25.478859 env[1405]: time="2024-12-13T02:05:25.478707928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:05:25.478859 env[1405]: time="2024-12-13T02:05:25.478829918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:05:25.479020 env[1405]: time="2024-12-13T02:05:25.478879314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:25.479071 env[1405]: time="2024-12-13T02:05:25.479019603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4d9c49d8dc7c64967d99314b01e2915628f2c0377335cf35f3a1c01068af612 pid=2072 runtime=io.containerd.runc.v2
Dec 13 02:05:25.493975 kubelet[2019]: W1213 02:05:25.493882 2019 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused
Dec 13 02:05:25.493975 kubelet[2019]: E1213 02:05:25.493938 2019 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError"
Dec 13 02:05:25.496573 env[1405]: time="2024-12-13T02:05:25.496509299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:05:25.496744 env[1405]: time="2024-12-13T02:05:25.496716982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:05:25.496850 env[1405]: time="2024-12-13T02:05:25.496827573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:05:25.497854 systemd[1]: Started cri-containerd-b4d9c49d8dc7c64967d99314b01e2915628f2c0377335cf35f3a1c01068af612.scope.
Dec 13 02:05:25.498619 env[1405]: time="2024-12-13T02:05:25.498582732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee97a61c5d00a3b4260b90fc4ff7564009a20e6ff046df71ffe4b7911e365ca2 pid=2105 runtime=io.containerd.runc.v2
Dec 13 02:05:25.516177 systemd[1]: Started cri-containerd-e4d0a050256f492f3e11ec22f28942e086932648312dd81e460d14b046061a0b.scope.
Dec 13 02:05:25.538182 systemd[1]: Started cri-containerd-ee97a61c5d00a3b4260b90fc4ff7564009a20e6ff046df71ffe4b7911e365ca2.scope.
Dec 13 02:05:25.601508 env[1405]: time="2024-12-13T02:05:25.600248072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-eca73107d2,Uid:9dc95b62ac582db14111c5b7ca0a4164,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4d0a050256f492f3e11ec22f28942e086932648312dd81e460d14b046061a0b\""
Dec 13 02:05:25.605124 env[1405]: time="2024-12-13T02:05:25.605090683Z" level=info msg="CreateContainer within sandbox \"e4d0a050256f492f3e11ec22f28942e086932648312dd81e460d14b046061a0b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 02:05:25.617737 env[1405]: time="2024-12-13T02:05:25.617697471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-eca73107d2,Uid:f1fb68618dd614a3883ce695044e9af4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4d9c49d8dc7c64967d99314b01e2915628f2c0377335cf35f3a1c01068af612\""
Dec 13 02:05:25.620999 env[1405]: time="2024-12-13T02:05:25.620968508Z" level=info msg="CreateContainer within sandbox \"b4d9c49d8dc7c64967d99314b01e2915628f2c0377335cf35f3a1c01068af612\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 02:05:25.624209 env[1405]: time="2024-12-13T02:05:25.624165052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-eca73107d2,Uid:1f6ad63bbef44179c5041205ac9c1a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee97a61c5d00a3b4260b90fc4ff7564009a20e6ff046df71ffe4b7911e365ca2\""
Dec 13 02:05:25.627169 env[1405]: time="2024-12-13T02:05:25.627137813Z" level=info msg="CreateContainer within sandbox \"ee97a61c5d00a3b4260b90fc4ff7564009a20e6ff046df71ffe4b7911e365ca2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 02:05:25.687022 env[1405]: time="2024-12-13T02:05:25.686977310Z" level=info msg="CreateContainer within sandbox \"e4d0a050256f492f3e11ec22f28942e086932648312dd81e460d14b046061a0b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cee1cc95534f254e3aec9c29b6b68ab91609d81d34afc4cea904a7f59c16c395\""
Dec 13 02:05:25.687739 env[1405]: time="2024-12-13T02:05:25.687701551Z" level=info msg="StartContainer for \"cee1cc95534f254e3aec9c29b6b68ab91609d81d34afc4cea904a7f59c16c395\""
Dec 13 02:05:25.704519 systemd[1]: Started cri-containerd-cee1cc95534f254e3aec9c29b6b68ab91609d81d34afc4cea904a7f59c16c395.scope.
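The sandbox and container entries above trace the CRI call order for the static control-plane pods: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the returned container id. A schematic Go sketch against a hypothetical, narrowed interface (the real CRI RuntimeService has more methods and far richer request types):

    package main

    import "fmt"

    // runtimeService is a hypothetical slice of the CRI RuntimeService,
    // narrowed to the three calls visible in the log, with simplified signatures.
    type runtimeService interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startStaticPod mirrors the ordering in the log: sandbox first, then
    // CreateContainer within that sandbox, then StartContainer on the result.
    func startStaticPod(rt runtimeService, pod, name string) error {
        sandbox, err := rt.RunPodSandbox(pod)
        if err != nil {
            return err
        }
        ctr, err := rt.CreateContainer(sandbox, name)
        if err != nil {
            return err
        }
        return rt.StartContainer(ctr)
    }

    // fakeRuntime is a stand-in so the sketch runs without a real CRI socket.
    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "sandbox-" + pod, nil }
    func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
    func (fakeRuntime) StartContainer(id string) error {
        fmt.Println("started", id)
        return nil
    }

    func main() {
        if err := startStaticPod(fakeRuntime{}, "kube-scheduler", "kube-scheduler"); err != nil {
            fmt.Println("error:", err)
        }
    }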
Dec 13 02:05:25.714007 env[1405]: time="2024-12-13T02:05:25.713966743Z" level=info msg="CreateContainer within sandbox \"b4d9c49d8dc7c64967d99314b01e2915628f2c0377335cf35f3a1c01068af612\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"38639a5403aa14a85139227a34069aa309084854af2c315f61f38cdf4032733f\""
Dec 13 02:05:25.715197 env[1405]: time="2024-12-13T02:05:25.715163047Z" level=info msg="StartContainer for \"38639a5403aa14a85139227a34069aa309084854af2c315f61f38cdf4032733f\""
Dec 13 02:05:25.719391 env[1405]: time="2024-12-13T02:05:25.719342711Z" level=info msg="CreateContainer within sandbox \"ee97a61c5d00a3b4260b90fc4ff7564009a20e6ff046df71ffe4b7911e365ca2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ffb36bfcbcbc530413ebcf7991e5e1bd5fd0ca7c22245ebf56b8aac55a7745d0\""
Dec 13 02:05:25.719893 env[1405]: time="2024-12-13T02:05:25.719873869Z" level=info msg="StartContainer for \"ffb36bfcbcbc530413ebcf7991e5e1bd5fd0ca7c22245ebf56b8aac55a7745d0\""
Dec 13 02:05:25.750901 systemd[1]: Started cri-containerd-38639a5403aa14a85139227a34069aa309084854af2c315f61f38cdf4032733f.scope.
Dec 13 02:05:25.760884 systemd[1]: Started cri-containerd-ffb36bfcbcbc530413ebcf7991e5e1bd5fd0ca7c22245ebf56b8aac55a7745d0.scope.
Dec 13 02:05:25.795814 env[1405]: time="2024-12-13T02:05:25.795754578Z" level=info msg="StartContainer for \"cee1cc95534f254e3aec9c29b6b68ab91609d81d34afc4cea904a7f59c16c395\" returns successfully"
Dec 13 02:05:25.839333 env[1405]: time="2024-12-13T02:05:25.839288383Z" level=info msg="StartContainer for \"38639a5403aa14a85139227a34069aa309084854af2c315f61f38cdf4032733f\" returns successfully"
Dec 13 02:05:25.881254 env[1405]: time="2024-12-13T02:05:25.881144523Z" level=info msg="StartContainer for \"ffb36bfcbcbc530413ebcf7991e5e1bd5fd0ca7c22245ebf56b8aac55a7745d0\" returns successfully"
Dec 13 02:05:27.855629 kubelet[2019]: I1213 02:05:27.855599 2019 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:28.327923 kubelet[2019]: E1213 02:05:28.327878 2019 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-eca73107d2\" not found" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:28.399065 kubelet[2019]: I1213 02:05:28.399031 2019 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.6-a-eca73107d2"
Dec 13 02:05:28.399266 kubelet[2019]: E1213 02:05:28.399088 2019 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.6-a-eca73107d2\": node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:28.509478 kubelet[2019]: E1213 02:05:28.509444 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:28.610553 kubelet[2019]: E1213 02:05:28.610323 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:28.710993 kubelet[2019]: E1213 02:05:28.710944 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:28.811861 kubelet[2019]: E1213 02:05:28.811770 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:28.912981 kubelet[2019]: E1213 02:05:28.912937 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.013225 kubelet[2019]: E1213 02:05:29.013166 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.113853 kubelet[2019]: E1213 02:05:29.113800 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.214521 kubelet[2019]: E1213 02:05:29.214414 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.315027 kubelet[2019]: E1213 02:05:29.314986 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.415926 kubelet[2019]: E1213 02:05:29.415879 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.517030 kubelet[2019]: E1213 02:05:29.516919 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.617872 kubelet[2019]: E1213 02:05:29.617837 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.718320 kubelet[2019]: E1213 02:05:29.718277 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.818952 kubelet[2019]: E1213 02:05:29.818831 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:29.919964 kubelet[2019]: E1213 02:05:29.919913 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:30.021059 kubelet[2019]: E1213 02:05:30.021005 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:30.122240 kubelet[2019]: E1213 02:05:30.121688 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:30.222551 kubelet[2019]: E1213 02:05:30.222504 2019 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-eca73107d2\" not found"
Dec 13 02:05:30.432493 kubelet[2019]: I1213 02:05:30.432426 2019 apiserver.go:52] "Watching apiserver"
Dec 13 02:05:30.459366 kubelet[2019]: I1213 02:05:30.459309 2019 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 02:05:30.502465 systemd[1]: Reloading.
Dec 13 02:05:30.572289 /usr/lib/systemd/system-generators/torcx-generator[2310]: time="2024-12-13T02:05:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:05:30.572331 /usr/lib/systemd/system-generators/torcx-generator[2310]: time="2024-12-13T02:05:30Z" level=info msg="torcx already run"
Dec 13 02:05:30.676683 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:05:30.676704 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:05:30.701936 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:05:30.847317 systemd[1]: Stopping kubelet.service... Dec 13 02:05:30.865704 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:05:30.865919 systemd[1]: Stopped kubelet.service. Dec 13 02:05:30.865983 systemd[1]: kubelet.service: Consumed 1.265s CPU time. Dec 13 02:05:30.868338 systemd[1]: Starting kubelet.service... Dec 13 02:05:31.033944 systemd[1]: Started kubelet.service. Dec 13 02:05:31.078574 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:05:31.078910 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:05:31.078954 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:05:31.079079 kubelet[2377]: I1213 02:05:31.079057 2377 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:05:31.084890 kubelet[2377]: I1213 02:05:31.084856 2377 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:05:31.084890 kubelet[2377]: I1213 02:05:31.084879 2377 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:05:31.085142 kubelet[2377]: I1213 02:05:31.085122 2377 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:05:31.086344 kubelet[2377]: I1213 02:05:31.086306 2377 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:05:31.088613 kubelet[2377]: I1213 02:05:31.088506 2377 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:05:31.094977 kubelet[2377]: E1213 02:05:31.094926 2377 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:05:31.095074 kubelet[2377]: I1213 02:05:31.094979 2377 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:05:31.098336 kubelet[2377]: I1213 02:05:31.098323 2377 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:05:31.098527 kubelet[2377]: I1213 02:05:31.098518 2377 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:05:31.098755 kubelet[2377]: I1213 02:05:31.098730 2377 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:05:31.098960 kubelet[2377]: I1213 02:05:31.098816 2377 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-eca73107d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:05:31.099086 kubelet[2377]: I1213 02:05:31.099077 2377 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:05:31.099133 kubelet[2377]: I1213 02:05:31.099128 2377 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:05:31.099207 kubelet[2377]: I1213 02:05:31.099199 2377 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:05:31.099344 kubelet[2377]: I1213 02:05:31.099334 2377 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:05:31.099447 kubelet[2377]: I1213 02:05:31.099438 2377 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:05:31.099534 kubelet[2377]: I1213 02:05:31.099525 2377 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:05:31.099606 kubelet[2377]: I1213 02:05:31.099597 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:05:31.105979 kubelet[2377]: I1213 02:05:31.105940 2377 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:05:31.106637 kubelet[2377]: I1213 02:05:31.106621 2377 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:05:31.107308 kubelet[2377]: I1213 02:05:31.107282 2377 server.go:1269] "Started kubelet" Dec 13 02:05:31.109892 kubelet[2377]: I1213 02:05:31.109878 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:05:31.116505 kubelet[2377]: 
I1213 02:05:31.116464 2377 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:05:31.118479 kubelet[2377]: I1213 02:05:31.118464 2377 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:05:31.120712 kubelet[2377]: I1213 02:05:31.120647 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:05:31.121011 kubelet[2377]: I1213 02:05:31.120993 2377 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:05:31.121293 kubelet[2377]: I1213 02:05:31.121256 2377 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:05:31.121686 kubelet[2377]: I1213 02:05:31.121471 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:05:31.124597 kubelet[2377]: I1213 02:05:31.124575 2377 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:05:31.124737 kubelet[2377]: I1213 02:05:31.124719 2377 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:05:31.126676 kubelet[2377]: I1213 02:05:31.125788 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:05:31.128707 kubelet[2377]: I1213 02:05:31.128691 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:05:31.128838 kubelet[2377]: I1213 02:05:31.128826 2377 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:05:31.128958 kubelet[2377]: I1213 02:05:31.128947 2377 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:05:31.129118 kubelet[2377]: E1213 02:05:31.129099 2377 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:05:31.130054 kubelet[2377]: I1213 02:05:31.130024 2377 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:05:31.130265 kubelet[2377]: I1213 02:05:31.130241 2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:05:31.131927 kubelet[2377]: E1213 02:05:31.131721 2377 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:05:31.133094 kubelet[2377]: I1213 02:05:31.133054 2377 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:05:31.498466 kubelet[2377]: E1213 02:05:31.498428 2377 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:05:31.539484 kubelet[2377]: I1213 02:05:31.539450 2377 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:05:31.539484 kubelet[2377]: I1213 02:05:31.539469 2377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:05:31.539484 kubelet[2377]: I1213 02:05:31.539491 2377 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:05:31.539734 kubelet[2377]: I1213 02:05:31.539667 2377 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:05:31.539734 kubelet[2377]: I1213 02:05:31.539681 2377 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:05:31.539734 kubelet[2377]: I1213 02:05:31.539704 2377 policy_none.go:49] "None policy: Start" Dec 13 02:05:31.540400 kubelet[2377]: I1213 02:05:31.540379 2377 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:05:31.540503 kubelet[2377]: I1213 02:05:31.540427 2377 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:05:31.540603 kubelet[2377]: I1213 02:05:31.540585 2377 state_mem.go:75] "Updated machine memory state" Dec 13 02:05:31.544408 kubelet[2377]: I1213 02:05:31.544385 2377 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:05:31.544570 kubelet[2377]: I1213 02:05:31.544552 2377 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:05:31.544638 kubelet[2377]: I1213 02:05:31.544573 2377 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:05:31.545132 kubelet[2377]: I1213 02:05:31.545075 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:05:31.653182 kubelet[2377]: I1213 02:05:31.653136 2377 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.664014 kubelet[2377]: I1213 02:05:31.663977 2377 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.664145 kubelet[2377]: I1213 02:05:31.664049 2377 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.697682 sudo[2406]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:05:31.697981 sudo[2406]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:05:31.709941 kubelet[2377]: W1213 02:05:31.709921 2377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:05:31.714196 kubelet[2377]: W1213 02:05:31.714177 2377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:05:31.714916 kubelet[2377]: W1213 02:05:31.714900 2377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:05:31.800083 kubelet[2377]: I1213 02:05:31.799988 2377 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1fb68618dd614a3883ce695044e9af4-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-eca73107d2\" (UID: \"f1fb68618dd614a3883ce695044e9af4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800083 kubelet[2377]: I1213 02:05:31.800043 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800083 kubelet[2377]: I1213 02:05:31.800073 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800329 kubelet[2377]: I1213 02:05:31.800150 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800329 kubelet[2377]: I1213 02:05:31.800196 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800329 kubelet[2377]: I1213 02:05:31.800221 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f6ad63bbef44179c5041205ac9c1a2b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-eca73107d2\" (UID: \"1f6ad63bbef44179c5041205ac9c1a2b\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800329 kubelet[2377]: I1213 02:05:31.800243 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1fb68618dd614a3883ce695044e9af4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-eca73107d2\" (UID: \"f1fb68618dd614a3883ce695044e9af4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800329 kubelet[2377]: I1213 02:05:31.800297 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1fb68618dd614a3883ce695044e9af4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-eca73107d2\" (UID: \"f1fb68618dd614a3883ce695044e9af4\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:31.800561 kubelet[2377]: I1213 02:05:31.800321 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/9dc95b62ac582db14111c5b7ca0a4164-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-eca73107d2\" (UID: \"9dc95b62ac582db14111c5b7ca0a4164\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2" Dec 13 02:05:32.100916 kubelet[2377]: I1213 02:05:32.100812 2377 apiserver.go:52] "Watching apiserver" Dec 13 02:05:32.125694 kubelet[2377]: I1213 02:05:32.125648 2377 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:05:32.241919 sudo[2406]: pam_unix(sudo:session): session closed for user root Dec 13 02:05:32.291620 kubelet[2377]: I1213 02:05:32.291541 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-eca73107d2" podStartSLOduration=1.29136031 podStartE2EDuration="1.29136031s" podCreationTimestamp="2024-12-13 02:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:32.291365009 +0000 UTC m=+1.250829195" watchObservedRunningTime="2024-12-13 02:05:32.29136031 +0000 UTC m=+1.250824496" Dec 13 02:05:32.317512 kubelet[2377]: I1213 02:05:32.317454 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-eca73107d2" podStartSLOduration=1.317429629 podStartE2EDuration="1.317429629s" podCreationTimestamp="2024-12-13 02:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:32.304927583 +0000 UTC m=+1.264391769" watchObservedRunningTime="2024-12-13 02:05:32.317429629 +0000 UTC m=+1.276893915" Dec 13 02:05:32.331366 kubelet[2377]: I1213 02:05:32.331309 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-eca73107d2" podStartSLOduration=1.331292483 podStartE2EDuration="1.331292483s" podCreationTimestamp="2024-12-13 02:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:32.318182478 +0000 UTC m=+1.277646664" watchObservedRunningTime="2024-12-13 02:05:32.331292483 +0000 UTC m=+1.290756669" Dec 13 02:05:34.400148 sudo[1761]: pam_unix(sudo:session): session closed for user root Dec 13 02:05:34.500420 sshd[1758]: pam_unix(sshd:session): session closed for user core Dec 13 02:05:34.503343 systemd[1]: sshd@4-10.200.8.15:22-10.200.16.10:40314.service: Deactivated successfully. Dec 13 02:05:34.504280 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:05:34.504483 systemd[1]: session-7.scope: Consumed 4.802s CPU time. Dec 13 02:05:34.504991 systemd-logind[1373]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:05:34.505861 systemd-logind[1373]: Removed session 7. Dec 13 02:05:35.388354 kubelet[2377]: I1213 02:05:35.388311 2377 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:05:35.389295 env[1405]: time="2024-12-13T02:05:35.389219042Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:05:35.389691 kubelet[2377]: I1213 02:05:35.389496 2377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:05:36.123440 kubelet[2377]: W1213 02:05:36.122151 2377 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.6-a-eca73107d2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-eca73107d2' and this object Dec 13 02:05:36.123194 systemd[1]: Created slice kubepods-besteffort-pod1b102334_9d98_462d_8a0d_b93c9c9ce3ae.slice. Dec 13 02:05:36.124553 kubelet[2377]: E1213 02:05:36.124517 2377 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.6-a-eca73107d2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.6-a-eca73107d2' and this object" logger="UnhandledError" Dec 13 02:05:36.124767 kubelet[2377]: W1213 02:05:36.124748 2377 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.6-a-eca73107d2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-eca73107d2' and this object Dec 13 02:05:36.124889 kubelet[2377]: E1213 02:05:36.124869 2377 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.6-a-eca73107d2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.6-a-eca73107d2' and this object" logger="UnhandledError" Dec 13 02:05:36.138193 systemd[1]: Created slice kubepods-burstable-pod77b288c0_dbe9_4d7c_ad0d_3bd3be2e42f0.slice. 
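
The paired W/E reflector entries above are the node authorizer at work: the kubelet may only read a configmap once a pod referencing it is bound to this node, so the kube-root-ca.crt and kube-proxy lists are forbidden until the scheduler establishes that relationship (they succeed moments later). A small sketch to summarize who was denied what, under the same hypothetical saved-log assumption as before:

import re

DENIED = re.compile(
    r'User "(?P<user>[^"]+)" cannot list resource "(?P<resource>[^"]+)"'
    r' in API group "(?P<group>[^"]*)" in the namespace "(?P<ns>[^"]+)"'
)

def denials(log_text: str):
    text = log_text.replace('\\"', '"')  # unescape nested quotes
    seen = set()
    for m in DENIED.finditer(text):
        key = (m["user"], m["resource"], m["ns"])
        if key not in seen:
            seen.add(key)
            yield key

if __name__ == "__main__":
    with open("kubelet.log") as f:
        for user, resource, ns in denials(f.read()):
            print(f"{user}: list {resource} denied in {ns}")
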
Dec 13 02:05:36.227802 kubelet[2377]: I1213 02:05:36.227763 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-net\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228013 kubelet[2377]: I1213 02:05:36.227846 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-kernel\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228013 kubelet[2377]: I1213 02:05:36.227869 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b102334-9d98-462d-8a0d-b93c9c9ce3ae-kube-proxy\") pod \"kube-proxy-jxwm7\" (UID: \"1b102334-9d98-462d-8a0d-b93c9c9ce3ae\") " pod="kube-system/kube-proxy-jxwm7" Dec 13 02:05:36.228013 kubelet[2377]: I1213 02:05:36.227888 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b102334-9d98-462d-8a0d-b93c9c9ce3ae-xtables-lock\") pod \"kube-proxy-jxwm7\" (UID: \"1b102334-9d98-462d-8a0d-b93c9c9ce3ae\") " pod="kube-system/kube-proxy-jxwm7" Dec 13 02:05:36.228013 kubelet[2377]: I1213 02:05:36.227956 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94w6v\" (UniqueName: \"kubernetes.io/projected/1b102334-9d98-462d-8a0d-b93c9c9ce3ae-kube-api-access-94w6v\") pod \"kube-proxy-jxwm7\" (UID: \"1b102334-9d98-462d-8a0d-b93c9c9ce3ae\") " pod="kube-system/kube-proxy-jxwm7" Dec 13 02:05:36.228210 kubelet[2377]: I1213 02:05:36.228021 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-clustermesh-secrets\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228210 kubelet[2377]: I1213 02:05:36.228042 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-run\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228210 kubelet[2377]: I1213 02:05:36.228101 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-etc-cni-netd\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228210 kubelet[2377]: I1213 02:05:36.228126 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-config-path\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228210 kubelet[2377]: I1213 02:05:36.228176 2377 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b102334-9d98-462d-8a0d-b93c9c9ce3ae-lib-modules\") pod \"kube-proxy-jxwm7\" (UID: \"1b102334-9d98-462d-8a0d-b93c9c9ce3ae\") " pod="kube-system/kube-proxy-jxwm7" Dec 13 02:05:36.228210 kubelet[2377]: I1213 02:05:36.228201 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hostproc\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228514 kubelet[2377]: I1213 02:05:36.228250 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hubble-tls\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228514 kubelet[2377]: I1213 02:05:36.228274 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-bpf-maps\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228514 kubelet[2377]: I1213 02:05:36.228296 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-cgroup\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228514 kubelet[2377]: I1213 02:05:36.228374 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cni-path\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228514 kubelet[2377]: I1213 02:05:36.228399 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-lib-modules\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228514 kubelet[2377]: I1213 02:05:36.228462 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-xtables-lock\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.228711 kubelet[2377]: I1213 02:05:36.228492 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2q4\" (UniqueName: \"kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-kube-api-access-ml2q4\") pod \"cilium-vcktq\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " pod="kube-system/cilium-vcktq" Dec 13 02:05:36.330189 kubelet[2377]: I1213 02:05:36.330152 2377 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 02:05:36.441175 systemd[1]: Created slice kubepods-besteffort-podd1dc6477_e637_4b9b_93d8_b079df2242c3.slice. Dec 13 02:05:36.531177 kubelet[2377]: I1213 02:05:36.531141 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1dc6477-e637-4b9b-93d8-b079df2242c3-cilium-config-path\") pod \"cilium-operator-5d85765b45-mg54f\" (UID: \"d1dc6477-e637-4b9b-93d8-b079df2242c3\") " pod="kube-system/cilium-operator-5d85765b45-mg54f" Dec 13 02:05:36.531177 kubelet[2377]: I1213 02:05:36.531180 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qkl\" (UniqueName: \"kubernetes.io/projected/d1dc6477-e637-4b9b-93d8-b079df2242c3-kube-api-access-p2qkl\") pod \"cilium-operator-5d85765b45-mg54f\" (UID: \"d1dc6477-e637-4b9b-93d8-b079df2242c3\") " pod="kube-system/cilium-operator-5d85765b45-mg54f" Dec 13 02:05:37.042374 env[1405]: time="2024-12-13T02:05:37.042319276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vcktq,Uid:77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0,Namespace:kube-system,Attempt:0,}" Dec 13 02:05:37.047010 env[1405]: time="2024-12-13T02:05:37.046967491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mg54f,Uid:d1dc6477-e637-4b9b-93d8-b079df2242c3,Namespace:kube-system,Attempt:0,}" Dec 13 02:05:37.096226 env[1405]: time="2024-12-13T02:05:37.093731629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:37.096226 env[1405]: time="2024-12-13T02:05:37.093775326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:37.096226 env[1405]: time="2024-12-13T02:05:37.093790425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:37.096226 env[1405]: time="2024-12-13T02:05:37.093926117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f pid=2470 runtime=io.containerd.runc.v2 Dec 13 02:05:37.097212 env[1405]: time="2024-12-13T02:05:37.097116022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:37.097491 env[1405]: time="2024-12-13T02:05:37.097422703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:37.097491 env[1405]: time="2024-12-13T02:05:37.097444802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:37.098636 env[1405]: time="2024-12-13T02:05:37.098513236Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5 pid=2486 runtime=io.containerd.runc.v2 Dec 13 02:05:37.120912 systemd[1]: Started cri-containerd-62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5.scope. 
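
The nestedpendingoperations entry above shows how the kubelet paces volume retries: the kube-proxy configmap mount fails while the configmap cache is still syncing, and the retry is pushed out by durationBeforeRetry 500ms. The kubelet grows that delay exponentially on repeated failures; in the sketch below only the 500ms initial delay comes from the log, while the doubling factor and the cap are assumptions:

def backoff_schedule(initial: float = 0.5, factor: float = 2.0, cap: float = 122.0):
    """Yield retry delays in seconds: 0.5, 1, 2, ... up to an assumed cap."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

if __name__ == "__main__":
    schedule = backoff_schedule()
    for attempt in range(1, 9):
        print(f"retry {attempt}: wait {next(schedule):.1f}s")

Here the schedule never deepens: the retry slated for 02:05:37.83 succeeds, and the kube-proxy-jxwm7 sandbox is created within the next second.
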
Dec 13 02:05:37.139296 systemd[1]: Started cri-containerd-2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f.scope. Dec 13 02:05:37.174932 env[1405]: time="2024-12-13T02:05:37.174124008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vcktq,Uid:77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\"" Dec 13 02:05:37.183623 env[1405]: time="2024-12-13T02:05:37.183560430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:05:37.193089 env[1405]: time="2024-12-13T02:05:37.193049749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mg54f,Uid:d1dc6477-e637-4b9b-93d8-b079df2242c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\"" Dec 13 02:05:37.330532 kubelet[2377]: E1213 02:05:37.330400 2377 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 13 02:05:37.330900 kubelet[2377]: E1213 02:05:37.330726 2377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1b102334-9d98-462d-8a0d-b93c9c9ce3ae-kube-proxy podName:1b102334-9d98-462d-8a0d-b93c9c9ce3ae nodeName:}" failed. No retries permitted until 2024-12-13 02:05:37.830483336 +0000 UTC m=+6.789947522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1b102334-9d98-462d-8a0d-b93c9c9ce3ae-kube-proxy") pod "kube-proxy-jxwm7" (UID: "1b102334-9d98-462d-8a0d-b93c9c9ce3ae") : failed to sync configmap cache: timed out waiting for the condition Dec 13 02:05:37.934013 env[1405]: time="2024-12-13T02:05:37.933961295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxwm7,Uid:1b102334-9d98-462d-8a0d-b93c9c9ce3ae,Namespace:kube-system,Attempt:0,}" Dec 13 02:05:37.990310 env[1405]: time="2024-12-13T02:05:37.990238350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:05:37.990310 env[1405]: time="2024-12-13T02:05:37.990272848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:05:37.990557 env[1405]: time="2024-12-13T02:05:37.990286447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:05:37.990868 env[1405]: time="2024-12-13T02:05:37.990816914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f9d013fc06e3ef054aa47750fa2eeca6d33bc7d4d34b3545e70e778a4e59260 pid=2550 runtime=io.containerd.runc.v2 Dec 13 02:05:38.015486 systemd[1]: Started cri-containerd-4f9d013fc06e3ef054aa47750fa2eeca6d33bc7d4d34b3545e70e778a4e59260.scope. 
Dec 13 02:05:38.041035 env[1405]: time="2024-12-13T02:05:38.040995795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxwm7,Uid:1b102334-9d98-462d-8a0d-b93c9c9ce3ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f9d013fc06e3ef054aa47750fa2eeca6d33bc7d4d34b3545e70e778a4e59260\"" Dec 13 02:05:38.045039 env[1405]: time="2024-12-13T02:05:38.045000655Z" level=info msg="CreateContainer within sandbox \"4f9d013fc06e3ef054aa47750fa2eeca6d33bc7d4d34b3545e70e778a4e59260\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:05:38.091064 env[1405]: time="2024-12-13T02:05:38.091021497Z" level=info msg="CreateContainer within sandbox \"4f9d013fc06e3ef054aa47750fa2eeca6d33bc7d4d34b3545e70e778a4e59260\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca94a3eef88e29eacd8878a7c5681288ee5b9b3ea2ef0e14b76849571831ee99\"" Dec 13 02:05:38.092376 env[1405]: time="2024-12-13T02:05:38.091505868Z" level=info msg="StartContainer for \"ca94a3eef88e29eacd8878a7c5681288ee5b9b3ea2ef0e14b76849571831ee99\"" Dec 13 02:05:38.108313 systemd[1]: Started cri-containerd-ca94a3eef88e29eacd8878a7c5681288ee5b9b3ea2ef0e14b76849571831ee99.scope. Dec 13 02:05:38.146086 env[1405]: time="2024-12-13T02:05:38.146039500Z" level=info msg="StartContainer for \"ca94a3eef88e29eacd8878a7c5681288ee5b9b3ea2ef0e14b76849571831ee99\" returns successfully" Dec 13 02:05:38.339744 systemd[1]: run-containerd-runc-k8s.io-4f9d013fc06e3ef054aa47750fa2eeca6d33bc7d4d34b3545e70e778a4e59260-runc.jH6N4q.mount: Deactivated successfully. Dec 13 02:05:38.543795 kubelet[2377]: I1213 02:05:38.543716 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxwm7" podStartSLOduration=2.543694268 podStartE2EDuration="2.543694268s" podCreationTimestamp="2024-12-13 02:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:05:38.543219297 +0000 UTC m=+7.502683483" watchObservedRunningTime="2024-12-13 02:05:38.543694268 +0000 UTC m=+7.503158454" Dec 13 02:05:48.020494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939704860.mount: Deactivated successfully. 
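
Each RunPodSandbox entry above returns a 64-hex sandbox ID that later CreateContainer and StartContainer entries refer back to, so a sandbox-to-pod map is enough to follow one pod through the rest of this log. A sketch under the same saved-log assumption (node.log is a hypothetical path):

import re

SANDBOX = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),'
    r'Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),Attempt:\d+,\}'
    r' returns sandbox id "(?P<sid>[0-9a-f]{64})"'
)

def sandbox_map(log_text: str) -> dict:
    text = log_text.replace('\\"', '"')  # unescape nested quotes
    return {m["sid"]: f'{m["ns"]}/{m["name"]}' for m in SANDBOX.finditer(text)}

if __name__ == "__main__":
    with open("node.log") as f:
        for sid, pod in sandbox_map(f.read()).items():
            print(sid[:12], pod)
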
Dec 13 02:05:50.722200 env[1405]: time="2024-12-13T02:05:50.722107734Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:50.730480 env[1405]: time="2024-12-13T02:05:50.730442539Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:50.735977 env[1405]: time="2024-12-13T02:05:50.735947679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:50.736459 env[1405]: time="2024-12-13T02:05:50.736424456Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:05:50.739478 env[1405]: time="2024-12-13T02:05:50.739444713Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:05:50.740336 env[1405]: time="2024-12-13T02:05:50.740309073Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:05:50.774327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375981897.mount: Deactivated successfully. Dec 13 02:05:50.788341 env[1405]: time="2024-12-13T02:05:50.788303902Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\"" Dec 13 02:05:50.789964 env[1405]: time="2024-12-13T02:05:50.788937872Z" level=info msg="StartContainer for \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\"" Dec 13 02:05:50.813481 systemd[1]: Started cri-containerd-7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c.scope. Dec 13 02:05:50.848040 env[1405]: time="2024-12-13T02:05:50.847996577Z" level=info msg="StartContainer for \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\" returns successfully" Dec 13 02:05:50.853402 systemd[1]: cri-containerd-7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c.scope: Deactivated successfully. Dec 13 02:05:51.772259 systemd[1]: run-containerd-runc-k8s.io-7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c-runc.HLRVGF.mount: Deactivated successfully. Dec 13 02:05:51.772577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c-rootfs.mount: Deactivated successfully. 
Dec 13 02:05:54.503615 env[1405]: time="2024-12-13T02:05:54.503559019Z" level=info msg="shim disconnected" id=7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c Dec 13 02:05:54.504106 env[1405]: time="2024-12-13T02:05:54.504071196Z" level=warning msg="cleaning up after shim disconnected" id=7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c namespace=k8s.io Dec 13 02:05:54.504106 env[1405]: time="2024-12-13T02:05:54.504096995Z" level=info msg="cleaning up dead shim" Dec 13 02:05:54.512091 env[1405]: time="2024-12-13T02:05:54.512051945Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:05:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2801 runtime=io.containerd.runc.v2\n" Dec 13 02:05:54.566919 env[1405]: time="2024-12-13T02:05:54.566874428Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:05:54.607292 env[1405]: time="2024-12-13T02:05:54.607202051Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\"" Dec 13 02:05:54.609158 env[1405]: time="2024-12-13T02:05:54.608094512Z" level=info msg="StartContainer for \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\"" Dec 13 02:05:54.632604 systemd[1]: Started cri-containerd-7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743.scope. Dec 13 02:05:54.664221 env[1405]: time="2024-12-13T02:05:54.664179840Z" level=info msg="StartContainer for \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\" returns successfully" Dec 13 02:05:54.671104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:05:54.671692 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:05:54.673493 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:05:54.675143 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:05:54.678014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:05:54.679072 systemd[1]: cri-containerd-7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743.scope: Deactivated successfully. Dec 13 02:05:54.691169 systemd[1]: Finished systemd-sysctl.service. 
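
mount-cgroup and apply-sysctl-overwrites above are Cilium init containers, so their normal life is short: StartContainer returns, the cri-containerd-<id>.scope deactivates, and containerd logs shim disconnected followed by cleaning up dead shim. Grouping those events by container ID makes run-to-completion containers easy to tell apart from crashes; a sketch, same saved-log assumption:

import re
from collections import defaultdict

EVENTS = {
    "started":    re.compile(r'StartContainer for "(?P<id>[0-9a-f]{64})" returns successfully'),
    "scope-done": re.compile(r'cri-containerd-(?P<id>[0-9a-f]{64})\.scope: Deactivated successfully'),
    "shim-gone":  re.compile(r'shim disconnected" id=(?P<id>[0-9a-f]{64})'),
}

def lifecycle(log_text: str) -> dict:
    text = log_text.replace('\\"', '"')  # unescape nested quotes
    seen = defaultdict(set)
    for name, pattern in EVENTS.items():
        for m in pattern.finditer(text):
            seen[m["id"]].add(name)
    return seen

if __name__ == "__main__":
    with open("node.log") as f:
        for cid, events in lifecycle(f.read()).items():
            print(cid[:12], sorted(events))
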
Dec 13 02:05:54.714882 env[1405]: time="2024-12-13T02:05:54.714840607Z" level=info msg="shim disconnected" id=7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743 Dec 13 02:05:54.715082 env[1405]: time="2024-12-13T02:05:54.714882005Z" level=warning msg="cleaning up after shim disconnected" id=7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743 namespace=k8s.io Dec 13 02:05:54.715082 env[1405]: time="2024-12-13T02:05:54.714894105Z" level=info msg="cleaning up dead shim" Dec 13 02:05:54.723149 env[1405]: time="2024-12-13T02:05:54.723111443Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:05:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2868 runtime=io.containerd.runc.v2\n" Dec 13 02:05:55.568321 env[1405]: time="2024-12-13T02:05:55.568273318Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:05:55.594646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743-rootfs.mount: Deactivated successfully. Dec 13 02:05:55.606855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008809646.mount: Deactivated successfully. Dec 13 02:05:55.628294 env[1405]: time="2024-12-13T02:05:55.628246420Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\"" Dec 13 02:05:55.629864 env[1405]: time="2024-12-13T02:05:55.628929490Z" level=info msg="StartContainer for \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\"" Dec 13 02:05:55.655752 systemd[1]: Started cri-containerd-b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f.scope. Dec 13 02:05:55.689821 systemd[1]: cri-containerd-b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f.scope: Deactivated successfully. 
Dec 13 02:05:55.695134 env[1405]: time="2024-12-13T02:05:55.695097724Z" level=info msg="StartContainer for \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\" returns successfully" Dec 13 02:05:55.737022 env[1405]: time="2024-12-13T02:05:55.736954410Z" level=info msg="shim disconnected" id=b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f Dec 13 02:05:55.737288 env[1405]: time="2024-12-13T02:05:55.737267997Z" level=warning msg="cleaning up after shim disconnected" id=b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f namespace=k8s.io Dec 13 02:05:55.737404 env[1405]: time="2024-12-13T02:05:55.737389891Z" level=info msg="cleaning up dead shim" Dec 13 02:05:55.751963 env[1405]: time="2024-12-13T02:05:55.751926262Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:05:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n" Dec 13 02:05:56.307409 env[1405]: time="2024-12-13T02:05:56.307336922Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:56.316534 env[1405]: time="2024-12-13T02:05:56.316495831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:56.320663 env[1405]: time="2024-12-13T02:05:56.320630755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:05:56.321039 env[1405]: time="2024-12-13T02:05:56.321008139Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:05:56.323920 env[1405]: time="2024-12-13T02:05:56.323881917Z" level=info msg="CreateContainer within sandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:05:56.353305 env[1405]: time="2024-12-13T02:05:56.353268365Z" level=info msg="CreateContainer within sandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\"" Dec 13 02:05:56.354692 env[1405]: time="2024-12-13T02:05:56.354662605Z" level=info msg="StartContainer for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\"" Dec 13 02:05:56.370551 systemd[1]: Started cri-containerd-b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c.scope. 
Dec 13 02:05:56.403013 env[1405]: time="2024-12-13T02:05:56.402958148Z" level=info msg="StartContainer for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" returns successfully" Dec 13 02:05:56.579978 env[1405]: time="2024-12-13T02:05:56.579865612Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:05:56.615087 env[1405]: time="2024-12-13T02:05:56.615020415Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\"" Dec 13 02:05:56.615753 env[1405]: time="2024-12-13T02:05:56.615716485Z" level=info msg="StartContainer for \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\"" Dec 13 02:05:56.644438 systemd[1]: Started cri-containerd-580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727.scope. Dec 13 02:05:56.699402 env[1405]: time="2024-12-13T02:05:56.699343122Z" level=info msg="StartContainer for \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\" returns successfully" Dec 13 02:05:56.705880 systemd[1]: cri-containerd-580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727.scope: Deactivated successfully. Dec 13 02:05:56.770393 kubelet[2377]: I1213 02:05:56.770314 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mg54f" podStartSLOduration=1.64254978 podStartE2EDuration="20.7702875s" podCreationTimestamp="2024-12-13 02:05:36 +0000 UTC" firstStartedPulling="2024-12-13 02:05:37.194162781 +0000 UTC m=+6.153626967" lastFinishedPulling="2024-12-13 02:05:56.321900501 +0000 UTC m=+25.281364687" observedRunningTime="2024-12-13 02:05:56.645081534 +0000 UTC m=+25.604545820" watchObservedRunningTime="2024-12-13 02:05:56.7702875 +0000 UTC m=+25.729751786" Dec 13 02:05:57.151733 env[1405]: time="2024-12-13T02:05:57.151683959Z" level=info msg="shim disconnected" id=580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727 Dec 13 02:05:57.151969 env[1405]: time="2024-12-13T02:05:57.151948348Z" level=warning msg="cleaning up after shim disconnected" id=580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727 namespace=k8s.io Dec 13 02:05:57.152063 env[1405]: time="2024-12-13T02:05:57.152051044Z" level=info msg="cleaning up dead shim" Dec 13 02:05:57.168493 env[1405]: time="2024-12-13T02:05:57.168441857Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:05:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3020 runtime=io.containerd.runc.v2\n" Dec 13 02:05:57.578407 env[1405]: time="2024-12-13T02:05:57.578338184Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:05:57.594523 systemd[1]: run-containerd-runc-k8s.io-580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727-runc.Q1zVv6.mount: Deactivated successfully. Dec 13 02:05:57.594634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727-rootfs.mount: Deactivated successfully. 
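
Unlike the static pods, the cilium-operator startup entry above carries real pull timestamps, so the image pull time is recoverable: lastFinishedPulling minus firstStartedPulling is about 19.1s of the 20.77s podStartSLOduration. Go prints nanosecond fractions while Python's %f parses microseconds, so the fraction has to be truncated; a minimal sketch:

from datetime import datetime

def parse_kube_time(ts: str) -> datetime:
    # e.g. "2024-12-13 02:05:37.194162781 +0000 UTC"
    ts = ts.replace(" UTC", "")
    head, frac_zone = ts.split(".", 1)
    frac, zone = frac_zone.split(" ", 1)
    # Truncate nanoseconds to microseconds for strptime's %f.
    return datetime.strptime(f"{head}.{frac[:6]} {zone}", "%Y-%m-%d %H:%M:%S.%f %z")

first = parse_kube_time("2024-12-13 02:05:37.194162781 +0000 UTC")
last = parse_kube_time("2024-12-13 02:05:56.321900501 +0000 UTC")
print(f"image pull took {(last - first).total_seconds():.3f}s")  # ~19.128s
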
Dec 13 02:05:57.625706 env[1405]: time="2024-12-13T02:05:57.625664901Z" level=info msg="CreateContainer within sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\"" Dec 13 02:05:57.626654 env[1405]: time="2024-12-13T02:05:57.626618261Z" level=info msg="StartContainer for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\"" Dec 13 02:05:57.655282 systemd[1]: Started cri-containerd-b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0.scope. Dec 13 02:05:57.696696 env[1405]: time="2024-12-13T02:05:57.696647327Z" level=info msg="StartContainer for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" returns successfully" Dec 13 02:05:57.865320 kubelet[2377]: I1213 02:05:57.864106 2377 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 02:05:57.920256 systemd[1]: Created slice kubepods-burstable-pod2794f9a2_d415_4af3_94bf_9c04fc668bd2.slice. Dec 13 02:05:57.933556 systemd[1]: Created slice kubepods-burstable-podb2237094_2b4f_4157_93c8_17c4df49337f.slice. Dec 13 02:05:57.980625 kubelet[2377]: I1213 02:05:57.980578 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2794f9a2-d415-4af3-94bf-9c04fc668bd2-config-volume\") pod \"coredns-6f6b679f8f-2wdkc\" (UID: \"2794f9a2-d415-4af3-94bf-9c04fc668bd2\") " pod="kube-system/coredns-6f6b679f8f-2wdkc" Dec 13 02:05:57.980953 kubelet[2377]: I1213 02:05:57.980931 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjkr\" (UniqueName: \"kubernetes.io/projected/2794f9a2-d415-4af3-94bf-9c04fc668bd2-kube-api-access-svjkr\") pod \"coredns-6f6b679f8f-2wdkc\" (UID: \"2794f9a2-d415-4af3-94bf-9c04fc668bd2\") " pod="kube-system/coredns-6f6b679f8f-2wdkc" Dec 13 02:05:58.082160 kubelet[2377]: I1213 02:05:58.082111 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2237094-2b4f-4157-93c8-17c4df49337f-config-volume\") pod \"coredns-6f6b679f8f-72z6d\" (UID: \"b2237094-2b4f-4157-93c8-17c4df49337f\") " pod="kube-system/coredns-6f6b679f8f-72z6d" Dec 13 02:05:58.082420 kubelet[2377]: I1213 02:05:58.082401 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xxw\" (UniqueName: \"kubernetes.io/projected/b2237094-2b4f-4157-93c8-17c4df49337f-kube-api-access-j5xxw\") pod \"coredns-6f6b679f8f-72z6d\" (UID: \"b2237094-2b4f-4157-93c8-17c4df49337f\") " pod="kube-system/coredns-6f6b679f8f-72z6d" Dec 13 02:05:58.224971 env[1405]: time="2024-12-13T02:05:58.224923447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2wdkc,Uid:2794f9a2-d415-4af3-94bf-9c04fc668bd2,Namespace:kube-system,Attempt:0,}" Dec 13 02:05:58.238239 env[1405]: time="2024-12-13T02:05:58.238186001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-72z6d,Uid:b2237094-2b4f-4157-93c8-17c4df49337f,Namespace:kube-system,Attempt:0,}" Dec 13 02:05:58.600911 systemd[1]: run-containerd-runc-k8s.io-b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0-runc.zv3Vm8.mount: Deactivated successfully. 
Dec 13 02:05:58.606613 kubelet[2377]: I1213 02:05:58.606549 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vcktq" podStartSLOduration=9.049971201 podStartE2EDuration="22.606525519s" podCreationTimestamp="2024-12-13 02:05:36 +0000 UTC" firstStartedPulling="2024-12-13 02:05:37.181139978 +0000 UTC m=+6.140604164" lastFinishedPulling="2024-12-13 02:05:50.737694296 +0000 UTC m=+19.697158482" observedRunningTime="2024-12-13 02:05:58.601776015 +0000 UTC m=+27.561240301" watchObservedRunningTime="2024-12-13 02:05:58.606525519 +0000 UTC m=+27.565989705" Dec 13 02:06:00.490432 systemd-networkd[1532]: cilium_host: Link UP Dec 13 02:06:00.492632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:06:00.492710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:06:00.494280 systemd-networkd[1532]: cilium_net: Link UP Dec 13 02:06:00.494706 systemd-networkd[1532]: cilium_net: Gained carrier Dec 13 02:06:00.494925 systemd-networkd[1532]: cilium_host: Gained carrier Dec 13 02:06:00.549430 systemd-networkd[1532]: cilium_net: Gained IPv6LL Dec 13 02:06:00.704345 systemd-networkd[1532]: cilium_vxlan: Link UP Dec 13 02:06:00.704365 systemd-networkd[1532]: cilium_vxlan: Gained carrier Dec 13 02:06:01.035374 kernel: NET: Registered PF_ALG protocol family Dec 13 02:06:01.429586 systemd-networkd[1532]: cilium_host: Gained IPv6LL Dec 13 02:06:01.960870 systemd-networkd[1532]: lxc_health: Link UP Dec 13 02:06:01.973382 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:06:01.970469 systemd-networkd[1532]: lxc_health: Gained carrier Dec 13 02:06:02.305788 systemd-networkd[1532]: lxc3a00e6193749: Link UP Dec 13 02:06:02.311625 kernel: eth0: renamed from tmp322ef Dec 13 02:06:02.321546 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3a00e6193749: link becomes ready Dec 13 02:06:02.326521 systemd-networkd[1532]: lxc3a00e6193749: Gained carrier Dec 13 02:06:02.328614 systemd-networkd[1532]: lxcf46a90f2c30a: Link UP Dec 13 02:06:02.341535 kernel: eth0: renamed from tmpf6ae4 Dec 13 02:06:02.355423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf46a90f2c30a: link becomes ready Dec 13 02:06:02.357528 systemd-networkd[1532]: lxcf46a90f2c30a: Gained carrier Dec 13 02:06:02.709559 systemd-networkd[1532]: cilium_vxlan: Gained IPv6LL Dec 13 02:06:03.413507 systemd-networkd[1532]: lxc_health: Gained IPv6LL Dec 13 02:06:03.477474 systemd-networkd[1532]: lxc3a00e6193749: Gained IPv6LL Dec 13 02:06:04.374515 systemd-networkd[1532]: lxcf46a90f2c30a: Gained IPv6LL Dec 13 02:06:06.031655 env[1405]: time="2024-12-13T02:06:06.031584881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:06:06.032181 env[1405]: time="2024-12-13T02:06:06.032148360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:06:06.032313 env[1405]: time="2024-12-13T02:06:06.032288155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:06:06.033328 env[1405]: time="2024-12-13T02:06:06.032562345Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/322efd9372f34f365484968f08a037ca2eb75c5692fec3f80ff2183410a68523 pid=3568 runtime=io.containerd.runc.v2 Dec 13 02:06:06.057812 systemd[1]: Started cri-containerd-322efd9372f34f365484968f08a037ca2eb75c5692fec3f80ff2183410a68523.scope. Dec 13 02:06:06.064062 systemd[1]: run-containerd-runc-k8s.io-322efd9372f34f365484968f08a037ca2eb75c5692fec3f80ff2183410a68523-runc.ACFkp6.mount: Deactivated successfully. Dec 13 02:06:06.135755 env[1405]: time="2024-12-13T02:06:06.135707882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2wdkc,Uid:2794f9a2-d415-4af3-94bf-9c04fc668bd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"322efd9372f34f365484968f08a037ca2eb75c5692fec3f80ff2183410a68523\"" Dec 13 02:06:06.138688 env[1405]: time="2024-12-13T02:06:06.138652074Z" level=info msg="CreateContainer within sandbox \"322efd9372f34f365484968f08a037ca2eb75c5692fec3f80ff2183410a68523\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:06:06.171915 env[1405]: time="2024-12-13T02:06:06.171851063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:06:06.172147 env[1405]: time="2024-12-13T02:06:06.172117553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:06:06.172277 env[1405]: time="2024-12-13T02:06:06.172254248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:06:06.172602 env[1405]: time="2024-12-13T02:06:06.172558637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6ae4f8bdf888ed9ebba344f81e27cb0d845220751dda7db91beadad66267981 pid=3610 runtime=io.containerd.runc.v2 Dec 13 02:06:06.176937 env[1405]: time="2024-12-13T02:06:06.176884980Z" level=info msg="CreateContainer within sandbox \"322efd9372f34f365484968f08a037ca2eb75c5692fec3f80ff2183410a68523\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02decf678cf1ac6eea6c807be49eca64d05284050121b4f966080e6524c122f0\"" Dec 13 02:06:06.177944 env[1405]: time="2024-12-13T02:06:06.177906942Z" level=info msg="StartContainer for \"02decf678cf1ac6eea6c807be49eca64d05284050121b4f966080e6524c122f0\"" Dec 13 02:06:06.202711 systemd[1]: Started cri-containerd-f6ae4f8bdf888ed9ebba344f81e27cb0d845220751dda7db91beadad66267981.scope. Dec 13 02:06:06.218430 systemd[1]: Started cri-containerd-02decf678cf1ac6eea6c807be49eca64d05284050121b4f966080e6524c122f0.scope. 
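[Annotation] The pod_startup_latency_tracker entry for cilium-vcktq above carries enough timestamps to re-derive both reported durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and the logged values are consistent with podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The coredns entries further down, whose pull timestamps are Go's zero time, accordingly report identical SLO and E2E durations. A quick check in Go, with the " m=+..." monotonic suffixes dropped since time.Parse does not accept them:

```go
// Re-derives the cilium-vcktq startup numbers from the kubelet entry above.
// Timestamps are copied from the log, minus the monotonic " m=+..." suffix.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 02:05:36 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2024-12-13 02:05:37.181139978 +0000 UTC") // firstStartedPulling
	lastPull := parse("2024-12-13 02:05:50.737694296 +0000 UTC")  // lastFinishedPulling
	observed := parse("2024-12-13 02:05:58.606525519 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)    // 22.606525519s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 13.556554318s spent pulling images
	slo := e2e - pull               // 9.049971201s  == podStartSLOduration
	fmt.Println(e2e, pull, slo)
}
```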
Dec 13 02:06:06.296385 env[1405]: time="2024-12-13T02:06:06.295480653Z" level=info msg="StartContainer for \"02decf678cf1ac6eea6c807be49eca64d05284050121b4f966080e6524c122f0\" returns successfully" Dec 13 02:06:06.303607 env[1405]: time="2024-12-13T02:06:06.303566558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-72z6d,Uid:b2237094-2b4f-4157-93c8-17c4df49337f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6ae4f8bdf888ed9ebba344f81e27cb0d845220751dda7db91beadad66267981\"" Dec 13 02:06:06.306375 env[1405]: time="2024-12-13T02:06:06.306321857Z" level=info msg="CreateContainer within sandbox \"f6ae4f8bdf888ed9ebba344f81e27cb0d845220751dda7db91beadad66267981\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:06:06.341858 env[1405]: time="2024-12-13T02:06:06.341692567Z" level=info msg="CreateContainer within sandbox \"f6ae4f8bdf888ed9ebba344f81e27cb0d845220751dda7db91beadad66267981\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9520220e0bac8407cb83551a4fedc59650c58d02778f9fa5f1d76e888f56430\"" Dec 13 02:06:06.342561 env[1405]: time="2024-12-13T02:06:06.342519637Z" level=info msg="StartContainer for \"f9520220e0bac8407cb83551a4fedc59650c58d02778f9fa5f1d76e888f56430\"" Dec 13 02:06:06.368108 systemd[1]: Started cri-containerd-f9520220e0bac8407cb83551a4fedc59650c58d02778f9fa5f1d76e888f56430.scope. Dec 13 02:06:06.400444 env[1405]: time="2024-12-13T02:06:06.400408525Z" level=info msg="StartContainer for \"f9520220e0bac8407cb83551a4fedc59650c58d02778f9fa5f1d76e888f56430\" returns successfully" Dec 13 02:06:06.615509 kubelet[2377]: I1213 02:06:06.615368 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2wdkc" podStartSLOduration=30.615332783 podStartE2EDuration="30.615332783s" podCreationTimestamp="2024-12-13 02:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:06:06.614199025 +0000 UTC m=+35.573663311" watchObservedRunningTime="2024-12-13 02:06:06.615332783 +0000 UTC m=+35.574796969" Dec 13 02:06:06.643273 kubelet[2377]: I1213 02:06:06.643218 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-72z6d" podStartSLOduration=30.643196967 podStartE2EDuration="30.643196967s" podCreationTimestamp="2024-12-13 02:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:06:06.642616488 +0000 UTC m=+35.602080674" watchObservedRunningTime="2024-12-13 02:06:06.643196967 +0000 UTC m=+35.602661153" Dec 13 02:07:57.736898 systemd[1]: Started sshd@5-10.200.8.15:22-10.200.16.10:50498.service. Dec 13 02:07:58.362956 sshd[3744]: Accepted publickey for core from 10.200.16.10 port 50498 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:07:58.364496 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:07:58.369422 systemd-logind[1373]: New session 8 of user core. Dec 13 02:07:58.370099 systemd[1]: Started session-8.scope. Dec 13 02:07:58.921742 sshd[3744]: pam_unix(sshd:session): session closed for user core Dec 13 02:07:58.925278 systemd[1]: sshd@5-10.200.8.15:22-10.200.16.10:50498.service: Deactivated successfully. Dec 13 02:07:58.926343 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:07:58.926996 systemd-logind[1373]: Session 8 logged out. 
Waiting for processes to exit. Dec 13 02:07:58.927796 systemd-logind[1373]: Removed session 8. Dec 13 02:08:04.028976 systemd[1]: Started sshd@6-10.200.8.15:22-10.200.16.10:41794.service. Dec 13 02:08:04.653759 sshd[3757]: Accepted publickey for core from 10.200.16.10 port 41794 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:04.655446 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:04.661139 systemd-logind[1373]: New session 9 of user core. Dec 13 02:08:04.661527 systemd[1]: Started session-9.scope. Dec 13 02:08:05.158682 sshd[3757]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:05.162207 systemd[1]: sshd@6-10.200.8.15:22-10.200.16.10:41794.service: Deactivated successfully. Dec 13 02:08:05.163239 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:08:05.164065 systemd-logind[1373]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:08:05.164877 systemd-logind[1373]: Removed session 9. Dec 13 02:08:10.263542 systemd[1]: Started sshd@7-10.200.8.15:22-10.200.16.10:38376.service. Dec 13 02:08:10.897331 sshd[3772]: Accepted publickey for core from 10.200.16.10 port 38376 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:10.898812 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:10.903655 systemd-logind[1373]: New session 10 of user core. Dec 13 02:08:10.904160 systemd[1]: Started session-10.scope. Dec 13 02:08:11.399506 sshd[3772]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:11.403073 systemd[1]: sshd@7-10.200.8.15:22-10.200.16.10:38376.service: Deactivated successfully. Dec 13 02:08:11.404173 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:08:11.405039 systemd-logind[1373]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:08:11.406027 systemd-logind[1373]: Removed session 10. Dec 13 02:08:16.505942 systemd[1]: Started sshd@8-10.200.8.15:22-10.200.16.10:38392.service. Dec 13 02:08:17.132307 sshd[3784]: Accepted publickey for core from 10.200.16.10 port 38392 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:17.134401 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:17.139425 systemd[1]: Started session-11.scope. Dec 13 02:08:17.139879 systemd-logind[1373]: New session 11 of user core. Dec 13 02:08:17.631990 sshd[3784]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:17.635083 systemd[1]: sshd@8-10.200.8.15:22-10.200.16.10:38392.service: Deactivated successfully. Dec 13 02:08:17.636027 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:08:17.636801 systemd-logind[1373]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:08:17.637625 systemd-logind[1373]: Removed session 11. Dec 13 02:08:22.737079 systemd[1]: Started sshd@9-10.200.8.15:22-10.200.16.10:49916.service. Dec 13 02:08:23.367443 sshd[3797]: Accepted publickey for core from 10.200.16.10 port 49916 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:23.368842 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:23.373984 systemd-logind[1373]: New session 12 of user core. Dec 13 02:08:23.374490 systemd[1]: Started session-12.scope. 
Dec 13 02:08:23.868505 sshd[3797]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:23.871904 systemd[1]: sshd@9-10.200.8.15:22-10.200.16.10:49916.service: Deactivated successfully. Dec 13 02:08:23.873040 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:08:23.873912 systemd-logind[1373]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:08:23.874841 systemd-logind[1373]: Removed session 12. Dec 13 02:08:23.972496 systemd[1]: Started sshd@10-10.200.8.15:22-10.200.16.10:49930.service. Dec 13 02:08:24.598956 sshd[3810]: Accepted publickey for core from 10.200.16.10 port 49930 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:24.600596 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:24.605828 systemd[1]: Started session-13.scope. Dec 13 02:08:24.606284 systemd-logind[1373]: New session 13 of user core. Dec 13 02:08:25.137906 sshd[3810]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:25.141141 systemd-logind[1373]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:08:25.141405 systemd[1]: sshd@10-10.200.8.15:22-10.200.16.10:49930.service: Deactivated successfully. Dec 13 02:08:25.142374 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:08:25.143777 systemd-logind[1373]: Removed session 13. Dec 13 02:08:25.241202 systemd[1]: Started sshd@11-10.200.8.15:22-10.200.16.10:49936.service. Dec 13 02:08:25.863585 sshd[3819]: Accepted publickey for core from 10.200.16.10 port 49936 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:25.865283 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:25.870327 systemd[1]: Started session-14.scope. Dec 13 02:08:25.870985 systemd-logind[1373]: New session 14 of user core. Dec 13 02:08:26.365220 sshd[3819]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:26.368758 systemd[1]: sshd@11-10.200.8.15:22-10.200.16.10:49936.service: Deactivated successfully. Dec 13 02:08:26.369708 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:08:26.370427 systemd-logind[1373]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:08:26.371304 systemd-logind[1373]: Removed session 14. Dec 13 02:08:31.470423 systemd[1]: Started sshd@12-10.200.8.15:22-10.200.16.10:50654.service. Dec 13 02:08:32.095301 sshd[3833]: Accepted publickey for core from 10.200.16.10 port 50654 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:32.096876 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:32.102110 systemd[1]: Started session-15.scope. Dec 13 02:08:32.102611 systemd-logind[1373]: New session 15 of user core. Dec 13 02:08:32.592576 sshd[3833]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:32.595536 systemd[1]: sshd@12-10.200.8.15:22-10.200.16.10:50654.service: Deactivated successfully. Dec 13 02:08:32.596285 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:08:32.597265 systemd-logind[1373]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:08:32.598085 systemd-logind[1373]: Removed session 15. Dec 13 02:08:37.699163 systemd[1]: Started sshd@13-10.200.8.15:22-10.200.16.10:50670.service. 
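[Annotation] Sessions 8 through 13 above all follow the same four-step shape: a per-connection sshd@N-...service unit starts, pam_unix opens a session for user core, the session closes within a second or so, and systemd deactivates the scope; sub-second sessions repeating every five to six minutes from the same peer (10.200.16.10) look like an automated prober rather than interactive logins. A small sketch that pairs the open/close lines by sshd PID and prints per-session durations; it assumes journalctl-style one-entry-per-line input in the timestamp format shown here.

```go
// Sketch: pair "session opened"/"session closed" sshd lines by PID and report
// per-session durations. The line format is an assumption based on this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var re = regexp.MustCompile(`^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{} // sshd PID -> open time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// The format carries no year; durations within one day still work.
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		pid, state := m[2], m[3]
		if state == "opened" {
			opened[pid] = ts
		} else if t0, ok := opened[pid]; ok {
			fmt.Printf("sshd[%s]: session lasted %s\n", pid, ts.Sub(t0))
			delete(opened, pid)
		}
	}
}
```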
Dec 13 02:08:38.327887 sshd[3846]: Accepted publickey for core from 10.200.16.10 port 50670 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:38.329308 sshd[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:38.334301 systemd[1]: Started session-16.scope. Dec 13 02:08:38.334833 systemd-logind[1373]: New session 16 of user core. Dec 13 02:08:38.826407 sshd[3846]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:38.828820 systemd[1]: sshd@13-10.200.8.15:22-10.200.16.10:50670.service: Deactivated successfully. Dec 13 02:08:38.829742 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:08:38.830410 systemd-logind[1373]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:08:38.831158 systemd-logind[1373]: Removed session 16. Dec 13 02:08:38.929394 systemd[1]: Started sshd@14-10.200.8.15:22-10.200.16.10:53460.service. Dec 13 02:08:39.554133 sshd[3860]: Accepted publickey for core from 10.200.16.10 port 53460 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:39.555614 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:39.560515 systemd[1]: Started session-17.scope. Dec 13 02:08:39.560993 systemd-logind[1373]: New session 17 of user core. Dec 13 02:08:40.136688 sshd[3860]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:40.140000 systemd[1]: sshd@14-10.200.8.15:22-10.200.16.10:53460.service: Deactivated successfully. Dec 13 02:08:40.141158 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:08:40.142016 systemd-logind[1373]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:08:40.143099 systemd-logind[1373]: Removed session 17. Dec 13 02:08:40.243924 systemd[1]: Started sshd@15-10.200.8.15:22-10.200.16.10:53466.service. Dec 13 02:08:40.868150 sshd[3869]: Accepted publickey for core from 10.200.16.10 port 53466 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:40.869701 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:40.874441 systemd-logind[1373]: New session 18 of user core. Dec 13 02:08:40.874723 systemd[1]: Started session-18.scope. Dec 13 02:08:42.899229 sshd[3869]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:42.902539 systemd[1]: sshd@15-10.200.8.15:22-10.200.16.10:53466.service: Deactivated successfully. Dec 13 02:08:42.903420 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:08:42.904026 systemd-logind[1373]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:08:42.905364 systemd-logind[1373]: Removed session 18. Dec 13 02:08:43.003097 systemd[1]: Started sshd@16-10.200.8.15:22-10.200.16.10:53476.service. Dec 13 02:08:43.627128 sshd[3887]: Accepted publickey for core from 10.200.16.10 port 53476 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:43.628903 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:43.633981 systemd[1]: Started session-19.scope. Dec 13 02:08:43.634645 systemd-logind[1373]: New session 19 of user core. Dec 13 02:08:44.229873 sshd[3887]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:44.233423 systemd[1]: sshd@16-10.200.8.15:22-10.200.16.10:53476.service: Deactivated successfully. Dec 13 02:08:44.234562 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:08:44.235424 systemd-logind[1373]: Session 19 logged out. 
Waiting for processes to exit. Dec 13 02:08:44.236270 systemd-logind[1373]: Removed session 19. Dec 13 02:08:44.340637 systemd[1]: Started sshd@17-10.200.8.15:22-10.200.16.10:53488.service. Dec 13 02:08:44.966979 sshd[3896]: Accepted publickey for core from 10.200.16.10 port 53488 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:44.968558 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:44.975646 systemd[1]: Started session-20.scope. Dec 13 02:08:44.976464 systemd-logind[1373]: New session 20 of user core. Dec 13 02:08:45.465423 sshd[3896]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:45.468535 systemd[1]: sshd@17-10.200.8.15:22-10.200.16.10:53488.service: Deactivated successfully. Dec 13 02:08:45.469538 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:08:45.470301 systemd-logind[1373]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:08:45.471245 systemd-logind[1373]: Removed session 20. Dec 13 02:08:50.571422 systemd[1]: Started sshd@18-10.200.8.15:22-10.200.16.10:48098.service. Dec 13 02:08:51.198327 sshd[3914]: Accepted publickey for core from 10.200.16.10 port 48098 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:51.199007 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:51.204599 systemd[1]: Started session-21.scope. Dec 13 02:08:51.205045 systemd-logind[1373]: New session 21 of user core. Dec 13 02:08:51.697807 sshd[3914]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:51.701570 systemd[1]: sshd@18-10.200.8.15:22-10.200.16.10:48098.service: Deactivated successfully. Dec 13 02:08:51.702705 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:08:51.703593 systemd-logind[1373]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:08:51.704588 systemd-logind[1373]: Removed session 21. Dec 13 02:08:56.805746 systemd[1]: Started sshd@19-10.200.8.15:22-10.200.16.10:48108.service. Dec 13 02:08:57.430117 sshd[3925]: Accepted publickey for core from 10.200.16.10 port 48108 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:08:57.431766 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:57.437197 systemd[1]: Started session-22.scope. Dec 13 02:08:57.437712 systemd-logind[1373]: New session 22 of user core. Dec 13 02:08:57.926640 sshd[3925]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:57.929939 systemd[1]: sshd@19-10.200.8.15:22-10.200.16.10:48108.service: Deactivated successfully. Dec 13 02:08:57.930898 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:08:57.931609 systemd-logind[1373]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:08:57.932372 systemd-logind[1373]: Removed session 22. Dec 13 02:09:03.031529 systemd[1]: Started sshd@20-10.200.8.15:22-10.200.16.10:41866.service. Dec 13 02:09:03.656109 sshd[3937]: Accepted publickey for core from 10.200.16.10 port 41866 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:09:03.657682 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:03.663002 systemd-logind[1373]: New session 23 of user core. Dec 13 02:09:03.663567 systemd[1]: Started session-23.scope. 
Dec 13 02:09:04.160632 sshd[3937]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:04.164169 systemd[1]: sshd@20-10.200.8.15:22-10.200.16.10:41866.service: Deactivated successfully. Dec 13 02:09:04.165225 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:09:04.166067 systemd-logind[1373]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:09:04.166917 systemd-logind[1373]: Removed session 23. Dec 13 02:09:04.265526 systemd[1]: Started sshd@21-10.200.8.15:22-10.200.16.10:41876.service. Dec 13 02:09:04.890584 sshd[3949]: Accepted publickey for core from 10.200.16.10 port 41876 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:09:04.892911 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:04.898070 systemd-logind[1373]: New session 24 of user core. Dec 13 02:09:04.898587 systemd[1]: Started session-24.scope. Dec 13 02:09:06.535797 env[1405]: time="2024-12-13T02:09:06.535738511Z" level=info msg="StopContainer for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" with timeout 30 (s)" Dec 13 02:09:06.536253 env[1405]: time="2024-12-13T02:09:06.536221401Z" level=info msg="Stop container \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" with signal terminated" Dec 13 02:09:06.544204 systemd[1]: run-containerd-runc-k8s.io-b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0-runc.BWiJJ9.mount: Deactivated successfully. Dec 13 02:09:06.559611 systemd[1]: cri-containerd-b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c.scope: Deactivated successfully. Dec 13 02:09:06.575178 env[1405]: time="2024-12-13T02:09:06.575111951Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:09:06.581787 env[1405]: time="2024-12-13T02:09:06.581752606Z" level=info msg="StopContainer for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" with timeout 2 (s)" Dec 13 02:09:06.582100 env[1405]: time="2024-12-13T02:09:06.582071499Z" level=info msg="Stop container \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" with signal terminated" Dec 13 02:09:06.588765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c-rootfs.mount: Deactivated successfully. Dec 13 02:09:06.595274 systemd-networkd[1532]: lxc_health: Link DOWN Dec 13 02:09:06.595282 systemd-networkd[1532]: lxc_health: Lost carrier Dec 13 02:09:06.599021 kubelet[2377]: E1213 02:09:06.598988 2377 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:09:06.620752 systemd[1]: cri-containerd-b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0.scope: Deactivated successfully. Dec 13 02:09:06.621072 systemd[1]: cri-containerd-b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0.scope: Consumed 7.183s CPU time. Dec 13 02:09:06.640789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0-rootfs.mount: Deactivated successfully. 
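[Annotation] The teardown above shows both halves of a graceful container stop: containerd logs "StopContainer ... with timeout 30 (s)" and "Stop container ... with signal terminated", meaning SIGTERM is sent immediately and, per CRI semantics, SIGKILL follows only if the container outlives the timeout. Removing /etc/cni/net.d/05-cilium.conf then leaves the node with no CNI config (hence kubelet's NetworkReady=false error and the lxc_health carrier loss), and the cilium-agent container itself is stopped with a short 2-second timeout; the scope deactivation lines also surface cumulative CPU accounting ("Consumed 7.183s CPU time"). Over CRI the stop is a single call; a sketch with a placeholder ID:

```go
// Sketch: issue a CRI StopContainer with a 30s grace period, matching the
// "StopContainer ... with timeout 30 (s)" entry above. The runtime delivers
// SIGTERM first and escalates to SIGKILL only if the timeout lapses.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	if _, err := rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
		ContainerId: "b53e328360f3...", // placeholder; the full ID appears in the log
		Timeout:     30,                // grace period in seconds before SIGKILL
	}); err != nil {
		log.Fatal(err)
	}
}
```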
Dec 13 02:09:06.653031 env[1405]: time="2024-12-13T02:09:06.652989350Z" level=info msg="shim disconnected" id=b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c Dec 13 02:09:06.653209 env[1405]: time="2024-12-13T02:09:06.653031749Z" level=warning msg="cleaning up after shim disconnected" id=b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c namespace=k8s.io Dec 13 02:09:06.653209 env[1405]: time="2024-12-13T02:09:06.653044249Z" level=info msg="cleaning up dead shim" Dec 13 02:09:06.660184 env[1405]: time="2024-12-13T02:09:06.660137994Z" level=info msg="shim disconnected" id=b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0 Dec 13 02:09:06.660300 env[1405]: time="2024-12-13T02:09:06.660188193Z" level=warning msg="cleaning up after shim disconnected" id=b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0 namespace=k8s.io Dec 13 02:09:06.660300 env[1405]: time="2024-12-13T02:09:06.660199593Z" level=info msg="cleaning up dead shim" Dec 13 02:09:06.661034 env[1405]: time="2024-12-13T02:09:06.661001575Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4019 runtime=io.containerd.runc.v2\n" Dec 13 02:09:06.665917 env[1405]: time="2024-12-13T02:09:06.665880769Z" level=info msg="StopContainer for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" returns successfully" Dec 13 02:09:06.666592 env[1405]: time="2024-12-13T02:09:06.666559754Z" level=info msg="StopPodSandbox for \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\"" Dec 13 02:09:06.666697 env[1405]: time="2024-12-13T02:09:06.666632352Z" level=info msg="Container to stop \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:06.670407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5-shm.mount: Deactivated successfully. Dec 13 02:09:06.677727 env[1405]: time="2024-12-13T02:09:06.677675811Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4032 runtime=io.containerd.runc.v2\n" Dec 13 02:09:06.682807 systemd[1]: cri-containerd-62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5.scope: Deactivated successfully. 
Dec 13 02:09:06.684060 env[1405]: time="2024-12-13T02:09:06.684024473Z" level=info msg="StopContainer for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" returns successfully" Dec 13 02:09:06.684626 env[1405]: time="2024-12-13T02:09:06.684598860Z" level=info msg="StopPodSandbox for \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\"" Dec 13 02:09:06.684831 env[1405]: time="2024-12-13T02:09:06.684809955Z" level=info msg="Container to stop \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:06.684919 env[1405]: time="2024-12-13T02:09:06.684895354Z" level=info msg="Container to stop \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:06.684992 env[1405]: time="2024-12-13T02:09:06.684917653Z" level=info msg="Container to stop \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:06.684992 env[1405]: time="2024-12-13T02:09:06.684933953Z" level=info msg="Container to stop \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:06.684992 env[1405]: time="2024-12-13T02:09:06.684948652Z" level=info msg="Container to stop \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:06.690817 systemd[1]: cri-containerd-2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f.scope: Deactivated successfully. Dec 13 02:09:06.725409 env[1405]: time="2024-12-13T02:09:06.725343170Z" level=info msg="shim disconnected" id=62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5 Dec 13 02:09:06.725655 env[1405]: time="2024-12-13T02:09:06.725415969Z" level=warning msg="cleaning up after shim disconnected" id=62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5 namespace=k8s.io Dec 13 02:09:06.725655 env[1405]: time="2024-12-13T02:09:06.725428768Z" level=info msg="cleaning up dead shim" Dec 13 02:09:06.726537 env[1405]: time="2024-12-13T02:09:06.725378969Z" level=info msg="shim disconnected" id=2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f Dec 13 02:09:06.726745 env[1405]: time="2024-12-13T02:09:06.726719540Z" level=warning msg="cleaning up after shim disconnected" id=2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f namespace=k8s.io Dec 13 02:09:06.727053 env[1405]: time="2024-12-13T02:09:06.727032033Z" level=info msg="cleaning up dead shim" Dec 13 02:09:06.738547 env[1405]: time="2024-12-13T02:09:06.738512783Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n" Dec 13 02:09:06.738887 env[1405]: time="2024-12-13T02:09:06.738852575Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4086 runtime=io.containerd.runc.v2\n" Dec 13 02:09:06.739159 env[1405]: time="2024-12-13T02:09:06.739125569Z" level=info msg="TearDown network for sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" successfully" Dec 13 02:09:06.739236 env[1405]: time="2024-12-13T02:09:06.739160668Z" level=info msg="StopPodSandbox for 
\"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" returns successfully" Dec 13 02:09:06.739443 env[1405]: time="2024-12-13T02:09:06.739417463Z" level=info msg="TearDown network for sandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" successfully" Dec 13 02:09:06.739587 env[1405]: time="2024-12-13T02:09:06.739565960Z" level=info msg="StopPodSandbox for \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" returns successfully" Dec 13 02:09:06.850848 kubelet[2377]: I1213 02:09:06.848482 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-clustermesh-secrets\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.850848 kubelet[2377]: I1213 02:09:06.848587 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-etc-cni-netd\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.850848 kubelet[2377]: I1213 02:09:06.848652 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hostproc\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.850848 kubelet[2377]: I1213 02:09:06.848720 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hubble-tls\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.850848 kubelet[2377]: I1213 02:09:06.848786 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml2q4\" (UniqueName: \"kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-kube-api-access-ml2q4\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.850848 kubelet[2377]: I1213 02:09:06.848816 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1dc6477-e637-4b9b-93d8-b079df2242c3-cilium-config-path\") pod \"d1dc6477-e637-4b9b-93d8-b079df2242c3\" (UID: \"d1dc6477-e637-4b9b-93d8-b079df2242c3\") " Dec 13 02:09:06.851397 kubelet[2377]: I1213 02:09:06.848840 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-net\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851397 kubelet[2377]: I1213 02:09:06.848867 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-run\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851397 kubelet[2377]: I1213 02:09:06.848892 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-cgroup\") 
pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851397 kubelet[2377]: I1213 02:09:06.848922 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-config-path\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851397 kubelet[2377]: I1213 02:09:06.848948 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-lib-modules\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851397 kubelet[2377]: I1213 02:09:06.848974 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-xtables-lock\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851744 kubelet[2377]: I1213 02:09:06.849000 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-kernel\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851744 kubelet[2377]: I1213 02:09:06.849027 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-bpf-maps\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851744 kubelet[2377]: I1213 02:09:06.849056 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2qkl\" (UniqueName: \"kubernetes.io/projected/d1dc6477-e637-4b9b-93d8-b079df2242c3-kube-api-access-p2qkl\") pod \"d1dc6477-e637-4b9b-93d8-b079df2242c3\" (UID: \"d1dc6477-e637-4b9b-93d8-b079df2242c3\") " Dec 13 02:09:06.851744 kubelet[2377]: I1213 02:09:06.849085 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cni-path\") pod \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\" (UID: \"77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0\") " Dec 13 02:09:06.851744 kubelet[2377]: I1213 02:09:06.849178 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cni-path" (OuterVolumeSpecName: "cni-path") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.852129 kubelet[2377]: I1213 02:09:06.852064 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.852268 kubelet[2377]: I1213 02:09:06.852159 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.852435 kubelet[2377]: I1213 02:09:06.852412 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.852575 kubelet[2377]: I1213 02:09:06.852555 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hostproc" (OuterVolumeSpecName: "hostproc") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.855614 kubelet[2377]: I1213 02:09:06.855574 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:09:06.855755 kubelet[2377]: I1213 02:09:06.855655 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.855755 kubelet[2377]: I1213 02:09:06.855690 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.855755 kubelet[2377]: I1213 02:09:06.855725 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.855755 kubelet[2377]: I1213 02:09:06.855752 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.857768 kubelet[2377]: I1213 02:09:06.857733 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:09:06.860425 kubelet[2377]: I1213 02:09:06.860396 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1dc6477-e637-4b9b-93d8-b079df2242c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d1dc6477-e637-4b9b-93d8-b079df2242c3" (UID: "d1dc6477-e637-4b9b-93d8-b079df2242c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:09:06.860753 kubelet[2377]: I1213 02:09:06.860728 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:06.860910 kubelet[2377]: I1213 02:09:06.860890 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:06.862400 kubelet[2377]: I1213 02:09:06.862323 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-kube-api-access-ml2q4" (OuterVolumeSpecName: "kube-api-access-ml2q4") pod "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" (UID: "77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0"). InnerVolumeSpecName "kube-api-access-ml2q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:06.863044 kubelet[2377]: I1213 02:09:06.863013 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1dc6477-e637-4b9b-93d8-b079df2242c3-kube-api-access-p2qkl" (OuterVolumeSpecName: "kube-api-access-p2qkl") pod "d1dc6477-e637-4b9b-93d8-b079df2242c3" (UID: "d1dc6477-e637-4b9b-93d8-b079df2242c3"). InnerVolumeSpecName "kube-api-access-p2qkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:06.949442 kubelet[2377]: I1213 02:09:06.949395 2377 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cni-path\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949442 kubelet[2377]: I1213 02:09:06.949442 2377 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-etc-cni-netd\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949462 2377 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hostproc\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949478 2377 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-hubble-tls\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949493 2377 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-clustermesh-secrets\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949508 2377 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ml2q4\" (UniqueName: \"kubernetes.io/projected/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-kube-api-access-ml2q4\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949522 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1dc6477-e637-4b9b-93d8-b079df2242c3-cilium-config-path\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949535 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-run\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949549 2377 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-net\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.949727 kubelet[2377]: I1213 02:09:06.949563 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-cgroup\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.950153 kubelet[2377]: I1213 02:09:06.949576 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-cilium-config-path\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.950153 kubelet[2377]: I1213 02:09:06.949588 2377 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-lib-modules\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.950153 kubelet[2377]: I1213 02:09:06.949602 2377 
reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-xtables-lock\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.950153 kubelet[2377]: I1213 02:09:06.949616 2377 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.950153 kubelet[2377]: I1213 02:09:06.949630 2377 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p2qkl\" (UniqueName: \"kubernetes.io/projected/d1dc6477-e637-4b9b-93d8-b079df2242c3-kube-api-access-p2qkl\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.950153 kubelet[2377]: I1213 02:09:06.949644 2377 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0-bpf-maps\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\"" Dec 13 02:09:06.977190 kubelet[2377]: I1213 02:09:06.977155 2377 scope.go:117] "RemoveContainer" containerID="b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c" Dec 13 02:09:06.980093 env[1405]: time="2024-12-13T02:09:06.979786513Z" level=info msg="RemoveContainer for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\"" Dec 13 02:09:06.985662 systemd[1]: Removed slice kubepods-besteffort-podd1dc6477_e637_4b9b_93d8_b079df2242c3.slice. Dec 13 02:09:06.993656 systemd[1]: Removed slice kubepods-burstable-pod77b288c0_dbe9_4d7c_ad0d_3bd3be2e42f0.slice. Dec 13 02:09:06.993787 systemd[1]: kubepods-burstable-pod77b288c0_dbe9_4d7c_ad0d_3bd3be2e42f0.slice: Consumed 7.283s CPU time. 
Dec 13 02:09:06.996762 env[1405]: time="2024-12-13T02:09:06.996724343Z" level=info msg="RemoveContainer for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" returns successfully" Dec 13 02:09:06.997076 kubelet[2377]: I1213 02:09:06.997055 2377 scope.go:117] "RemoveContainer" containerID="b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c" Dec 13 02:09:06.997713 env[1405]: time="2024-12-13T02:09:06.997303431Z" level=error msg="ContainerStatus for \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\": not found" Dec 13 02:09:06.997920 kubelet[2377]: E1213 02:09:06.997898 2377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\": not found" containerID="b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c" Dec 13 02:09:06.998127 kubelet[2377]: I1213 02:09:06.998032 2377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c"} err="failed to get container status \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b53e328360f3335097e7f1a8388783189d06275bbca41251c2562021f8335b3c\": not found" Dec 13 02:09:06.998246 kubelet[2377]: I1213 02:09:06.998230 2377 scope.go:117] "RemoveContainer" containerID="b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0" Dec 13 02:09:07.000812 env[1405]: time="2024-12-13T02:09:07.000476561Z" level=info msg="RemoveContainer for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\"" Dec 13 02:09:07.010390 env[1405]: time="2024-12-13T02:09:07.009337868Z" level=info msg="RemoveContainer for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" returns successfully" Dec 13 02:09:07.011964 kubelet[2377]: I1213 02:09:07.011944 2377 scope.go:117] "RemoveContainer" containerID="580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727" Dec 13 02:09:07.017571 env[1405]: time="2024-12-13T02:09:07.017540589Z" level=info msg="RemoveContainer for \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\"" Dec 13 02:09:07.027587 env[1405]: time="2024-12-13T02:09:07.027549270Z" level=info msg="RemoveContainer for \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\" returns successfully" Dec 13 02:09:07.027807 kubelet[2377]: I1213 02:09:07.027789 2377 scope.go:117] "RemoveContainer" containerID="b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f" Dec 13 02:09:07.028997 env[1405]: time="2024-12-13T02:09:07.028966739Z" level=info msg="RemoveContainer for \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\"" Dec 13 02:09:07.037263 env[1405]: time="2024-12-13T02:09:07.037227159Z" level=info msg="RemoveContainer for \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\" returns successfully" Dec 13 02:09:07.037446 kubelet[2377]: I1213 02:09:07.037425 2377 scope.go:117] "RemoveContainer" containerID="7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743" Dec 13 02:09:07.038472 env[1405]: time="2024-12-13T02:09:07.038444833Z" level=info msg="RemoveContainer for 
\"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\"" Dec 13 02:09:07.048394 env[1405]: time="2024-12-13T02:09:07.048337117Z" level=info msg="RemoveContainer for \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\" returns successfully" Dec 13 02:09:07.048544 kubelet[2377]: I1213 02:09:07.048521 2377 scope.go:117] "RemoveContainer" containerID="7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c" Dec 13 02:09:07.049681 env[1405]: time="2024-12-13T02:09:07.049652988Z" level=info msg="RemoveContainer for \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\"" Dec 13 02:09:07.057233 env[1405]: time="2024-12-13T02:09:07.057197923Z" level=info msg="RemoveContainer for \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\" returns successfully" Dec 13 02:09:07.057378 kubelet[2377]: I1213 02:09:07.057341 2377 scope.go:117] "RemoveContainer" containerID="b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0" Dec 13 02:09:07.057655 env[1405]: time="2024-12-13T02:09:07.057592515Z" level=error msg="ContainerStatus for \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\": not found" Dec 13 02:09:07.057793 kubelet[2377]: E1213 02:09:07.057770 2377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\": not found" containerID="b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0" Dec 13 02:09:07.057881 kubelet[2377]: I1213 02:09:07.057799 2377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0"} err="failed to get container status \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5daa52918a57745a5ee84d2f3d5ac122dcf6f301b2d2ea5f02f00d4c142f3a0\": not found" Dec 13 02:09:07.057881 kubelet[2377]: I1213 02:09:07.057824 2377 scope.go:117] "RemoveContainer" containerID="580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727" Dec 13 02:09:07.058079 env[1405]: time="2024-12-13T02:09:07.058026905Z" level=error msg="ContainerStatus for \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\": not found" Dec 13 02:09:07.058209 kubelet[2377]: E1213 02:09:07.058184 2377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\": not found" containerID="580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727" Dec 13 02:09:07.058267 kubelet[2377]: I1213 02:09:07.058220 2377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727"} err="failed to get container status \"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"580812626a4237914ff9a138ebdf720678f235f20d40c223a1b29a79781c6727\": not found" Dec 13 02:09:07.058267 kubelet[2377]: I1213 02:09:07.058247 2377 scope.go:117] "RemoveContainer" containerID="b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f" Dec 13 02:09:07.058479 env[1405]: time="2024-12-13T02:09:07.058434396Z" level=error msg="ContainerStatus for \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\": not found" Dec 13 02:09:07.058583 kubelet[2377]: E1213 02:09:07.058559 2377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\": not found" containerID="b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f" Dec 13 02:09:07.058653 kubelet[2377]: I1213 02:09:07.058587 2377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f"} err="failed to get container status \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1e28d57d171606f6ff268bb2eb03811ed67b3f6e28f25fd5cd3d68cf113ed2f\": not found" Dec 13 02:09:07.058653 kubelet[2377]: I1213 02:09:07.058609 2377 scope.go:117] "RemoveContainer" containerID="7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743" Dec 13 02:09:07.058834 env[1405]: time="2024-12-13T02:09:07.058788689Z" level=error msg="ContainerStatus for \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\": not found" Dec 13 02:09:07.058937 kubelet[2377]: E1213 02:09:07.058914 2377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\": not found" containerID="7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743" Dec 13 02:09:07.059006 kubelet[2377]: I1213 02:09:07.058940 2377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743"} err="failed to get container status \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\": rpc error: code = NotFound desc = an error occurred when try to find container \"7309340d0bf48fe01121b48b1b2f2a63e5e6c7374f44053b3487cab825bd2743\": not found" Dec 13 02:09:07.059006 kubelet[2377]: I1213 02:09:07.058960 2377 scope.go:117] "RemoveContainer" containerID="7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c" Dec 13 02:09:07.059241 env[1405]: time="2024-12-13T02:09:07.059197880Z" level=error msg="ContainerStatus for \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\": not found" Dec 13 02:09:07.059365 kubelet[2377]: E1213 02:09:07.059322 2377 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\": not found" containerID="7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c" Dec 13 02:09:07.059443 kubelet[2377]: I1213 02:09:07.059372 2377 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c"} err="failed to get container status \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c84387c8ba9701050ac7b93dcfb750037a8e2f21478e099418bf409c769475c\": not found" Dec 13 02:09:07.134023 kubelet[2377]: I1213 02:09:07.132647 2377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" path="/var/lib/kubelet/pods/77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0/volumes" Dec 13 02:09:07.134023 kubelet[2377]: I1213 02:09:07.133609 2377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1dc6477-e637-4b9b-93d8-b079df2242c3" path="/var/lib/kubelet/pods/d1dc6477-e637-4b9b-93d8-b079df2242c3/volumes" Dec 13 02:09:07.540271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5-rootfs.mount: Deactivated successfully. Dec 13 02:09:07.540431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f-rootfs.mount: Deactivated successfully. Dec 13 02:09:07.540529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f-shm.mount: Deactivated successfully. Dec 13 02:09:07.540629 systemd[1]: var-lib-kubelet-pods-77b288c0\x2ddbe9\x2d4d7c\x2dad0d\x2d3bd3be2e42f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dml2q4.mount: Deactivated successfully. Dec 13 02:09:07.540737 systemd[1]: var-lib-kubelet-pods-d1dc6477\x2de637\x2d4b9b\x2d93d8\x2db079df2242c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2qkl.mount: Deactivated successfully. Dec 13 02:09:07.540849 systemd[1]: var-lib-kubelet-pods-77b288c0\x2ddbe9\x2d4d7c\x2dad0d\x2d3bd3be2e42f0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:09:07.540947 systemd[1]: var-lib-kubelet-pods-77b288c0\x2ddbe9\x2d4d7c\x2dad0d\x2d3bd3be2e42f0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:09:08.579106 sshd[3949]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:08.582844 systemd[1]: sshd@21-10.200.8.15:22-10.200.16.10:41876.service: Deactivated successfully. Dec 13 02:09:08.583864 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:09:08.584840 systemd-logind[1373]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:09:08.585836 systemd-logind[1373]: Removed session 24. Dec 13 02:09:08.690665 systemd[1]: Started sshd@22-10.200.8.15:22-10.200.16.10:55884.service. Dec 13 02:09:09.315588 sshd[4120]: Accepted publickey for core from 10.200.16.10 port 55884 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I Dec 13 02:09:09.317053 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:09.321987 systemd[1]: Started session-25.scope. Dec 13 02:09:09.322654 systemd-logind[1373]: New session 25 of user core. 
Dec 13 02:09:10.170398 kubelet[2377]: E1213 02:09:10.170334 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" containerName="mount-cgroup"
Dec 13 02:09:10.170398 kubelet[2377]: E1213 02:09:10.170394 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" containerName="apply-sysctl-overwrites"
Dec 13 02:09:10.170398 kubelet[2377]: E1213 02:09:10.170402 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" containerName="clean-cilium-state"
Dec 13 02:09:10.170946 kubelet[2377]: E1213 02:09:10.170413 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" containerName="mount-bpf-fs"
Dec 13 02:09:10.170946 kubelet[2377]: E1213 02:09:10.170420 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1dc6477-e637-4b9b-93d8-b079df2242c3" containerName="cilium-operator"
Dec 13 02:09:10.170946 kubelet[2377]: E1213 02:09:10.170428 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" containerName="cilium-agent"
Dec 13 02:09:10.170946 kubelet[2377]: I1213 02:09:10.170459 2377 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1dc6477-e637-4b9b-93d8-b079df2242c3" containerName="cilium-operator"
Dec 13 02:09:10.170946 kubelet[2377]: I1213 02:09:10.170469 2377 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b288c0-dbe9-4d7c-ad0d-3bd3be2e42f0" containerName="cilium-agent"
Dec 13 02:09:10.177112 systemd[1]: Created slice kubepods-burstable-pod0c445ed6_6ba6_4969_bd72_98a8aa29c77e.slice.
Dec 13 02:09:10.245628 sshd[4120]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:10.249115 systemd-logind[1373]: Session 25 logged out. Waiting for processes to exit.
Dec 13 02:09:10.250074 systemd[1]: sshd@22-10.200.8.15:22-10.200.16.10:55884.service: Deactivated successfully.
Dec 13 02:09:10.250980 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 02:09:10.252273 systemd-logind[1373]: Removed session 25.
Dec 13 02:09:10.267387 kubelet[2377]: I1213 02:09:10.267358 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-cgroup\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.267588 kubelet[2377]: I1213 02:09:10.267571 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-etc-cni-netd\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.267692 kubelet[2377]: I1213 02:09:10.267680 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-xtables-lock\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.267789 kubelet[2377]: I1213 02:09:10.267776 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-config-path\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.267881 kubelet[2377]: I1213 02:09:10.267869 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-ipsec-secrets\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.267965 kubelet[2377]: I1213 02:09:10.267954 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-kernel\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268063 kubelet[2377]: I1213 02:09:10.268051 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-bpf-maps\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268154 kubelet[2377]: I1213 02:09:10.268139 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hubble-tls\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268247 kubelet[2377]: I1213 02:09:10.268236 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cni-path\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268337 kubelet[2377]: I1213 02:09:10.268325 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-lib-modules\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268516 kubelet[2377]: I1213 02:09:10.268492 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-net\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268628 kubelet[2377]: I1213 02:09:10.268615 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-run\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268718 kubelet[2377]: I1213 02:09:10.268706 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hostproc\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268827 kubelet[2377]: I1213 02:09:10.268811 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrjmv\" (UniqueName: \"kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-kube-api-access-qrjmv\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.268936 kubelet[2377]: I1213 02:09:10.268924 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-clustermesh-secrets\") pod \"cilium-h5tbh\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") " pod="kube-system/cilium-h5tbh"
Dec 13 02:09:10.349142 systemd[1]: Started sshd@23-10.200.8.15:22-10.200.16.10:55900.service.
Dec 13 02:09:10.484950 env[1405]: time="2024-12-13T02:09:10.484815776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5tbh,Uid:0c445ed6-6ba6-4969-bd72-98a8aa29c77e,Namespace:kube-system,Attempt:0,}"
Dec 13 02:09:10.517152 env[1405]: time="2024-12-13T02:09:10.517067973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:09:10.517152 env[1405]: time="2024-12-13T02:09:10.517118872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:09:10.517152 env[1405]: time="2024-12-13T02:09:10.517132572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:09:10.517665 env[1405]: time="2024-12-13T02:09:10.517613961Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1 pid=4145 runtime=io.containerd.runc.v2
Dec 13 02:09:10.529318 systemd[1]: Started cri-containerd-2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1.scope.
Dec 13 02:09:10.561043 env[1405]: time="2024-12-13T02:09:10.561001916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5tbh,Uid:0c445ed6-6ba6-4969-bd72-98a8aa29c77e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\""
Dec 13 02:09:10.564753 env[1405]: time="2024-12-13T02:09:10.564712135Z" level=info msg="CreateContainer within sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:09:10.597047 env[1405]: time="2024-12-13T02:09:10.596997632Z" level=info msg="CreateContainer within sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\""
Dec 13 02:09:10.598897 env[1405]: time="2024-12-13T02:09:10.597719516Z" level=info msg="StartContainer for \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\""
Dec 13 02:09:10.614273 systemd[1]: Started cri-containerd-d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569.scope.
Dec 13 02:09:10.626817 systemd[1]: cri-containerd-d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569.scope: Deactivated successfully.
Dec 13 02:09:10.702081 env[1405]: time="2024-12-13T02:09:10.702022244Z" level=info msg="shim disconnected" id=d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569
Dec 13 02:09:10.702081 env[1405]: time="2024-12-13T02:09:10.702076943Z" level=warning msg="cleaning up after shim disconnected" id=d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569 namespace=k8s.io
Dec 13 02:09:10.702081 env[1405]: time="2024-12-13T02:09:10.702087743Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:10.709957 env[1405]: time="2024-12-13T02:09:10.709914072Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4201 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:09:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:09:10.710277 env[1405]: time="2024-12-13T02:09:10.710172166Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed"
Dec 13 02:09:10.711471 env[1405]: time="2024-12-13T02:09:10.711425439Z" level=error msg="Failed to pipe stderr of container \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\"" error="reading from a closed fifo"
Dec 13 02:09:10.714272 env[1405]: time="2024-12-13T02:09:10.714223178Z" level=error msg="Failed to pipe stdout of container \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\"" error="reading from a closed fifo"
Dec 13 02:09:10.719398 env[1405]: time="2024-12-13T02:09:10.719334767Z" level=error msg="StartContainer for \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:09:10.719652 kubelet[2377]: E1213 02:09:10.719618 2377 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569"
Dec 13 02:09:10.719822 kubelet[2377]: E1213 02:09:10.719796 2377 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 02:09:10.719822 kubelet[2377]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:09:10.719822 kubelet[2377]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:09:10.719822 kubelet[2377]: rm /hostbin/cilium-mount
Dec 13 02:09:10.720003 kubelet[2377]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrjmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h5tbh_kube-system(0c445ed6-6ba6-4969-bd72-98a8aa29c77e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:09:10.720003 kubelet[2377]: > logger="UnhandledError"
Dec 13 02:09:10.721383 kubelet[2377]: E1213 02:09:10.721317 2377 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h5tbh" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e"
Dec 13 02:09:10.975287 sshd[4130]: Accepted publickey for core from 10.200.16.10 port 55900 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:09:10.977008 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:09:10.982155 systemd[1]: Started session-26.scope.
Dec 13 02:09:10.982640 systemd-logind[1373]: New session 26 of user core.
Dec 13 02:09:11.003902 env[1405]: time="2024-12-13T02:09:11.003681773Z" level=info msg="CreateContainer within sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Dec 13 02:09:11.039637 env[1405]: time="2024-12-13T02:09:11.039587391Z" level=info msg="CreateContainer within sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\""
Dec 13 02:09:11.040377 env[1405]: time="2024-12-13T02:09:11.040324875Z" level=info msg="StartContainer for \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\""
Dec 13 02:09:11.058303 systemd[1]: Started cri-containerd-2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b.scope.
Dec 13 02:09:11.078343 systemd[1]: cri-containerd-2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b.scope: Deactivated successfully.
Dec 13 02:09:11.106625 env[1405]: time="2024-12-13T02:09:11.106567233Z" level=info msg="shim disconnected" id=2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b
Dec 13 02:09:11.106846 env[1405]: time="2024-12-13T02:09:11.106626931Z" level=warning msg="cleaning up after shim disconnected" id=2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b namespace=k8s.io
Dec 13 02:09:11.106846 env[1405]: time="2024-12-13T02:09:11.106638731Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:11.114139 env[1405]: time="2024-12-13T02:09:11.114092769Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4242 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:09:11Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:09:11.114421 env[1405]: time="2024-12-13T02:09:11.114335764Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed"
Dec 13 02:09:11.114667 env[1405]: time="2024-12-13T02:09:11.114618757Z" level=error msg="Failed to pipe stderr of container \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\"" error="reading from a closed fifo"
Dec 13 02:09:11.114768 env[1405]: time="2024-12-13T02:09:11.114634657Z" level=error msg="Failed to pipe stdout of container \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\"" error="reading from a closed fifo"
Dec 13 02:09:11.119311 env[1405]: time="2024-12-13T02:09:11.119270856Z" level=error msg="StartContainer for \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:09:11.119532 kubelet[2377]: E1213 02:09:11.119493 2377 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b"
Dec 13 02:09:11.119670 kubelet[2377]: E1213 02:09:11.119644 2377 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 02:09:11.119670 kubelet[2377]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:09:11.119670 kubelet[2377]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:09:11.119670 kubelet[2377]: rm /hostbin/cilium-mount
Dec 13 02:09:11.119670 kubelet[2377]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrjmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h5tbh_kube-system(0c445ed6-6ba6-4969-bd72-98a8aa29c77e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:09:11.119670 kubelet[2377]: > logger="UnhandledError"
Dec 13 02:09:11.121260 kubelet[2377]: E1213 02:09:11.121217 2377 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h5tbh" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e"
Dec 13 02:09:11.491917 sshd[4130]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:11.496050 systemd[1]: sshd@23-10.200.8.15:22-10.200.16.10:55900.service: Deactivated successfully.
Dec 13 02:09:11.497017 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:09:11.497817 systemd-logind[1373]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:09:11.499095 systemd-logind[1373]: Removed session 26.
Dec 13 02:09:11.594437 systemd[1]: Started sshd@24-10.200.8.15:22-10.200.16.10:55904.service.
Dec 13 02:09:11.600264 kubelet[2377]: E1213 02:09:11.600229 2377 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:09:11.999872 kubelet[2377]: I1213 02:09:11.999841 2377 scope.go:117] "RemoveContainer" containerID="d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569"
Dec 13 02:09:12.000917 env[1405]: time="2024-12-13T02:09:12.000866363Z" level=info msg="StopPodSandbox for \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\""
Dec 13 02:09:12.008534 env[1405]: time="2024-12-13T02:09:12.000926962Z" level=info msg="Container to stop \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:09:12.008534 env[1405]: time="2024-12-13T02:09:12.000945462Z" level=info msg="Container to stop \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:09:12.003897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1-shm.mount: Deactivated successfully.
Dec 13 02:09:12.009043 env[1405]: time="2024-12-13T02:09:12.009006186Z" level=info msg="RemoveContainer for \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\""
Dec 13 02:09:12.023378 systemd[1]: cri-containerd-2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1.scope: Deactivated successfully.
Dec 13 02:09:12.030305 env[1405]: time="2024-12-13T02:09:12.030266024Z" level=info msg="RemoveContainer for \"d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569\" returns successfully"
Dec 13 02:09:12.049283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1-rootfs.mount: Deactivated successfully.
Dec 13 02:09:12.062737 env[1405]: time="2024-12-13T02:09:12.062685218Z" level=info msg="shim disconnected" id=2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1
Dec 13 02:09:12.062908 env[1405]: time="2024-12-13T02:09:12.062847515Z" level=warning msg="cleaning up after shim disconnected" id=2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1 namespace=k8s.io
Dec 13 02:09:12.062908 env[1405]: time="2024-12-13T02:09:12.062867514Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:12.071038 env[1405]: time="2024-12-13T02:09:12.070997037Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4284 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:12.071326 env[1405]: time="2024-12-13T02:09:12.071291131Z" level=info msg="TearDown network for sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" successfully"
Dec 13 02:09:12.071326 env[1405]: time="2024-12-13T02:09:12.071321730Z" level=info msg="StopPodSandbox for \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" returns successfully"
Dec 13 02:09:12.182227 kubelet[2377]: I1213 02:09:12.182171 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrjmv\" (UniqueName: \"kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-kube-api-access-qrjmv\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182227 kubelet[2377]: I1213 02:09:12.182231 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-xtables-lock\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182263 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-ipsec-secrets\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182290 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-bpf-maps\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182315 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-cgroup\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182337 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-etc-cni-netd\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182386 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cni-path\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182410 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-lib-modules\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182432 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hostproc\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182463 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-config-path\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182486 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-net\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182511 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-run\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182543 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-clustermesh-secrets\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182571 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hubble-tls\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.182653 kubelet[2377]: I1213 02:09:12.182601 2377 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-kernel\") pod \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\" (UID: \"0c445ed6-6ba6-4969-bd72-98a8aa29c77e\") "
Dec 13 02:09:12.183502 kubelet[2377]: I1213 02:09:12.182706 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.184032 kubelet[2377]: I1213 02:09:12.183616 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.184032 kubelet[2377]: I1213 02:09:12.183698 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185024 kubelet[2377]: I1213 02:09:12.184982 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185189 kubelet[2377]: I1213 02:09:12.185169 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185311 kubelet[2377]: I1213 02:09:12.185295 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185429 kubelet[2377]: I1213 02:09:12.185414 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185542 kubelet[2377]: I1213 02:09:12.185528 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185656 kubelet[2377]: I1213 02:09:12.185637 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.185774 kubelet[2377]: I1213 02:09:12.185758 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:09:12.187711 kubelet[2377]: I1213 02:09:12.187681 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:09:12.191727 systemd[1]: var-lib-kubelet-pods-0c445ed6\x2d6ba6\x2d4969\x2dbd72\x2d98a8aa29c77e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqrjmv.mount: Deactivated successfully.
Dec 13 02:09:12.197298 kubelet[2377]: I1213 02:09:12.192346 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:09:12.197298 kubelet[2377]: I1213 02:09:12.192645 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-kube-api-access-qrjmv" (OuterVolumeSpecName: "kube-api-access-qrjmv") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "kube-api-access-qrjmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:09:12.191873 systemd[1]: var-lib-kubelet-pods-0c445ed6\x2d6ba6\x2d4969\x2dbd72\x2d98a8aa29c77e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:09:12.197617 kubelet[2377]: I1213 02:09:12.197593 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:09:12.197849 kubelet[2377]: I1213 02:09:12.197827 2377 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c445ed6-6ba6-4969-bd72-98a8aa29c77e" (UID: "0c445ed6-6ba6-4969-bd72-98a8aa29c77e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:09:12.198851 systemd[1]: var-lib-kubelet-pods-0c445ed6\x2d6ba6\x2d4969\x2dbd72\x2d98a8aa29c77e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:09:12.198957 systemd[1]: var-lib-kubelet-pods-0c445ed6\x2d6ba6\x2d4969\x2dbd72\x2d98a8aa29c77e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:09:12.219734 sshd[4263]: Accepted publickey for core from 10.200.16.10 port 55904 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:09:12.221146 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:09:12.225765 systemd-logind[1373]: New session 27 of user core.
Dec 13 02:09:12.226201 systemd[1]: Started session-27.scope.
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283460 2377 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-bpf-maps\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283504 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283520 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-cgroup\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283537 2377 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-etc-cni-netd\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283551 2377 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cni-path\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283564 2377 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-lib-modules\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.283607 kubelet[2377]: I1213 02:09:12.283579 2377 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hostproc\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.285872 kubelet[2377]: I1213 02:09:12.285835 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-config-path\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.285872 kubelet[2377]: I1213 02:09:12.285869 2377 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-net\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.286038 kubelet[2377]: I1213 02:09:12.285886 2377 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-cilium-run\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.286038 kubelet[2377]: I1213 02:09:12.285900 2377 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-clustermesh-secrets\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.286038 kubelet[2377]: I1213 02:09:12.285914 2377 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.286038 kubelet[2377]: I1213 02:09:12.285929 2377 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-hubble-tls\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.286038 kubelet[2377]: I1213 02:09:12.285943 2377 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qrjmv\" (UniqueName: \"kubernetes.io/projected/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-kube-api-access-qrjmv\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:12.286038 kubelet[2377]: I1213 02:09:12.285958 2377 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c445ed6-6ba6-4969-bd72-98a8aa29c77e-xtables-lock\") on node \"ci-3510.3.6-a-eca73107d2\" DevicePath \"\""
Dec 13 02:09:13.003552 kubelet[2377]: I1213 02:09:13.003511 2377 scope.go:117] "RemoveContainer" containerID="2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b"
Dec 13 02:09:13.006318 env[1405]: time="2024-12-13T02:09:13.006261989Z" level=info msg="RemoveContainer for \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\""
Dec 13 02:09:13.010963 systemd[1]: Removed slice kubepods-burstable-pod0c445ed6_6ba6_4969_bd72_98a8aa29c77e.slice.
Dec 13 02:09:13.015974 env[1405]: time="2024-12-13T02:09:13.015934478Z" level=info msg="RemoveContainer for \"2243d72b1774e309887461569b3a195b8c0904c0e0d56e566fb81c8f4f89bd0b\" returns successfully"
Dec 13 02:09:13.056691 kubelet[2377]: E1213 02:09:13.056641 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e" containerName="mount-cgroup"
Dec 13 02:09:13.056691 kubelet[2377]: I1213 02:09:13.056695 2377 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e" containerName="mount-cgroup"
Dec 13 02:09:13.056936 kubelet[2377]: E1213 02:09:13.056720 2377 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e" containerName="mount-cgroup"
Dec 13 02:09:13.056936 kubelet[2377]: I1213 02:09:13.056743 2377 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e" containerName="mount-cgroup"
Dec 13 02:09:13.063639 systemd[1]: Created slice kubepods-burstable-pod0d538bf7_1da1_41d6_bc45_7b46d8adceac.slice.
Dec 13 02:09:13.132278 kubelet[2377]: I1213 02:09:13.132233 2377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c445ed6-6ba6-4969-bd72-98a8aa29c77e" path="/var/lib/kubelet/pods/0c445ed6-6ba6-4969-bd72-98a8aa29c77e/volumes"
Dec 13 02:09:13.190854 kubelet[2377]: I1213 02:09:13.190810 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d538bf7-1da1-41d6-bc45-7b46d8adceac-clustermesh-secrets\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.190875 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-hostproc\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.190903 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-cni-path\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.190927 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-lib-modules\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.190947 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d538bf7-1da1-41d6-bc45-7b46d8adceac-cilium-config-path\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.190970 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-host-proc-sys-net\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.190992 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-cilium-run\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.191012 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-host-proc-sys-kernel\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191058 kubelet[2377]: I1213 02:09:13.191037 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d538bf7-1da1-41d6-bc45-7b46d8adceac-cilium-ipsec-secrets\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191444 kubelet[2377]: I1213 02:09:13.191060 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d538bf7-1da1-41d6-bc45-7b46d8adceac-hubble-tls\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191444 kubelet[2377]: I1213 02:09:13.191085 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-cilium-cgroup\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191444 kubelet[2377]: I1213 02:09:13.191108 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-etc-cni-netd\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191444 kubelet[2377]: I1213 02:09:13.191129 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-xtables-lock\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191444 kubelet[2377]: I1213 02:09:13.191154 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8k5\" (UniqueName: \"kubernetes.io/projected/0d538bf7-1da1-41d6-bc45-7b46d8adceac-kube-api-access-4j8k5\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.191444 kubelet[2377]: I1213 02:09:13.191181 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d538bf7-1da1-41d6-bc45-7b46d8adceac-bpf-maps\") pod \"cilium-gd2qz\" (UID: \"0d538bf7-1da1-41d6-bc45-7b46d8adceac\") " pod="kube-system/cilium-gd2qz"
Dec 13 02:09:13.368202 env[1405]: time="2024-12-13T02:09:13.368055722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gd2qz,Uid:0d538bf7-1da1-41d6-bc45-7b46d8adceac,Namespace:kube-system,Attempt:0,}"
Dec 13 02:09:13.408053 env[1405]: time="2024-12-13T02:09:13.407979554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:09:13.408053 env[1405]: time="2024-12-13T02:09:13.408014853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:09:13.408053 env[1405]: time="2024-12-13T02:09:13.408028852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:09:13.408468 env[1405]: time="2024-12-13T02:09:13.408420644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467 pid=4317 runtime=io.containerd.runc.v2
Dec 13 02:09:13.440040 systemd[1]: Started cri-containerd-3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467.scope.
Dec 13 02:09:13.466595 env[1405]: time="2024-12-13T02:09:13.466550880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gd2qz,Uid:0d538bf7-1da1-41d6-bc45-7b46d8adceac,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\""
Dec 13 02:09:13.470303 env[1405]: time="2024-12-13T02:09:13.470264599Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:09:13.503224 env[1405]: time="2024-12-13T02:09:13.503182983Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6\""
Dec 13 02:09:13.503864 env[1405]: time="2024-12-13T02:09:13.503819870Z" level=info msg="StartContainer for \"fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6\""
Dec 13 02:09:13.521032 systemd[1]: Started cri-containerd-fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6.scope.
Dec 13 02:09:13.552072 env[1405]: time="2024-12-13T02:09:13.550696550Z" level=info msg="StartContainer for \"fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6\" returns successfully"
Dec 13 02:09:13.557486 systemd[1]: cri-containerd-fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6.scope: Deactivated successfully.
Dec 13 02:09:13.601430 env[1405]: time="2024-12-13T02:09:13.601342649Z" level=info msg="shim disconnected" id=fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6
Dec 13 02:09:13.601430 env[1405]: time="2024-12-13T02:09:13.601429747Z" level=warning msg="cleaning up after shim disconnected" id=fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6 namespace=k8s.io
Dec 13 02:09:13.601776 env[1405]: time="2024-12-13T02:09:13.601445847Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:13.609652 env[1405]: time="2024-12-13T02:09:13.609616169Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4404 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:13.808372 kubelet[2377]: W1213 02:09:13.808304 2377 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c445ed6_6ba6_4969_bd72_98a8aa29c77e.slice/cri-containerd-d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569.scope WatchSource:0}: container "d37cd6dfa84c88898fd01176bcf536cfc850c23edf4c712eaf71a9b417db3569" in namespace "k8s.io": not found
Dec 13 02:09:14.011037 env[1405]: time="2024-12-13T02:09:14.010986942Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:09:14.045296 env[1405]: time="2024-12-13T02:09:14.045247297Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc\""
Dec 13 02:09:14.046043 env[1405]: time="2024-12-13T02:09:14.045984581Z" level=info msg="StartContainer for \"0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc\""
Dec 13 02:09:14.063056 systemd[1]: Started cri-containerd-0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc.scope.
Dec 13 02:09:14.093382 env[1405]: time="2024-12-13T02:09:14.091928083Z" level=info msg="StartContainer for \"0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc\" returns successfully"
Dec 13 02:09:14.101421 systemd[1]: cri-containerd-0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc.scope: Deactivated successfully.
Dec 13 02:09:14.130245 env[1405]: time="2024-12-13T02:09:14.130196951Z" level=info msg="shim disconnected" id=0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc
Dec 13 02:09:14.130544 env[1405]: time="2024-12-13T02:09:14.130388447Z" level=warning msg="cleaning up after shim disconnected" id=0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc namespace=k8s.io
Dec 13 02:09:14.130544 env[1405]: time="2024-12-13T02:09:14.130408546Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:14.137909 env[1405]: time="2024-12-13T02:09:14.137875184Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4465 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:14.310709 systemd[1]: run-containerd-runc-k8s.io-3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467-runc.0XIkFd.mount: Deactivated successfully.
Dec 13 02:09:15.016503 env[1405]: time="2024-12-13T02:09:15.014486234Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:09:15.052834 env[1405]: time="2024-12-13T02:09:15.052775103Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f\""
Dec 13 02:09:15.054704 env[1405]: time="2024-12-13T02:09:15.053550486Z" level=info msg="StartContainer for \"79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f\""
Dec 13 02:09:15.082229 systemd[1]: Started cri-containerd-79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f.scope.
Dec 13 02:09:15.112149 systemd[1]: cri-containerd-79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f.scope: Deactivated successfully.
Dec 13 02:09:15.114830 env[1405]: time="2024-12-13T02:09:15.114783956Z" level=info msg="StartContainer for \"79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f\" returns successfully"
Dec 13 02:09:15.146371 env[1405]: time="2024-12-13T02:09:15.146295572Z" level=info msg="shim disconnected" id=79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f
Dec 13 02:09:15.146371 env[1405]: time="2024-12-13T02:09:15.146371370Z" level=warning msg="cleaning up after shim disconnected" id=79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f namespace=k8s.io
Dec 13 02:09:15.146679 env[1405]: time="2024-12-13T02:09:15.146384170Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:15.154290 env[1405]: time="2024-12-13T02:09:15.154250599Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4525 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:15.310787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f-rootfs.mount: Deactivated successfully.
Dec 13 02:09:16.020254 env[1405]: time="2024-12-13T02:09:16.020200291Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:09:16.053950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766960931.mount: Deactivated successfully.
Dec 13 02:09:16.071678 env[1405]: time="2024-12-13T02:09:16.071633875Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f\""
Dec 13 02:09:16.073739 env[1405]: time="2024-12-13T02:09:16.072187563Z" level=info msg="StartContainer for \"174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f\""
Dec 13 02:09:16.094844 systemd[1]: Started cri-containerd-174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f.scope.
Dec 13 02:09:16.119192 systemd[1]: cri-containerd-174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f.scope: Deactivated successfully.
Dec 13 02:09:16.122742 env[1405]: time="2024-12-13T02:09:16.122700166Z" level=info msg="StartContainer for \"174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f\" returns successfully"
Dec 13 02:09:16.151729 env[1405]: time="2024-12-13T02:09:16.151677237Z" level=info msg="shim disconnected" id=174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f
Dec 13 02:09:16.151729 env[1405]: time="2024-12-13T02:09:16.151729136Z" level=warning msg="cleaning up after shim disconnected" id=174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f namespace=k8s.io
Dec 13 02:09:16.152022 env[1405]: time="2024-12-13T02:09:16.151740736Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:16.159118 env[1405]: time="2024-12-13T02:09:16.159082777Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4580 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:16.310550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f-rootfs.mount: Deactivated successfully.
Dec 13 02:09:16.355238 kubelet[2377]: I1213 02:09:16.355178 2377 setters.go:600] "Node became not ready" node="ci-3510.3.6-a-eca73107d2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:09:16Z","lastTransitionTime":"2024-12-13T02:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:09:16.601835 kubelet[2377]: E1213 02:09:16.601708 2377 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:09:16.921976 kubelet[2377]: W1213 02:09:16.921933 2377 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d538bf7_1da1_41d6_bc45_7b46d8adceac.slice/cri-containerd-fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6.scope WatchSource:0}: task fe737310b7e037714bef6e53438ab8f34370955d9e31d9caa9a5e92ed18e09d6 not found: not found
Dec 13 02:09:17.026456 env[1405]: time="2024-12-13T02:09:17.025426272Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:09:17.072715 env[1405]: time="2024-12-13T02:09:17.072668447Z" level=info msg="CreateContainer within sandbox \"3cd2fc65d314c5394d6406f9dde4cdaeac2a7c5bcdd8d2d623d0306b87fc8467\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f\""
Dec 13 02:09:17.073594 env[1405]: time="2024-12-13T02:09:17.073558828Z" level=info msg="StartContainer for \"5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f\""
Dec 13 02:09:17.110000 systemd[1]: Started cri-containerd-5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f.scope.
Dec 13 02:09:17.180514 env[1405]: time="2024-12-13T02:09:17.180406210Z" level=info msg="StartContainer for \"5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f\" returns successfully"
Dec 13 02:09:17.311507 systemd[1]: run-containerd-runc-k8s.io-5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f-runc.cbvL6p.mount: Deactivated successfully.
Dec 13 02:09:17.708373 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:09:18.730257 systemd[1]: run-containerd-runc-k8s.io-5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f-runc.El2u9B.mount: Deactivated successfully.
Dec 13 02:09:20.031257 kubelet[2377]: W1213 02:09:20.030613 2377 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d538bf7_1da1_41d6_bc45_7b46d8adceac.slice/cri-containerd-0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc.scope WatchSource:0}: task 0fe85dc575037e065e65f5d7e798ad51b001c68a91fa934a5a32f7a72161a1bc not found: not found
Dec 13 02:09:20.417063 systemd-networkd[1532]: lxc_health: Link UP
Dec 13 02:09:20.426384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:09:20.426974 systemd-networkd[1532]: lxc_health: Gained carrier
Dec 13 02:09:21.402133 kubelet[2377]: I1213 02:09:21.402067 2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gd2qz" podStartSLOduration=8.402048014 podStartE2EDuration="8.402048014s" podCreationTimestamp="2024-12-13 02:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:18.044771858 +0000 UTC m=+227.004236044" watchObservedRunningTime="2024-12-13 02:09:21.402048014 +0000 UTC m=+230.361512200"
Dec 13 02:09:21.685612 systemd-networkd[1532]: lxc_health: Gained IPv6LL
Dec 13 02:09:23.127886 systemd[1]: run-containerd-runc-k8s.io-5cd7908a62a5bb265a9d2f48fbe54d8ed15dc3b86bf0aca695aa5ebe6e36bc7f-runc.tGm6ci.mount: Deactivated successfully.
Dec 13 02:09:23.137820 kubelet[2377]: W1213 02:09:23.137692 2377 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d538bf7_1da1_41d6_bc45_7b46d8adceac.slice/cri-containerd-79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f.scope WatchSource:0}: task 79d093b40bf23f9890a8f4cecfcd54312786b559f2022ec6e9b910ced4866f2f not found: not found
Dec 13 02:09:26.245611 kubelet[2377]: W1213 02:09:26.245558 2377 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d538bf7_1da1_41d6_bc45_7b46d8adceac.slice/cri-containerd-174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f.scope WatchSource:0}: task 174343ae26946d334afa76aeed4bc22363ec342bd3601eeb1be84cf0af37251f not found: not found
Dec 13 02:09:27.545228 sshd[4263]: pam_unix(sshd:session): session closed for user core
Dec 13 02:09:27.548746 systemd[1]: sshd@24-10.200.8.15:22-10.200.16.10:55904.service: Deactivated successfully.
Dec 13 02:09:27.549682 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 02:09:27.550538 systemd-logind[1373]: Session 27 logged out. Waiting for processes to exit.
Dec 13 02:09:27.551421 systemd-logind[1373]: Removed session 27.
Dec 13 02:09:31.141264 env[1405]: time="2024-12-13T02:09:31.141217789Z" level=info msg="StopPodSandbox for \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\""
Dec 13 02:09:31.141672 env[1405]: time="2024-12-13T02:09:31.141334087Z" level=info msg="TearDown network for sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" successfully"
Dec 13 02:09:31.141672 env[1405]: time="2024-12-13T02:09:31.141393085Z" level=info msg="StopPodSandbox for \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" returns successfully"
Dec 13 02:09:31.142075 env[1405]: time="2024-12-13T02:09:31.142039172Z" level=info msg="RemovePodSandbox for \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\""
Dec 13 02:09:31.142194 env[1405]: time="2024-12-13T02:09:31.142078571Z" level=info msg="Forcibly stopping sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\""
Dec 13 02:09:31.142194 env[1405]: time="2024-12-13T02:09:31.142159969Z" level=info msg="TearDown network for sandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" successfully"
Dec 13 02:09:31.152977 env[1405]: time="2024-12-13T02:09:31.152939737Z" level=info msg="RemovePodSandbox \"2a6ef78853c680bfbfb7e895c3c81da209287f261cf7ea2117d8ad00ebf1a35f\" returns successfully"
Dec 13 02:09:31.153342 env[1405]: time="2024-12-13T02:09:31.153315029Z" level=info msg="StopPodSandbox for \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\""
Dec 13 02:09:31.153453 env[1405]: time="2024-12-13T02:09:31.153405927Z" level=info msg="TearDown network for sandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" successfully"
Dec 13 02:09:31.153453 env[1405]: time="2024-12-13T02:09:31.153444826Z" level=info msg="StopPodSandbox for \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" returns successfully"
Dec 13 02:09:31.153801 env[1405]: time="2024-12-13T02:09:31.153770419Z" level=info msg="RemovePodSandbox for \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\""
Dec 13 02:09:31.153881 env[1405]: time="2024-12-13T02:09:31.153807418Z" level=info msg="Forcibly stopping sandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\""
Dec 13 02:09:31.153933 env[1405]: time="2024-12-13T02:09:31.153887416Z" level=info msg="TearDown network for sandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" successfully"
Dec 13 02:09:31.162145 env[1405]: time="2024-12-13T02:09:31.162116339Z" level=info msg="RemovePodSandbox \"62df251edbf4230d66ef2d41352fa15150312fc9c552019f0a8d23d26a9c46e5\" returns successfully"
Dec 13 02:09:31.162481 env[1405]: time="2024-12-13T02:09:31.162450732Z" level=info msg="StopPodSandbox for \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\""
Dec 13 02:09:31.162610 env[1405]: time="2024-12-13T02:09:31.162565129Z" level=info msg="TearDown network for sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" successfully"
Dec 13 02:09:31.162671 env[1405]: time="2024-12-13T02:09:31.162607728Z" level=info msg="StopPodSandbox for \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" returns successfully"
Dec 13 02:09:31.162999 env[1405]: time="2024-12-13T02:09:31.162967121Z" level=info msg="RemovePodSandbox for \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\""
Dec 13 02:09:31.163085 env[1405]: time="2024-12-13T02:09:31.162998820Z" level=info msg="Forcibly stopping sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\""
Dec 13 02:09:31.163085 env[1405]: time="2024-12-13T02:09:31.163074518Z" level=info msg="TearDown network for sandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" successfully"
Dec 13 02:09:31.171283 env[1405]: time="2024-12-13T02:09:31.171255742Z" level=info msg="RemovePodSandbox \"2b3f17a25bcb763e87de32e2ed326be4fee23647e46fc76872002e4c070f02d1\" returns successfully"