Dec 13 14:30:53.053744 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:30:53.053772 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:30:53.053782 kernel: BIOS-provided physical RAM map: Dec 13 14:30:53.053791 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:30:53.053797 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 14:30:53.053805 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 14:30:53.053814 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 14:30:53.053822 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 14:30:53.053829 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 14:30:53.053836 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 14:30:53.053844 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 14:30:53.053849 kernel: printk: bootconsole [earlyser0] enabled Dec 13 14:30:53.053857 kernel: NX (Execute Disable) protection: active Dec 13 14:30:53.053864 kernel: efi: EFI v2.70 by Microsoft Dec 13 14:30:53.053877 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018 Dec 13 14:30:53.053884 kernel: random: crng init done Dec 13 14:30:53.053891 kernel: SMBIOS 3.1.0 present. 
Dec 13 14:30:53.053899 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 14:30:53.053908 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 14:30:53.053915 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 14:30:53.053921 kernel: Hyper-V Host Build:20348-10.0-1-0.1633 Dec 13 14:30:53.053930 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 14:30:53.053940 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 14:30:53.053949 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 14:30:53.053969 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 14:30:53.053976 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 14:30:53.053985 kernel: tsc: Detected 2593.905 MHz processor Dec 13 14:30:53.053992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:30:53.054002 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:30:53.054010 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 14:30:53.054019 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:30:53.054025 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 14:30:53.054037 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 14:30:53.054045 kernel: Using GB pages for direct mapping Dec 13 14:30:53.054054 kernel: Secure boot disabled Dec 13 14:30:53.054060 kernel: ACPI: Early table checksum verification disabled Dec 13 14:30:53.054068 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 14:30:53.054077 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054085 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054093 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 14:30:53.054107 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 14:30:53.054115 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054125 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054132 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054141 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054149 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054161 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054168 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:30:53.054178 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 14:30:53.054186 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 14:30:53.054195 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 14:30:53.054203 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 14:30:53.054211 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 14:30:53.054220 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 14:30:53.054231 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Dec 13 14:30:53.054238 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 14:30:53.054246 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 14:30:53.054255 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 14:30:53.054264 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:30:53.054272 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:30:53.054279 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 14:30:53.054289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 14:30:53.054297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 14:30:53.054308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 14:30:53.054315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 14:30:53.054326 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 14:30:53.054334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 14:30:53.054343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 14:30:53.054349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 14:30:53.054360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 14:30:53.054368 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 14:30:53.054377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 14:30:53.054386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 14:30:53.054396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 14:30:53.054404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 14:30:53.054412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 14:30:53.054419 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 14:30:53.054429 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 14:30:53.054437 kernel: Zone ranges: Dec 13 14:30:53.054446 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:30:53.054454 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 14:30:53.054466 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:30:53.054474 kernel: Movable zone start for each node Dec 13 14:30:53.054483 kernel: Early memory node ranges Dec 13 14:30:53.054489 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 14:30:53.054499 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 14:30:53.054507 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 14:30:53.054517 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:30:53.054523 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 14:30:53.054532 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:30:53.054543 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 14:30:53.054552 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 14:30:53.054559 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 14:30:53.054569 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 14:30:53.054576 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 
14:30:53.054586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:30:53.054593 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:30:53.054602 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 14:30:53.054610 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:30:53.054619 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 14:30:53.054626 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 14:30:53.054633 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:30:53.054640 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:30:53.054647 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:30:53.054653 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:30:53.054662 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:30:53.054670 kernel: Hyper-V: PV spinlocks enabled Dec 13 14:30:53.054677 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:30:53.054686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 14:30:53.054693 kernel: Policy zone: Normal Dec 13 14:30:53.054701 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:30:53.054708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:30:53.054715 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:30:53.054721 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:30:53.054728 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:30:53.054735 kernel: Memory: 8079088K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 308112K reserved, 0K cma-reserved) Dec 13 14:30:53.054746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:30:53.054755 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:30:53.054770 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:30:53.054781 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:30:53.054789 kernel: rcu: RCU event tracing is enabled. Dec 13 14:30:53.054796 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:30:53.054803 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:30:53.054814 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:30:53.054821 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:30:53.054831 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:30:53.054839 kernel: Using NULL legacy PIC Dec 13 14:30:53.054850 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 14:30:53.054859 kernel: Console: colour dummy device 80x25 Dec 13 14:30:53.054870 kernel: printk: console [tty1] enabled Dec 13 14:30:53.054877 kernel: printk: console [ttyS0] enabled Dec 13 14:30:53.054887 kernel: printk: bootconsole [earlyser0] disabled Dec 13 14:30:53.054898 kernel: ACPI: Core revision 20210730 Dec 13 14:30:53.054907 kernel: Failed to register legacy timer interrupt Dec 13 14:30:53.054915 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:30:53.054926 kernel: Hyper-V: Using IPI hypercalls Dec 13 14:30:53.054935 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Dec 13 14:30:53.054944 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 14:30:53.061995 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 14:30:53.062023 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:30:53.062036 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:30:53.062050 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:30:53.062068 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:30:53.062082 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 14:30:53.062097 kernel: RETBleed: Vulnerable Dec 13 14:30:53.062110 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:30:53.062123 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:30:53.062136 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:30:53.062150 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 14:30:53.062162 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:30:53.062175 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:30:53.062189 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:30:53.062206 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 14:30:53.062219 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 14:30:53.062232 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 14:30:53.062245 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:30:53.062259 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 14:30:53.062272 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 14:30:53.062285 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 14:30:53.062299 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 14:30:53.062311 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:30:53.062325 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:30:53.062338 kernel: LSM: Security Framework initializing Dec 13 14:30:53.062351 kernel: SELinux: Initializing. 
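The calibration figures above are internally consistent: a skipped delay-loop calibration takes loops_per_jiffy straight from the timer frequency, and BogoMIPS = loops_per_jiffy / (500000 / HZ). With the logged lpj=2593905 this implies HZ=1000 and matches both the 2593.905 MHz TSC reading and the per-CPU 5187.81 BogoMIPS value. A quick illustrative check (the formula is the standard kernel one; the snippet itself is not from this log):

    # Illustrative check of the calibration figures logged above.
    HZ = 1000                       # implied by the logged lpj and BogoMIPS values
    lpj = 2593905                   # loops_per_jiffy from "(lpj=2593905)"

    mhz = lpj * HZ / 1_000_000      # 2593.905, matching "Detected 2593.905 MHz processor"
    bogomips = lpj / (500000 / HZ)  # 5187.81, matching the per-CPU BogoMIPS value

    print(f"{mhz:.3f} MHz, {bogomips:.2f} BogoMIPS/CPU, {2 * bogomips:.2f} total for 2 CPUs")
    # 10375.62 total, as reported later when the second CPU is brought up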
Dec 13 14:30:53.062368 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:30:53.062380 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:30:53.062394 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 14:30:53.062407 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 14:30:53.062420 kernel: signal: max sigframe size: 3632 Dec 13 14:30:53.062434 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:30:53.062447 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:30:53.062460 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:30:53.062474 kernel: x86: Booting SMP configuration: Dec 13 14:30:53.062487 kernel: .... node #0, CPUs: #1 Dec 13 14:30:53.062505 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 14:30:53.062521 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 14:30:53.062534 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:30:53.062548 kernel: smpboot: Max logical packages: 1 Dec 13 14:30:53.062562 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Dec 13 14:30:53.062575 kernel: devtmpfs: initialized Dec 13 14:30:53.062588 kernel: x86/mm: Memory block size: 128MB Dec 13 14:30:53.062601 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 14:30:53.062618 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:30:53.062632 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:30:53.062645 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:30:53.062670 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:30:53.062684 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:30:53.062698 kernel: audit: type=2000 audit(1734100252.023:1): state=initialized audit_enabled=0 res=1 Dec 13 14:30:53.062712 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:30:53.062725 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:30:53.062739 kernel: cpuidle: using governor menu Dec 13 14:30:53.062756 kernel: ACPI: bus type PCI registered Dec 13 14:30:53.062769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:30:53.062783 kernel: dca service started, version 1.12.1 Dec 13 14:30:53.062797 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:30:53.062809 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:30:53.062824 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:30:53.062837 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:30:53.062849 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:30:53.062863 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:30:53.062879 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:30:53.062892 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:30:53.062906 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:30:53.062920 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:30:53.062934 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:30:53.062947 kernel: ACPI: Interpreter enabled Dec 13 14:30:53.062976 kernel: ACPI: PM: (supports S0 S5) Dec 13 14:30:53.062990 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:30:53.063003 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:30:53.063020 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 14:30:53.063033 kernel: iommu: Default domain type: Translated Dec 13 14:30:53.063047 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:30:53.063061 kernel: vgaarb: loaded Dec 13 14:30:53.063074 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:30:53.063087 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:30:53.063100 kernel: PTP clock support registered Dec 13 14:30:53.063114 kernel: Registered efivars operations Dec 13 14:30:53.063128 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:30:53.063141 kernel: PCI: System does not support PCI Dec 13 14:30:53.063158 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 14:30:53.063171 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:30:53.063184 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:30:53.063197 kernel: pnp: PnP ACPI init Dec 13 14:30:53.063210 kernel: pnp: PnP ACPI: found 3 devices Dec 13 14:30:53.063224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:30:53.063237 kernel: NET: Registered PF_INET protocol family Dec 13 14:30:53.063249 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:30:53.063263 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:30:53.063275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:30:53.063287 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:30:53.063299 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:30:53.063311 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:30:53.063323 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:30:53.063335 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:30:53.063349 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:30:53.063361 kernel: NET: Registered PF_XDP protocol family Dec 13 14:30:53.063377 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:30:53.063390 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:30:53.063402 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Dec 13 14:30:53.063415 kernel: 
RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:30:53.063427 kernel: Initialise system trusted keyrings Dec 13 14:30:53.063440 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:30:53.063454 kernel: Key type asymmetric registered Dec 13 14:30:53.063468 kernel: Asymmetric key parser 'x509' registered Dec 13 14:30:53.063484 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:30:53.063502 kernel: io scheduler mq-deadline registered Dec 13 14:30:53.063516 kernel: io scheduler kyber registered Dec 13 14:30:53.063529 kernel: io scheduler bfq registered Dec 13 14:30:53.063543 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:30:53.063558 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:30:53.063573 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:30:53.063587 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:30:53.063601 kernel: i8042: PNP: No PS/2 controller found. Dec 13 14:30:53.063818 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 14:30:53.063941 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T14:30:52 UTC (1734100252) Dec 13 14:30:53.064060 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 14:30:53.064076 kernel: fail to initialize ptp_kvm Dec 13 14:30:53.064088 kernel: intel_pstate: CPU model not supported Dec 13 14:30:53.064101 kernel: efifb: probing for efifb Dec 13 14:30:53.064113 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 14:30:53.064125 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 14:30:53.064137 kernel: efifb: scrolling: redraw Dec 13 14:30:53.064153 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:30:53.064166 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:30:53.064179 kernel: fb0: EFI VGA frame buffer device Dec 13 14:30:53.064191 kernel: pstore: Registered efi as persistent store backend Dec 13 14:30:53.064204 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:30:53.064216 kernel: Segment Routing with IPv6 Dec 13 14:30:53.064228 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:30:53.064241 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:30:53.064254 kernel: Key type dns_resolver registered Dec 13 14:30:53.064268 kernel: IPI shorthand broadcast: enabled Dec 13 14:30:53.064280 kernel: sched_clock: Marking stable (735129300, 20745500)->(932293100, -176418300) Dec 13 14:30:53.064293 kernel: registered taskstats version 1 Dec 13 14:30:53.064305 kernel: Loading compiled-in X.509 certificates Dec 13 14:30:53.064317 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:30:53.064329 kernel: Key type .fscrypt registered Dec 13 14:30:53.064342 kernel: Key type fscrypt-provisioning registered Dec 13 14:30:53.064355 kernel: pstore: Using crash dump compression: deflate Dec 13 14:30:53.064370 kernel: ima: No TPM chip found, activating TPM-bypass! 
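The rtc_cmos line above pins the boot to epoch second 1734100252 (2024-12-13T14:30:52 UTC), which is the same clock the audit records use: audit(1734100252.023:1) earlier in the log is 23 ms after that instant, so audit timestamps can be correlated directly with the journald wall-clock times. A small illustrative conversion:

    from datetime import datetime, timezone

    # "rtc_cmos 00:02: setting system clock to 2024-12-13T14:30:52 UTC (1734100252)"
    epoch = 1734100252
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2024-12-13T14:30:52+00:00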
Dec 13 14:30:53.064382 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:30:53.064396 kernel: ima: No architecture policies found Dec 13 14:30:53.064409 kernel: clk: Disabling unused clocks Dec 13 14:30:53.064422 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:30:53.064435 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:30:53.064447 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:30:53.064460 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:30:53.064472 kernel: Run /init as init process Dec 13 14:30:53.064485 kernel: with arguments: Dec 13 14:30:53.064501 kernel: /init Dec 13 14:30:53.064514 kernel: with environment: Dec 13 14:30:53.064526 kernel: HOME=/ Dec 13 14:30:53.064538 kernel: TERM=linux Dec 13 14:30:53.064551 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:30:53.064566 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:30:53.064583 systemd[1]: Detected virtualization microsoft. Dec 13 14:30:53.064599 systemd[1]: Detected architecture x86-64. Dec 13 14:30:53.064612 systemd[1]: Running in initrd. Dec 13 14:30:53.064626 systemd[1]: No hostname configured, using default hostname. Dec 13 14:30:53.064639 systemd[1]: Hostname set to . Dec 13 14:30:53.064654 systemd[1]: Initializing machine ID from random generator. Dec 13 14:30:53.064669 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:30:53.064683 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:30:53.064697 systemd[1]: Reached target cryptsetup.target. Dec 13 14:30:53.064710 systemd[1]: Reached target paths.target. Dec 13 14:30:53.064727 systemd[1]: Reached target slices.target. Dec 13 14:30:53.064741 systemd[1]: Reached target swap.target. Dec 13 14:30:53.064754 systemd[1]: Reached target timers.target. Dec 13 14:30:53.064769 systemd[1]: Listening on iscsid.socket. Dec 13 14:30:53.064782 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:30:53.064796 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:30:53.064810 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:30:53.064826 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:30:53.064840 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:30:53.064855 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:30:53.064868 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:30:53.064881 systemd[1]: Reached target sockets.target. Dec 13 14:30:53.064895 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:30:53.064911 systemd[1]: Finished network-cleanup.service. Dec 13 14:30:53.064925 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:30:53.064939 systemd[1]: Starting systemd-journald.service... Dec 13 14:30:53.064966 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:30:53.064980 systemd[1]: Starting systemd-resolved.service... Dec 13 14:30:53.064995 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:30:53.065015 systemd-journald[183]: Journal started Dec 13 14:30:53.065093 systemd-journald[183]: Runtime Journal (/run/log/journal/ce6de31552144095be2a1ed1c6e46485) is 8.0M, max 159.0M, 151.0M free. 
Dec 13 14:30:53.058907 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 14:30:53.073975 systemd[1]: Started systemd-journald.service. Dec 13 14:30:53.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.107442 kernel: audit: type=1130 audit(1734100253.078:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.079560 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:30:53.107728 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:30:53.111612 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:30:53.129269 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:30:53.126234 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:30:53.129304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:30:53.143747 systemd-resolved[185]: Positive Trust Anchors: Dec 13 14:30:53.145128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:30:53.150747 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:30:53.154226 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:30:53.170586 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:30:53.187197 kernel: audit: type=1130 audit(1734100253.106:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.174205 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:30:53.202928 kernel: Bridge firewalling registered Dec 13 14:30:53.203077 kernel: audit: type=1130 audit(1734100253.110:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.191163 systemd-resolved[185]: Defaulting to hostname 'linux'. Dec 13 14:30:53.251731 kernel: audit: type=1130 audit(1734100253.124:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:53.251770 kernel: audit: type=1130 audit(1734100253.147:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.251786 kernel: audit: type=1130 audit(1734100253.172:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.204787 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 14:30:53.253965 dracut-cmdline[200]: dracut-dracut-053 Dec 13 14:30:53.253965 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:30:53.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.231739 systemd[1]: Started systemd-resolved.service. Dec 13 14:30:53.282250 kernel: audit: type=1130 audit(1734100253.232:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.234018 systemd[1]: Reached target nss-lookup.target. Dec 13 14:30:53.308978 kernel: SCSI subsystem initialized Dec 13 14:30:53.335065 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:30:53.335146 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:30:53.335165 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:30:53.347037 systemd-modules-load[184]: Inserted module 'dm_multipath' Dec 13 14:30:53.348925 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:30:53.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.354735 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:30:53.376995 kernel: audit: type=1130 audit(1734100253.352:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:53.377030 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:30:53.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.380421 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:30:53.395993 kernel: audit: type=1130 audit(1734100253.382:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.403973 kernel: iscsi: registered transport (tcp) Dec 13 14:30:53.430415 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:30:53.430498 kernel: QLogic iSCSI HBA Driver Dec 13 14:30:53.460387 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:30:53.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.463646 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:30:53.516987 kernel: raid6: avx512x4 gen() 18095 MB/s Dec 13 14:30:53.536966 kernel: raid6: avx512x4 xor() 6722 MB/s Dec 13 14:30:53.556967 kernel: raid6: avx512x2 gen() 18060 MB/s Dec 13 14:30:53.587974 kernel: raid6: avx512x2 xor() 29715 MB/s Dec 13 14:30:53.606964 kernel: raid6: avx512x1 gen() 17966 MB/s Dec 13 14:30:53.627969 kernel: raid6: avx512x1 xor() 26609 MB/s Dec 13 14:30:53.647963 kernel: raid6: avx2x4 gen() 17954 MB/s Dec 13 14:30:53.667964 kernel: raid6: avx2x4 xor() 6649 MB/s Dec 13 14:30:53.688968 kernel: raid6: avx2x2 gen() 18038 MB/s Dec 13 14:30:53.708966 kernel: raid6: avx2x2 xor() 21763 MB/s Dec 13 14:30:53.728963 kernel: raid6: avx2x1 gen() 13668 MB/s Dec 13 14:30:53.748968 kernel: raid6: avx2x1 xor() 18996 MB/s Dec 13 14:30:53.768964 kernel: raid6: sse2x4 gen() 11569 MB/s Dec 13 14:30:53.788967 kernel: raid6: sse2x4 xor() 5942 MB/s Dec 13 14:30:53.808966 kernel: raid6: sse2x2 gen() 12871 MB/s Dec 13 14:30:53.827962 kernel: raid6: sse2x2 xor() 7455 MB/s Dec 13 14:30:53.847964 kernel: raid6: sse2x1 gen() 11469 MB/s Dec 13 14:30:53.871507 kernel: raid6: sse2x1 xor() 5794 MB/s Dec 13 14:30:53.871536 kernel: raid6: using algorithm avx512x4 gen() 18095 MB/s Dec 13 14:30:53.871555 kernel: raid6: .... xor() 6722 MB/s, rmw enabled Dec 13 14:30:53.878471 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:30:53.893975 kernel: xor: automatically using best checksumming function avx Dec 13 14:30:53.989977 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:30:53.998718 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:30:54.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:54.002000 audit: BPF prog-id=7 op=LOAD Dec 13 14:30:54.002000 audit: BPF prog-id=8 op=LOAD Dec 13 14:30:54.003869 systemd[1]: Starting systemd-udevd.service... Dec 13 14:30:54.018579 systemd-udevd[383]: Using default interface naming scheme 'v252'. Dec 13 14:30:54.023301 systemd[1]: Started systemd-udevd.service. Dec 13 14:30:54.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:54.032376 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:30:54.057806 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Dec 13 14:30:54.088065 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:30:54.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:54.093448 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:30:54.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:54.128403 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:30:54.186979 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:30:54.191979 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 14:30:54.204976 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 14:30:54.230333 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 14:30:54.250980 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:30:54.257935 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 14:30:54.258015 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 14:30:54.271792 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 14:30:54.271862 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 14:30:54.272158 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:30:54.277976 kernel: AES CTR mode by8 optimization enabled Dec 13 14:30:54.283016 kernel: scsi host0: storvsc_host_t Dec 13 14:30:54.290191 kernel: scsi host1: storvsc_host_t Dec 13 14:30:54.290288 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 14:30:54.290971 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 14:30:54.303137 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 14:30:54.325892 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 14:30:54.337389 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:30:54.337409 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 14:30:54.352142 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 14:30:54.352330 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 14:30:54.352495 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 14:30:54.352654 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 14:30:54.352806 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 14:30:54.352975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:30:54.352996 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 14:30:54.422983 kernel: hv_netvsc 7c1e5234-1847-7c1e-5234-18477c1e5234 eth0: VF slot 1 added Dec 13 14:30:54.433833 kernel: hv_vmbus: registering driver hv_pci Dec 13 14:30:54.440975 kernel: hv_pci 8661688b-6d30-4873-9c30-8ccec478bc7c: PCI VMBus probing: Using version 0x10004 Dec 13 14:30:54.535313 kernel: hv_pci 8661688b-6d30-4873-9c30-8ccec478bc7c: PCI host bridge to bus 6d30:00 Dec 13 14:30:54.535510 kernel: pci_bus 6d30:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 14:30:54.535683 kernel: pci_bus 6d30:00: No busn resource found for root bus, will use 
[bus 00-ff] Dec 13 14:30:54.535838 kernel: pci 6d30:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 14:30:54.536117 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (447) Dec 13 14:30:54.536146 kernel: pci 6d30:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:30:54.536305 kernel: pci 6d30:00:02.0: enabling Extended Tags Dec 13 14:30:54.536459 kernel: pci 6d30:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6d30:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 14:30:54.536606 kernel: pci_bus 6d30:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 14:30:54.536746 kernel: pci 6d30:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:30:54.495197 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:30:54.505063 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:30:54.577369 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:30:54.589462 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:30:54.595349 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:30:54.602702 systemd[1]: Starting disk-uuid.service... Dec 13 14:30:54.616973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:30:54.649984 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:30:54.678979 kernel: mlx5_core 6d30:00:02.0: firmware version: 14.30.5000 Dec 13 14:30:54.959492 kernel: mlx5_core 6d30:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 14:30:54.959691 kernel: mlx5_core 6d30:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 14:30:54.959842 kernel: mlx5_core 6d30:00:02.0: mlx5e_tc_post_act_init:40:(pid 191): firmware level support is missing Dec 13 14:30:54.959945 kernel: hv_netvsc 7c1e5234-1847-7c1e-5234-18477c1e5234 eth0: VF registering: eth1 Dec 13 14:30:54.960104 kernel: mlx5_core 6d30:00:02.0 eth1: joined to eth0 Dec 13 14:30:54.966973 kernel: mlx5_core 6d30:00:02.0 enP27952s1: renamed from eth1 Dec 13 14:30:55.636052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:30:55.638291 disk-uuid[552]: The operation has completed successfully. Dec 13 14:30:55.720117 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:30:55.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:55.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:55.720227 systemd[1]: Finished disk-uuid.service. Dec 13 14:30:55.723670 systemd[1]: Starting verity-setup.service... Dec 13 14:30:55.752979 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:30:55.838030 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:30:55.841971 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:30:55.847513 systemd[1]: Finished verity-setup.service. Dec 13 14:30:55.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:55.921987 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Dec 13 14:30:55.922325 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:30:55.925886 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:30:55.929908 systemd[1]: Starting ignition-setup.service... Dec 13 14:30:55.935528 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:30:55.951429 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:55.951480 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:30:55.951491 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:30:55.987622 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:30:56.011211 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:30:56.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.015000 audit: BPF prog-id=9 op=LOAD Dec 13 14:30:56.016792 systemd[1]: Starting systemd-networkd.service... Dec 13 14:30:56.039444 systemd[1]: Finished ignition-setup.service. Dec 13 14:30:56.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.044588 systemd-networkd[833]: lo: Link UP Dec 13 14:30:56.044808 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:30:56.046505 systemd-networkd[833]: lo: Gained carrier Dec 13 14:30:56.047415 systemd-networkd[833]: Enumeration completed Dec 13 14:30:56.051123 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:30:56.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.053440 systemd[1]: Started systemd-networkd.service. Dec 13 14:30:56.055715 systemd[1]: Reached target network.target. Dec 13 14:30:56.065207 systemd[1]: Starting iscsiuio.service... Dec 13 14:30:56.074055 systemd[1]: Started iscsiuio.service. Dec 13 14:30:56.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.078718 systemd[1]: Starting iscsid.service... Dec 13 14:30:56.085345 iscsid[840]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:30:56.085345 iscsid[840]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:30:56.085345 iscsid[840]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:30:56.085345 iscsid[840]: If using hardware iscsi like qla4xxx this message can be ignored. 
Dec 13 14:30:56.085345 iscsid[840]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:30:56.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.111547 iscsid[840]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:30:56.103015 systemd[1]: Started iscsid.service. Dec 13 14:30:56.108159 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:30:56.124997 kernel: mlx5_core 6d30:00:02.0 enP27952s1: Link up Dec 13 14:30:56.126167 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:30:56.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.130161 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:30:56.134076 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:30:56.138490 systemd[1]: Reached target remote-fs.target. Dec 13 14:30:56.143347 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:30:56.155035 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:30:56.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.179606 kernel: hv_netvsc 7c1e5234-1847-7c1e-5234-18477c1e5234 eth0: Data path switched to VF: enP27952s1 Dec 13 14:30:56.179893 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:30:56.180073 systemd-networkd[833]: enP27952s1: Link UP Dec 13 14:30:56.180414 systemd-networkd[833]: eth0: Link UP Dec 13 14:30:56.181333 systemd-networkd[833]: eth0: Gained carrier Dec 13 14:30:56.188488 systemd-networkd[833]: enP27952s1: Gained carrier Dec 13 14:30:56.220064 systemd-networkd[833]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:30:56.866809 ignition[836]: Ignition 2.14.0 Dec 13 14:30:56.866828 ignition[836]: Stage: fetch-offline Dec 13 14:30:56.866921 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:56.866978 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:56.922021 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:56.922215 ignition[836]: parsed url from cmdline: "" Dec 13 14:30:56.922220 ignition[836]: no config URL provided Dec 13 14:30:56.922226 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:30:56.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:56.922235 ignition[836]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:30:56.928204 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:30:56.922241 ignition[836]: failed to fetch config: resource requires networking Dec 13 14:30:56.929424 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:30:56.924942 ignition[836]: Ignition finished successfully Dec 13 14:30:56.947163 ignition[859]: Ignition 2.14.0 Dec 13 14:30:56.947174 ignition[859]: Stage: fetch Dec 13 14:30:56.947324 ignition[859]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:56.947358 ignition[859]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:56.957758 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:56.957934 ignition[859]: parsed url from cmdline: "" Dec 13 14:30:56.957938 ignition[859]: no config URL provided Dec 13 14:30:56.957943 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:30:56.957977 ignition[859]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:30:56.958020 ignition[859]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:30:57.045751 ignition[859]: GET result: OK Dec 13 14:30:57.045889 ignition[859]: config has been read from IMDS userdata Dec 13 14:30:57.045922 ignition[859]: parsing config with SHA512: ce8e6adca8f358893608b4340e771aac6bba4f81a70174beecfb2dc7232a024582ff15ed10e8154bfed642670ee1333c9cdc4311b6bc106bebdf1bb2bcbe3ad6 Dec 13 14:30:57.049320 unknown[859]: fetched base config from "system" Dec 13 14:30:57.049703 ignition[859]: fetch: fetch complete Dec 13 14:30:57.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.049329 unknown[859]: fetched base config from "system" Dec 13 14:30:57.049708 ignition[859]: fetch: fetch passed Dec 13 14:30:57.049336 unknown[859]: fetched user config from "azure" Dec 13 14:30:57.049748 ignition[859]: Ignition finished successfully Dec 13 14:30:57.051292 systemd[1]: Finished ignition-fetch.service. Dec 13 14:30:57.055716 systemd[1]: Starting ignition-kargs.service... Dec 13 14:30:57.068529 ignition[865]: Ignition 2.14.0 Dec 13 14:30:57.068536 ignition[865]: Stage: kargs Dec 13 14:30:57.068646 ignition[865]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:57.068673 ignition[865]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:57.073886 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:57.075481 ignition[865]: kargs: kargs passed Dec 13 14:30:57.077442 systemd[1]: Finished ignition-kargs.service. Dec 13 14:30:57.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.075548 ignition[865]: Ignition finished successfully Dec 13 14:30:57.081718 systemd[1]: Starting ignition-disks.service... 
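The fetch stage above pulls the Ignition config from the Azure instance metadata service (IMDS) at the link-local address 169.254.169.254 and then logs a SHA512 of the config it parsed. The sketch below is only a rough illustration of that request: Ignition itself is a Go program that also handles retries, decoding, and merging of the base and user configs, and the Metadata: true request header is assumed from Azure IMDS conventions rather than shown in this log.

    import hashlib
    import urllib.request

    # Illustrative sketch of the IMDS request the Ignition "fetch" stage logs above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})  # required by Azure IMDS
    with urllib.request.urlopen(req, timeout=5) as resp:
        body = resp.read()

    print("GET result: OK" if body else "GET result: empty")
    # Ignition logs "parsing config with SHA512: ..."; hashing the raw response here only
    # illustrates that step (the real config is decoded and merged before use).
    print("sha512:", hashlib.sha512(body).hexdigest())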
Dec 13 14:30:57.091431 ignition[871]: Ignition 2.14.0 Dec 13 14:30:57.091441 ignition[871]: Stage: disks Dec 13 14:30:57.091579 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:57.091613 ignition[871]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:57.095485 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:57.097121 ignition[871]: disks: disks passed Dec 13 14:30:57.104071 ignition[871]: Ignition finished successfully Dec 13 14:30:57.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.104903 systemd[1]: Finished ignition-disks.service. Dec 13 14:30:57.107860 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:30:57.111624 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:30:57.115435 systemd[1]: Reached target local-fs.target. Dec 13 14:30:57.119054 systemd[1]: Reached target sysinit.target. Dec 13 14:30:57.120754 systemd[1]: Reached target basic.target. Dec 13 14:30:57.122534 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:30:57.145796 systemd-fsck[879]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 14:30:57.161976 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:30:57.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.167384 systemd[1]: Mounting sysroot.mount... Dec 13 14:30:57.180973 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:30:57.181480 systemd[1]: Mounted sysroot.mount. Dec 13 14:30:57.184859 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:30:57.197240 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:30:57.202032 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 14:30:57.206153 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:30:57.206195 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:30:57.216262 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:30:57.230573 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:30:57.234340 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:30:57.246631 initrd-setup-root[894]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:30:57.253458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (889) Dec 13 14:30:57.259945 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:30:57.268667 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:57.268692 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:30:57.268702 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:30:57.272027 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:30:57.278560 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:30:57.280518 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:30:57.356246 systemd-networkd[833]: eth0: Gained IPv6LL Dec 13 14:30:57.445379 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:30:57.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.454975 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:30:57.455035 kernel: audit: type=1130 audit(1734100257.449:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.450825 systemd[1]: Starting ignition-mount.service... Dec 13 14:30:57.466426 systemd[1]: Starting sysroot-boot.service... Dec 13 14:30:57.472826 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:30:57.475491 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:30:57.491901 ignition[956]: INFO : Ignition 2.14.0 Dec 13 14:30:57.491901 ignition[956]: INFO : Stage: mount Dec 13 14:30:57.495785 ignition[956]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:57.495785 ignition[956]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:57.507813 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:57.510909 ignition[956]: INFO : mount: mount passed Dec 13 14:30:57.510909 ignition[956]: INFO : Ignition finished successfully Dec 13 14:30:57.516600 systemd[1]: Finished ignition-mount.service. Dec 13 14:30:57.534041 kernel: audit: type=1130 audit(1734100257.518:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.534082 kernel: audit: type=1130 audit(1734100257.531:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.520586 systemd[1]: Finished sysroot-boot.service. Dec 13 14:30:57.675504 coreos-metadata[888]: Dec 13 14:30:57.675 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:30:57.681750 coreos-metadata[888]: Dec 13 14:30:57.681 INFO Fetch successful Dec 13 14:30:57.715444 coreos-metadata[888]: Dec 13 14:30:57.715 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:30:57.732266 coreos-metadata[888]: Dec 13 14:30:57.732 INFO Fetch successful Dec 13 14:30:57.737401 coreos-metadata[888]: Dec 13 14:30:57.737 INFO wrote hostname ci-3510.3.6-a-01993ae768 to /sysroot/etc/hostname Dec 13 14:30:57.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:57.739442 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:30:57.761935 kernel: audit: type=1130 audit(1734100257.744:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:57.746888 systemd[1]: Starting ignition-files.service... Dec 13 14:30:57.767383 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:30:57.784982 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (967) Dec 13 14:30:57.785056 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:57.785084 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:30:57.788535 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:30:57.796389 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:30:57.810823 ignition[986]: INFO : Ignition 2.14.0 Dec 13 14:30:57.813071 ignition[986]: INFO : Stage: files Dec 13 14:30:57.813071 ignition[986]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:57.813071 ignition[986]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:57.827394 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:57.843072 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:30:57.846818 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:30:57.846818 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:30:57.863533 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:30:57.866742 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:30:57.888660 unknown[986]: wrote ssh authorized keys file for user: core Dec 13 14:30:57.891015 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:30:57.898071 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:30:57.902544 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:30:57.906526 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:30:57.910829 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:30:57.914991 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:57.920641 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:57.926297 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:30:57.930796 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config 
not found in "/usr/share/oem", looking on oem partition Dec 13 14:30:57.936532 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1269595935" Dec 13 14:30:57.947508 ignition[986]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1269595935": device or resource busy Dec 13 14:30:57.947508 ignition[986]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1269595935", trying btrfs: device or resource busy Dec 13 14:30:57.947508 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1269595935" Dec 13 14:30:57.963570 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (986) Dec 13 14:30:57.963606 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1269595935" Dec 13 14:30:57.968231 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem1269595935" Dec 13 14:30:57.968231 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem1269595935" Dec 13 14:30:57.975708 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:30:57.975708 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:30:57.984637 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:30:57.989181 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3246358245" Dec 13 14:30:57.989181 ignition[986]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3246358245": device or resource busy Dec 13 14:30:57.989181 ignition[986]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3246358245", trying btrfs: device or resource busy Dec 13 14:30:57.989181 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3246358245" Dec 13 14:30:57.989181 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3246358245" Dec 13 14:30:57.989181 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3246358245" Dec 13 14:30:57.989181 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3246358245" Dec 13 14:30:57.989181 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:30:58.027994 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:58.027994 ignition[986]: INFO : files: createFilesystemsFiles: 
createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:30:58.547581 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Dec 13 14:30:58.947103 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:58.947103 ignition[986]: INFO : files: op(f): [started] processing unit "waagent.service" Dec 13 14:30:58.947103 ignition[986]: INFO : files: op(f): [finished] processing unit "waagent.service" Dec 13 14:30:58.947103 ignition[986]: INFO : files: op(10): [started] processing unit "nvidia.service" Dec 13 14:30:58.947103 ignition[986]: INFO : files: op(10): [finished] processing unit "nvidia.service" Dec 13 14:30:58.947103 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" Dec 13 14:30:58.983127 kernel: audit: type=1130 audit(1734100258.956:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.953455 systemd[1]: Finished ignition-files.service. Dec 13 14:30:58.990045 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" Dec 13 14:30:58.990045 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" Dec 13 14:30:58.990045 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:30:58.990045 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:30:58.990045 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:30:58.990045 ignition[986]: INFO : files: files passed Dec 13 14:30:58.990045 ignition[986]: INFO : Ignition finished successfully Dec 13 14:30:59.049444 kernel: audit: type=1130 audit(1734100258.992:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.049480 kernel: audit: type=1131 audit(1734100258.992:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.049498 kernel: audit: type=1130 audit(1734100259.008:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:59.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:58.970604 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:30:58.974916 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:30:59.056394 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:30:58.978719 systemd[1]: Starting ignition-quench.service... Dec 13 14:30:58.987888 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:30:58.988032 systemd[1]: Finished ignition-quench.service. Dec 13 14:30:59.006578 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:30:59.017957 systemd[1]: Reached target ignition-complete.target. Dec 13 14:30:59.036078 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:30:59.085447 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:30:59.085568 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:30:59.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.092140 systemd[1]: Reached target initrd-fs.target. Dec 13 14:30:59.118033 kernel: audit: type=1130 audit(1734100259.091:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.118078 kernel: audit: type=1131 audit(1734100259.091:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.114699 systemd[1]: Reached target initrd.target. Dec 13 14:30:59.118065 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:30:59.119136 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:30:59.133813 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:30:59.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.135809 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:30:59.147849 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:30:59.151371 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:30:59.155406 systemd[1]: Stopped target timers.target. Dec 13 14:30:59.159144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:30:59.161441 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:30:59.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.165244 systemd[1]: Stopped target initrd.target. 
Dec 13 14:30:59.168530 systemd[1]: Stopped target basic.target. Dec 13 14:30:59.171825 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:30:59.175986 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:30:59.179787 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:30:59.183666 systemd[1]: Stopped target remote-fs.target. Dec 13 14:30:59.187370 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:30:59.191267 systemd[1]: Stopped target sysinit.target. Dec 13 14:30:59.195088 systemd[1]: Stopped target local-fs.target. Dec 13 14:30:59.198496 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:30:59.202084 systemd[1]: Stopped target swap.target. Dec 13 14:30:59.205758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:30:59.208097 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:30:59.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.211879 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:30:59.215559 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:30:59.217792 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:30:59.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.221415 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:30:59.224068 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:30:59.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.228816 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:30:59.230904 systemd[1]: Stopped ignition-files.service. Dec 13 14:30:59.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.234503 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:30:59.237045 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:30:59.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.242607 systemd[1]: Stopping ignition-mount.service... Dec 13 14:30:59.251220 iscsid[840]: iscsid shutting down. Dec 13 14:30:59.244585 systemd[1]: Stopping iscsid.service... 
Dec 13 14:30:59.270912 ignition[1024]: INFO : Ignition 2.14.0 Dec 13 14:30:59.270912 ignition[1024]: INFO : Stage: umount Dec 13 14:30:59.270912 ignition[1024]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:59.270912 ignition[1024]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:30:59.270912 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:30:59.246176 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:30:59.279963 ignition[1024]: INFO : umount: umount passed Dec 13 14:30:59.279963 ignition[1024]: INFO : Ignition finished successfully Dec 13 14:30:59.246324 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:30:59.259475 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:30:59.296468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:30:59.297569 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:30:59.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.303129 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:30:59.304146 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:30:59.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.311742 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:30:59.311888 systemd[1]: Stopped iscsid.service. Dec 13 14:30:59.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.317530 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:30:59.317646 systemd[1]: Stopped ignition-mount.service. Dec 13 14:30:59.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.323004 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:30:59.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.323121 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:30:59.326674 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:30:59.326728 systemd[1]: Stopped ignition-disks.service. Dec 13 14:30:59.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:59.331003 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:30:59.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.331060 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:30:59.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.342360 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:30:59.342418 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:30:59.346532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:30:59.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.346591 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:30:59.350086 systemd[1]: Stopped target paths.target. Dec 13 14:30:59.352545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:30:59.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.352620 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:30:59.354713 systemd[1]: Stopped target slices.target. Dec 13 14:30:59.358378 systemd[1]: Stopped target sockets.target. Dec 13 14:30:59.360235 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:30:59.360281 systemd[1]: Closed iscsid.socket. Dec 13 14:30:59.363311 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:30:59.363381 systemd[1]: Stopped ignition-setup.service. Dec 13 14:30:59.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.370551 systemd[1]: Stopping iscsiuio.service... Dec 13 14:30:59.373720 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:30:59.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.374166 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:30:59.374269 systemd[1]: Stopped iscsiuio.service. Dec 13 14:30:59.407000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:30:59.376464 systemd[1]: Stopped target network.target. Dec 13 14:30:59.378390 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:30:59.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.378423 systemd[1]: Closed iscsiuio.socket. Dec 13 14:30:59.384020 systemd[1]: Stopping systemd-networkd.service... 
Dec 13 14:30:59.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.387156 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:30:59.395010 systemd-networkd[833]: eth0: DHCPv6 lease lost Dec 13 14:30:59.429000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:30:59.396478 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:30:59.396583 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:30:59.401842 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:30:59.401945 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:30:59.407947 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:30:59.407992 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:30:59.409899 systemd[1]: Stopping network-cleanup.service... Dec 13 14:30:59.412870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:30:59.412935 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:30:59.416422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:30:59.416479 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:30:59.420337 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:30:59.420389 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:30:59.427434 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:30:59.440356 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:30:59.465486 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:30:59.467487 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:30:59.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.472025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:30:59.472096 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:30:59.476593 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:30:59.476645 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:30:59.480288 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:30:59.482416 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:30:59.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.489343 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:30:59.489407 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:30:59.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.494758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:30:59.494816 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:30:59.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.500282 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Dec 13 14:30:59.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.502176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:30:59.502233 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:30:59.532001 kernel: hv_netvsc 7c1e5234-1847-7c1e-5234-18477c1e5234 eth0: Data path switched from VF: enP27952s1 Dec 13 14:30:59.534559 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:30:59.536961 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:30:59.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.551826 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:30:59.554281 systemd[1]: Stopped network-cleanup.service. Dec 13 14:30:59.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.717149 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:30:59.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.717290 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:30:59.721523 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:30:59.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:59.725369 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:30:59.725436 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:30:59.730259 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:30:59.744227 systemd[1]: Switching root. Dec 13 14:30:59.766698 systemd-journald[183]: Journal stopped Dec 13 14:31:04.641408 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 14:31:04.641438 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:31:04.641449 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:31:04.641460 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:31:04.641468 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:31:04.641480 kernel: SELinux: policy capability open_perms=1 Dec 13 14:31:04.641492 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:31:04.641502 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:31:04.641511 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:31:04.641521 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:31:04.641529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:31:04.641540 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:31:04.641549 systemd[1]: Successfully loaded SELinux policy in 156.719ms. Dec 13 14:31:04.641561 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.064ms. Dec 13 14:31:04.641576 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:31:04.641586 systemd[1]: Detected virtualization microsoft. Dec 13 14:31:04.641597 systemd[1]: Detected architecture x86-64. Dec 13 14:31:04.641607 systemd[1]: Detected first boot. Dec 13 14:31:04.641621 systemd[1]: Hostname set to . Dec 13 14:31:04.641633 systemd[1]: Initializing machine ID from random generator. Dec 13 14:31:04.641645 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:31:04.641656 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:31:04.641668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:04.641678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:04.641692 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:04.641705 kernel: kauditd_printk_skb: 54 callbacks suppressed Dec 13 14:31:04.641717 kernel: audit: type=1334 audit(1734100264.211:91): prog-id=12 op=LOAD Dec 13 14:31:04.641729 kernel: audit: type=1334 audit(1734100264.211:92): prog-id=3 op=UNLOAD Dec 13 14:31:04.641738 kernel: audit: type=1334 audit(1734100264.215:93): prog-id=13 op=LOAD Dec 13 14:31:04.641750 kernel: audit: type=1334 audit(1734100264.220:94): prog-id=14 op=LOAD Dec 13 14:31:04.641758 kernel: audit: type=1334 audit(1734100264.220:95): prog-id=4 op=UNLOAD Dec 13 14:31:04.641769 kernel: audit: type=1334 audit(1734100264.220:96): prog-id=5 op=UNLOAD Dec 13 14:31:04.641778 kernel: audit: type=1334 audit(1734100264.225:97): prog-id=15 op=LOAD Dec 13 14:31:04.641792 kernel: audit: type=1334 audit(1734100264.225:98): prog-id=12 op=UNLOAD Dec 13 14:31:04.641800 kernel: audit: type=1334 audit(1734100264.229:99): prog-id=16 op=LOAD Dec 13 14:31:04.641812 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Dec 13 14:31:04.641821 kernel: audit: type=1334 audit(1734100264.233:100): prog-id=17 op=LOAD Dec 13 14:31:04.641833 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:31:04.641842 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:31:04.641855 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:31:04.641869 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:31:04.641881 systemd[1]: Created slice system-getty.slice. Dec 13 14:31:04.641894 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:31:04.641906 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:31:04.641917 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:31:04.641931 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:31:04.641944 systemd[1]: Created slice user.slice. Dec 13 14:31:04.641975 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:31:04.641990 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:31:04.642006 systemd[1]: Set up automount boot.automount. Dec 13 14:31:04.642026 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:31:04.642040 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:31:04.642056 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:31:04.642071 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:31:04.642086 systemd[1]: Reached target integritysetup.target. Dec 13 14:31:04.642105 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:31:04.642121 systemd[1]: Reached target remote-fs.target. Dec 13 14:31:04.642293 systemd[1]: Reached target slices.target. Dec 13 14:31:04.642314 systemd[1]: Reached target swap.target. Dec 13 14:31:04.642331 systemd[1]: Reached target torcx.target. Dec 13 14:31:04.642347 systemd[1]: Reached target veritysetup.target. Dec 13 14:31:04.642362 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:31:04.642379 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:31:04.642394 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:31:04.642415 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:31:04.642434 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:31:04.642451 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:31:04.642468 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:31:04.642482 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:31:04.642498 systemd[1]: Mounting media.mount... Dec 13 14:31:04.642514 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:04.642541 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:31:04.642557 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:31:04.642573 systemd[1]: Mounting tmp.mount... Dec 13 14:31:04.642588 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:31:04.642673 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:04.642685 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:31:04.642695 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:31:04.642704 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:04.642717 systemd[1]: Starting modprobe@drm.service... Dec 13 14:31:04.648227 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:04.648252 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:31:04.648270 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:31:04.648289 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:31:04.648305 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:31:04.648321 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:31:04.648338 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:31:04.648354 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:31:04.648370 systemd[1]: Stopped systemd-journald.service. Dec 13 14:31:04.648391 systemd[1]: Starting systemd-journald.service... Dec 13 14:31:04.648409 kernel: loop: module loaded Dec 13 14:31:04.648424 kernel: fuse: init (API version 7.34) Dec 13 14:31:04.648440 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:31:04.648455 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:31:04.648470 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:31:04.648487 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:31:04.648512 systemd-journald[1164]: Journal started Dec 13 14:31:04.648593 systemd-journald[1164]: Runtime Journal (/run/log/journal/20728e6f573d493f8681659afcdc2556) is 8.0M, max 159.0M, 151.0M free. Dec 13 14:31:00.462000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:31:00.721000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:31:00.726000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:31:00.726000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:31:00.726000 audit: BPF prog-id=10 op=LOAD Dec 13 14:31:00.726000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:31:00.726000 audit: BPF prog-id=11 op=LOAD Dec 13 14:31:00.726000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:31:01.102000 audit[1057]: AVC avc: denied { associate } for pid=1057 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:31:01.102000 audit[1057]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1040 pid=1057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:01.102000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:31:01.110000 audit[1057]: AVC avc: denied { associate } for pid=1057 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:31:01.110000 audit[1057]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 
items=2 ppid=1040 pid=1057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:01.110000 audit: CWD cwd="/" Dec 13 14:31:01.110000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:01.110000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:01.110000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:31:04.211000 audit: BPF prog-id=12 op=LOAD Dec 13 14:31:04.211000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:31:04.215000 audit: BPF prog-id=13 op=LOAD Dec 13 14:31:04.220000 audit: BPF prog-id=14 op=LOAD Dec 13 14:31:04.220000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:31:04.220000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:31:04.225000 audit: BPF prog-id=15 op=LOAD Dec 13 14:31:04.225000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:31:04.229000 audit: BPF prog-id=16 op=LOAD Dec 13 14:31:04.233000 audit: BPF prog-id=17 op=LOAD Dec 13 14:31:04.233000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:31:04.233000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:31:04.238000 audit: BPF prog-id=18 op=LOAD Dec 13 14:31:04.238000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:31:04.247000 audit: BPF prog-id=19 op=LOAD Dec 13 14:31:04.247000 audit: BPF prog-id=20 op=LOAD Dec 13 14:31:04.247000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:31:04.247000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:31:04.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.269000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:31:04.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:04.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.589000 audit: BPF prog-id=21 op=LOAD Dec 13 14:31:04.590000 audit: BPF prog-id=22 op=LOAD Dec 13 14:31:04.590000 audit: BPF prog-id=23 op=LOAD Dec 13 14:31:04.590000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:31:04.590000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:31:04.638000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:31:04.638000 audit[1164]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffda3ea8060 a2=4000 a3=7ffda3ea80fc items=0 ppid=1 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:04.638000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:31:04.210539 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:31:01.094411 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:04.248764 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:31:01.094710 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:31:01.094729 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:31:01.094764 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:31:01.094774 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:31:01.094811 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:31:01.094823 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:31:01.095056 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:31:01.095093 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:31:01.095104 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:31:01.099705 /usr/lib/systemd/system-generators/torcx-generator[1057]: 
time="2024-12-13T14:31:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:31:01.099744 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:31:01.099766 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:31:01.099779 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:31:01.099796 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:31:01.099809 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:31:03.735157 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:31:03.735401 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:31:03.735528 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:31:03.735695 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:31:03.735741 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:31:03.735797 /usr/lib/systemd/system-generators/torcx-generator[1057]: time="2024-12-13T14:31:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:31:04.657370 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:31:04.657410 systemd[1]: Stopped verity-setup.service. 
Dec 13 14:31:04.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.666973 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:04.672995 systemd[1]: Started systemd-journald.service. Dec 13 14:31:04.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.673998 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:31:04.676046 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:31:04.678497 systemd[1]: Mounted media.mount. Dec 13 14:31:04.680385 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:31:04.682399 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:31:04.684490 systemd[1]: Mounted tmp.mount. Dec 13 14:31:04.686537 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:31:04.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.688992 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:31:04.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.691458 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:31:04.691657 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:31:04.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.694289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:04.694475 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:04.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.696906 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:31:04.697082 systemd[1]: Finished modprobe@drm.service. Dec 13 14:31:04.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:31:04.699507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:04.699695 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:04.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.702271 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:31:04.702459 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:31:04.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.704638 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:04.704843 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:04.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.707220 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:31:04.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.709922 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:31:04.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.712568 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:31:04.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.715464 systemd[1]: Reached target network-pre.target. Dec 13 14:31:04.719178 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:31:04.722742 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:31:04.725296 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:31:04.740559 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:31:04.743940 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:31:04.746338 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
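The "skipped because of an unmet condition check" entries here (ConditionVirtualization=xen, ConditionDirectoryNotEmpty=/sys/fs/pstore, ConditionPathIsReadWrite=!/) are ordinary systemd unit conditions, not failures. A hedged sketch of how such guards look in a unit file (the unit name is hypothetical; the Condition* directives are the ones quoted in the log):

    # /etc/systemd/system/example-xen-only.service  (hypothetical unit)
    [Unit]
    Description=Runs only on Xen guests with a non-empty pstore
    ConditionVirtualization=xen
    ConditionDirectoryNotEmpty=/sys/fs/pstore

    [Service]
    ExecStart=/usr/bin/true

When a condition evaluates false, systemd records the unit as skipped rather than failed, which is what these lines show.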
Dec 13 14:31:04.747733 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:31:04.750274 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:04.751776 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:31:04.757288 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:31:04.764815 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:31:04.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.770617 systemd-journald[1164]: Time spent on flushing to /var/log/journal/20728e6f573d493f8681659afcdc2556 is 37.528ms for 1131 entries. Dec 13 14:31:04.770617 systemd-journald[1164]: System Journal (/var/log/journal/20728e6f573d493f8681659afcdc2556) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:31:04.835067 systemd-journald[1164]: Received client request to flush runtime journal. Dec 13 14:31:04.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.769016 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:31:04.776522 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:31:04.837612 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:31:04.780129 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:31:04.782723 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:31:04.784876 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:31:04.805795 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:31:04.836277 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:31:04.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.933405 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:31:05.384528 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:31:05.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.386000 audit: BPF prog-id=24 op=LOAD Dec 13 14:31:05.386000 audit: BPF prog-id=25 op=LOAD Dec 13 14:31:05.386000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:31:05.386000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:31:05.388728 systemd[1]: Starting systemd-udevd.service... Dec 13 14:31:05.407727 systemd-udevd[1185]: Using default interface naming scheme 'v252'. Dec 13 14:31:05.482346 systemd[1]: Started systemd-udevd.service. 
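The systemd-journald flush messages above mark the hand-off from the volatile runtime journal to persistent storage under /var/log/journal/<machine-id>; the journal reports 8.0M in use against a 2.6G cap. That cap can be tightened in journald.conf if needed; a minimal sketch (the 512M figure is an arbitrary illustration, not this host's setting):

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent
    SystemMaxUse=512M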
Dec 13 14:31:05.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.487000 audit: BPF prog-id=26 op=LOAD Dec 13 14:31:05.488990 systemd[1]: Starting systemd-networkd.service... Dec 13 14:31:05.518000 audit: BPF prog-id=27 op=LOAD Dec 13 14:31:05.518000 audit: BPF prog-id=28 op=LOAD Dec 13 14:31:05.518000 audit: BPF prog-id=29 op=LOAD Dec 13 14:31:05.520474 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:31:05.566540 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:31:05.574696 systemd[1]: Started systemd-userdbd.service. Dec 13 14:31:05.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.631992 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:31:05.662000 audit[1192]: AVC avc: denied { confidentiality } for pid=1192 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:31:05.680262 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:31:05.680393 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:31:05.687147 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:31:05.687244 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:31:05.699856 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:31:05.703005 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:31:05.712814 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:31:05.712933 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:31:05.727995 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:31:05.662000 audit[1192]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55bb9cd3f140 a1=f884 a2=7f4f947d2bc5 a3=5 items=12 ppid=1185 pid=1192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:05.662000 audit: CWD cwd="/" Dec 13 14:31:05.662000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=1 name=(null) inode=14684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=2 name=(null) inode=14684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=3 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=4 name=(null) inode=14684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH 
item=5 name=(null) inode=14686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=6 name=(null) inode=14684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=7 name=(null) inode=14687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=8 name=(null) inode=14684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=9 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=10 name=(null) inode=14684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PATH item=11 name=(null) inode=14689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:05.662000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:31:05.764787 systemd-networkd[1196]: lo: Link UP Dec 13 14:31:05.765221 systemd-networkd[1196]: lo: Gained carrier Dec 13 14:31:05.766013 systemd-networkd[1196]: Enumeration completed Dec 13 14:31:05.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.766290 systemd[1]: Started systemd-networkd.service. Dec 13 14:31:05.770633 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:31:05.782099 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:31:05.814066 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1189) Dec 13 14:31:05.837151 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:31:05.837325 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:31:05.837352 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:31:06.660596 kernel: mlx5_core 6d30:00:02.0 enP27952s1: Link up Dec 13 14:31:06.683601 kernel: hv_netvsc 7c1e5234-1847-7c1e-5234-18477c1e5234 eth0: Data path switched to VF: enP27952s1 Dec 13 14:31:06.684930 systemd-networkd[1196]: enP27952s1: Link UP Dec 13 14:31:06.685078 systemd-networkd[1196]: eth0: Link UP Dec 13 14:31:06.685084 systemd-networkd[1196]: eth0: Gained carrier Dec 13 14:31:06.688485 systemd-networkd[1196]: enP27952s1: Gained carrier Dec 13 14:31:06.718859 systemd-networkd[1196]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:31:06.765319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:31:06.827596 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 14:31:06.854031 systemd[1]: Finished systemd-udev-settle.service. 
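In the networkd entries above, eth0 is matched by Flatcar's stock zz-default.network and acquires 10.200.8.29/24 over DHCP from the Azure wireserver (168.63.129.16), with the data path handed to the mlx5 VF. A minimal .network file producing the same DHCP behaviour would look roughly like this (a sketch, not the literal contents of zz-default.network):

    # /etc/systemd/network/10-dhcp.network  (illustrative)
    [Match]
    Name=eth0

    [Network]
    DHCP=yes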
Dec 13 14:31:06.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:06.858236 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:31:06.961781 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:31:06.986841 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:31:06.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.000887 systemd[1]: Reached target cryptsetup.target. Dec 13 14:31:07.004745 systemd[1]: Starting lvm2-activation.service... Dec 13 14:31:07.009505 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:31:07.041000 systemd[1]: Finished lvm2-activation.service. Dec 13 14:31:07.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.043668 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:31:07.045780 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:31:07.045813 systemd[1]: Reached target local-fs.target. Dec 13 14:31:07.047733 systemd[1]: Reached target machines.target. Dec 13 14:31:07.051062 systemd[1]: Starting ldconfig.service... Dec 13 14:31:07.053022 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:07.053120 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:07.054481 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:31:07.058167 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:31:07.062069 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:31:07.065539 systemd[1]: Starting systemd-sysext.service... Dec 13 14:31:07.081623 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1266 (bootctl) Dec 13 14:31:07.083411 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:31:07.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.488940 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:31:07.494230 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:31:07.549848 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:31:07.550037 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:31:07.612024 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:31:07.633595 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:31:07.649620 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:31:07.654355 (sd-sysext)[1278]: Using extensions 'kubernetes'. Dec 13 14:31:07.654855 (sd-sysext)[1278]: Merged extensions into '/usr'. 
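The (sd-sysext) lines show a 'kubernetes' system extension image (a squashfs, hence the loop devices above) being overlaid onto /usr. For such an image to merge it must carry an extension-release file whose fields match the host OS; a hedged sketch of the expected layout (names and values illustrate the mechanism and are not read from this image):

    kubernetes.raw (squashfs) contents:
        usr/lib/extension-release.d/extension-release.kubernetes
            ID=flatcar
            SYSEXT_LEVEL=1.0
        usr/bin/kubelet, usr/bin/kubeadm, ...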
Dec 13 14:31:07.672842 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:07.674752 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:31:07.677016 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:07.681384 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:07.684624 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:07.688295 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:07.690919 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:07.691106 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:07.691301 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:07.692476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:07.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:07.692613 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:07.694822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:07.695333 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:07.696526 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:07.696664 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:07.697290 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:07.697401 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:07.701965 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:31:07.704371 systemd[1]: Finished systemd-sysext.service. Dec 13 14:31:07.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:07.707784 systemd[1]: Starting ensure-sysext.service... Dec 13 14:31:07.710495 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:31:07.729711 systemd[1]: Reloading. Dec 13 14:31:07.763029 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:31:07.794846 systemd-fsck[1274]: fsck.fat 4.2 (2021-01-31) Dec 13 14:31:07.794846 systemd-fsck[1274]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:31:07.798874 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:31:07.823707 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:31:07.824332 /usr/lib/systemd/system-generators/torcx-generator[1305]: time="2024-12-13T14:31:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:07.824364 /usr/lib/systemd/system-generators/torcx-generator[1305]: time="2024-12-13T14:31:07Z" level=info msg="torcx already run" Dec 13 14:31:07.935317 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:07.935678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:07.962550 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:08.025830 systemd-networkd[1196]: eth0: Gained IPv6LL Dec 13 14:31:08.035834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:31:08.037000 audit: BPF prog-id=30 op=LOAD Dec 13 14:31:08.037000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:31:08.039000 audit: BPF prog-id=31 op=LOAD Dec 13 14:31:08.039000 audit: BPF prog-id=32 op=LOAD Dec 13 14:31:08.039000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:31:08.039000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:31:08.040000 audit: BPF prog-id=33 op=LOAD Dec 13 14:31:08.040000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:31:08.040000 audit: BPF prog-id=34 op=LOAD Dec 13 14:31:08.040000 audit: BPF prog-id=35 op=LOAD Dec 13 14:31:08.040000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:31:08.041000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:31:08.041000 audit: BPF prog-id=36 op=LOAD Dec 13 14:31:08.041000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:31:08.041000 audit: BPF prog-id=37 op=LOAD Dec 13 14:31:08.041000 audit: BPF prog-id=38 op=LOAD Dec 13 14:31:08.041000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:31:08.041000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:31:08.048642 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:31:08.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:31:08.051924 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:31:08.054935 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:31:08.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.068532 systemd[1]: Mounting boot.mount... Dec 13 14:31:08.076586 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:08.077322 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.078966 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:08.083120 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:08.086547 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:08.090157 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.090341 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:08.090487 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:08.093412 systemd[1]: Mounted boot.mount. Dec 13 14:31:08.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.097095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:08.097262 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:08.100004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:08.100150 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:08.103366 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:08.103523 systemd[1]: Finished modprobe@loop.service. 
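The warnings logged during the reload a few entries back concern legacy directives that still work but are slated for removal: locksmithd.service uses CPUShares= and MemoryLimit=, and docker.socket listens on a path under /var/run. The modernized equivalents, per the log's own suggestions, would be (numeric values are placeholders, since the log does not show the originals):

    [Service]
    CPUWeight=100        ; instead of CPUShares=
    MemoryMax=512M       ; instead of MemoryLimit=

    [Socket]
    ListenStream=/run/docker.sock   ; /var/run is a legacy alias for /run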
Dec 13 14:31:08.106454 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:08.106642 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.109464 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.111936 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:08.116683 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:08.120848 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:08.123169 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.123368 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:08.124734 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:31:08.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.128137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:08.128321 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:08.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.131196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:08.131370 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:08.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.134523 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:08.134718 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:08.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.137537 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:08.137710 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.142101 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 14:31:08.144861 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:31:08.149622 systemd[1]: Starting modprobe@drm.service... Dec 13 14:31:08.153450 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:31:08.157879 systemd[1]: Starting modprobe@loop.service... Dec 13 14:31:08.160667 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.160867 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:08.162247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:31:08.162458 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:31:08.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.165693 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:31:08.165882 systemd[1]: Finished modprobe@drm.service. Dec 13 14:31:08.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.168956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:31:08.169133 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:31:08.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.172534 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:31:08.172714 systemd[1]: Finished modprobe@loop.service. Dec 13 14:31:08.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.177330 systemd[1]: Finished ensure-sysext.service. Dec 13 14:31:08.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:08.181322 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:31:08.181385 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:31:08.228731 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:31:08.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.232998 systemd[1]: Starting audit-rules.service... Dec 13 14:31:08.236318 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:31:08.240620 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:31:08.244000 audit: BPF prog-id=39 op=LOAD Dec 13 14:31:08.251000 audit: BPF prog-id=40 op=LOAD Dec 13 14:31:08.248100 systemd[1]: Starting systemd-resolved.service... Dec 13 14:31:08.254715 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:31:08.260176 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:31:08.276243 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:31:08.278867 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:31:08.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.287000 audit[1388]: SYSTEM_BOOT pid=1388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.292997 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:31:08.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.358068 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:31:08.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.382040 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:31:08.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:08.384806 systemd[1]: Reached target time-set.target. Dec 13 14:31:08.390558 systemd-resolved[1386]: Positive Trust Anchors: Dec 13 14:31:08.391077 systemd-resolved[1386]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:31:08.391201 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:31:08.403000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:31:08.403000 audit[1403]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9c22a120 a2=420 a3=0 items=0 ppid=1382 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:08.403000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:31:08.405520 augenrules[1403]: No rules Dec 13 14:31:08.406465 systemd[1]: Finished audit-rules.service. Dec 13 14:31:08.425930 systemd-resolved[1386]: Using system hostname 'ci-3510.3.6-a-01993ae768'. Dec 13 14:31:08.427829 systemd[1]: Started systemd-resolved.service. Dec 13 14:31:08.430527 systemd[1]: Reached target network.target. Dec 13 14:31:08.432630 systemd[1]: Reached target network-online.target. Dec 13 14:31:08.435220 systemd[1]: Reached target nss-lookup.target. Dec 13 14:31:08.474035 systemd-timesyncd[1387]: Contacted time server 193.1.12.167:123 (0.flatcar.pool.ntp.org). Dec 13 14:31:08.474124 systemd-timesyncd[1387]: Initial clock synchronization to Fri 2024-12-13 14:31:08.473908 UTC. Dec 13 14:31:08.548901 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:08.548944 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:31:09.394725 ldconfig[1265]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:31:09.408057 systemd[1]: Finished ldconfig.service. Dec 13 14:31:09.411763 systemd[1]: Starting systemd-update-done.service... Dec 13 14:31:09.420401 systemd[1]: Finished systemd-update-done.service. Dec 13 14:31:09.422886 systemd[1]: Reached target sysinit.target. Dec 13 14:31:09.425144 systemd[1]: Started motdgen.path. Dec 13 14:31:09.427064 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:31:09.430461 systemd[1]: Started logrotate.timer. Dec 13 14:31:09.432302 systemd[1]: Started mdadm.timer. Dec 13 14:31:09.433902 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:31:09.435987 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:31:09.436023 systemd[1]: Reached target paths.target. Dec 13 14:31:09.437780 systemd[1]: Reached target timers.target. Dec 13 14:31:09.439912 systemd[1]: Listening on dbus.socket. Dec 13 14:31:09.442853 systemd[1]: Starting docker.socket... Dec 13 14:31:09.447706 systemd[1]: Listening on sshd.socket. 
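The positive trust anchor systemd-resolved prints above appears to be its built-in copy of the DNS root zone KSK DS record (key tag 20326), with the usual private-range and local-scope domains listed as negative anchors. If an anchor ever had to be supplied or overridden, resolved reads drop-in files of this form (the path and file naming follow dnssec-trust-anchors.d; the override itself is purely illustrative, reusing the record from the log):

    # /etc/dnssec-trust-anchors.d/root.positive  (illustrative)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d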
Dec 13 14:31:09.449687 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:09.450165 systemd[1]: Listening on docker.socket. Dec 13 14:31:09.451912 systemd[1]: Reached target sockets.target. Dec 13 14:31:09.453654 systemd[1]: Reached target basic.target. Dec 13 14:31:09.455355 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:31:09.455388 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:31:09.456480 systemd[1]: Starting containerd.service... Dec 13 14:31:09.459788 systemd[1]: Starting dbus.service... Dec 13 14:31:09.462824 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:31:09.466178 systemd[1]: Starting extend-filesystems.service... Dec 13 14:31:09.467988 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:31:09.469542 systemd[1]: Starting kubelet.service... Dec 13 14:31:09.474525 systemd[1]: Starting motdgen.service... Dec 13 14:31:09.481969 systemd[1]: Started nvidia.service. Dec 13 14:31:09.486437 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:31:09.524611 jq[1413]: false Dec 13 14:31:09.491321 systemd[1]: Starting sshd-keygen.service... Dec 13 14:31:09.498986 systemd[1]: Starting systemd-logind.service... Dec 13 14:31:09.500840 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:31:09.500928 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:31:09.501359 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:31:09.503274 systemd[1]: Starting update-engine.service... Dec 13 14:31:09.507314 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:31:09.536037 jq[1431]: true Dec 13 14:31:09.512487 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:31:09.512731 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:31:09.528926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:31:09.529177 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:31:09.562271 extend-filesystems[1414]: Found loop1 Dec 13 14:31:09.566241 jq[1437]: true Dec 13 14:31:09.575079 extend-filesystems[1414]: Found sda Dec 13 14:31:09.577001 extend-filesystems[1414]: Found sda1 Dec 13 14:31:09.577001 extend-filesystems[1414]: Found sda2 Dec 13 14:31:09.577001 extend-filesystems[1414]: Found sda3 Dec 13 14:31:09.599746 extend-filesystems[1414]: Found usr Dec 13 14:31:09.599746 extend-filesystems[1414]: Found sda4 Dec 13 14:31:09.599746 extend-filesystems[1414]: Found sda6 Dec 13 14:31:09.599746 extend-filesystems[1414]: Found sda7 Dec 13 14:31:09.599746 extend-filesystems[1414]: Found sda9 Dec 13 14:31:09.599746 extend-filesystems[1414]: Checking size of /dev/sda9 Dec 13 14:31:09.587872 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 13 14:31:09.660560 extend-filesystems[1414]: Old size kept for /dev/sda9 Dec 13 14:31:09.660560 extend-filesystems[1414]: Found sr0 Dec 13 14:31:09.613502 dbus-daemon[1412]: [system] SELinux support is enabled Dec 13 14:31:09.588072 systemd[1]: Finished motdgen.service. Dec 13 14:31:09.613737 systemd[1]: Started dbus.service. Dec 13 14:31:09.627923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:31:09.627975 systemd[1]: Reached target system-config.target. Dec 13 14:31:09.629949 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:31:09.629973 systemd[1]: Reached target user-config.target. Dec 13 14:31:09.650304 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:31:09.650522 systemd[1]: Finished extend-filesystems.service. Dec 13 14:31:09.720304 env[1435]: time="2024-12-13T14:31:09.720236049Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:31:09.753314 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:31:09.756688 bash[1465]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:31:09.757483 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:31:09.774415 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:31:09.779345 systemd-logind[1425]: New seat seat0. Dec 13 14:31:09.789230 systemd[1]: Started systemd-logind.service. Dec 13 14:31:09.850018 env[1435]: time="2024-12-13T14:31:09.849907376Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:31:09.850186 env[1435]: time="2024-12-13T14:31:09.850160276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:09.852866 env[1435]: time="2024-12-13T14:31:09.852808877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:31:09.852866 env[1435]: time="2024-12-13T14:31:09.852861477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:09.853205 env[1435]: time="2024-12-13T14:31:09.853174277Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:31:09.853276 env[1435]: time="2024-12-13T14:31:09.853205577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:09.853276 env[1435]: time="2024-12-13T14:31:09.853224877Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:31:09.853276 env[1435]: time="2024-12-13T14:31:09.853239677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:31:09.853382 env[1435]: time="2024-12-13T14:31:09.853344677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:09.853951 env[1435]: time="2024-12-13T14:31:09.853921077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:31:09.855322 env[1435]: time="2024-12-13T14:31:09.855284378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:31:09.855322 env[1435]: time="2024-12-13T14:31:09.855321178Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:31:09.855452 env[1435]: time="2024-12-13T14:31:09.855399878Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:31:09.855452 env[1435]: time="2024-12-13T14:31:09.855417378Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:31:09.869847 update_engine[1426]: I1213 14:31:09.869253 1426 main.cc:92] Flatcar Update Engine starting Dec 13 14:31:09.873371 env[1435]: time="2024-12-13T14:31:09.873293081Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:31:09.873489 env[1435]: time="2024-12-13T14:31:09.873382681Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:31:09.873489 env[1435]: time="2024-12-13T14:31:09.873416081Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:31:09.873489 env[1435]: time="2024-12-13T14:31:09.873466981Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873642 env[1435]: time="2024-12-13T14:31:09.873542681Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873642 env[1435]: time="2024-12-13T14:31:09.873595381Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873642 env[1435]: time="2024-12-13T14:31:09.873614381Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873748 env[1435]: time="2024-12-13T14:31:09.873635181Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873748 env[1435]: time="2024-12-13T14:31:09.873670081Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873748 env[1435]: time="2024-12-13T14:31:09.873693181Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873748 env[1435]: time="2024-12-13T14:31:09.873712781Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.873880 env[1435]: time="2024-12-13T14:31:09.873746881Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 14:31:09.873964 env[1435]: time="2024-12-13T14:31:09.873935881Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:31:09.874117 env[1435]: time="2024-12-13T14:31:09.874093281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:31:09.874623 env[1435]: time="2024-12-13T14:31:09.874597782Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:31:09.874841 env[1435]: time="2024-12-13T14:31:09.874645282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.874841 env[1435]: time="2024-12-13T14:31:09.874691882Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:31:09.874841 env[1435]: time="2024-12-13T14:31:09.874770982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.874841 env[1435]: time="2024-12-13T14:31:09.874790582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.874809582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.874912482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.874931982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.874950782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.874967982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875010882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875031082Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875205682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875238482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875256682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875273482Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875294382Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875320982Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:31:09.875512 env[1435]: time="2024-12-13T14:31:09.875347782Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:31:09.876104 env[1435]: time="2024-12-13T14:31:09.875407782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:31:09.876154 env[1435]: time="2024-12-13T14:31:09.875738482Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:31:09.876154 env[1435]: time="2024-12-13T14:31:09.875834982Z" level=info msg="Connect containerd service" Dec 13 14:31:09.876154 env[1435]: time="2024-12-13T14:31:09.875896982Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.877205582Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.877515382Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.877564782Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.878240682Z" level=info msg="Start subscribing containerd event" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.884806884Z" level=info msg="Start recovering state" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.884961484Z" level=info msg="Start event monitor" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.885849184Z" level=info msg="Start snapshots syncer" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.885868984Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.885896584Z" level=info msg="Start streaming server" Dec 13 14:31:09.895932 env[1435]: time="2024-12-13T14:31:09.887030684Z" level=info msg="containerd successfully booted in 0.167813s" Dec 13 14:31:09.896294 update_engine[1426]: I1213 14:31:09.894685 1426 update_check_scheduler.cc:74] Next update check in 3m41s Dec 13 14:31:09.877734 systemd[1]: Started containerd.service. Dec 13 14:31:09.886932 systemd[1]: Started update-engine.service. Dec 13 14:31:09.891936 systemd[1]: Started locksmithd.service. Dec 13 14:31:10.651275 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:31:10.672363 systemd[1]: Started kubelet.service. Dec 13 14:31:11.508855 kubelet[1523]: E1213 14:31:11.508765 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:11.511327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:11.511485 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:11.511811 systemd[1]: kubelet.service: Consumed 1.190s CPU time. Dec 13 14:31:11.616155 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:31:11.636802 systemd[1]: Finished sshd-keygen.service. Dec 13 14:31:11.641192 systemd[1]: Starting issuegen.service... Dec 13 14:31:11.644809 systemd[1]: Started waagent.service. Dec 13 14:31:11.651925 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:31:11.652124 systemd[1]: Finished issuegen.service. Dec 13 14:31:11.656069 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:31:11.668995 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:31:11.673229 systemd[1]: Started getty@tty1.service. Dec 13 14:31:11.677203 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:31:11.682721 systemd[1]: Reached target getty.target. Dec 13 14:31:11.684815 systemd[1]: Reached target multi-user.target. Dec 13 14:31:11.688748 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:31:11.701971 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:31:11.702119 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:31:11.704782 systemd[1]: Startup finished in 554ms (firmware) + 8.017s (loader) + 922ms (kernel) + 7.508s (initrd) + 10.645s (userspace) = 27.647s. 
Dec 13 14:31:11.828215 login[1543]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Dec 13 14:31:11.828631 login[1542]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:31:11.843321 systemd[1]: Created slice user-500.slice. Dec 13 14:31:11.845540 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:31:11.848961 systemd-logind[1425]: New session 1 of user core. Dec 13 14:31:11.863540 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:31:11.865748 systemd[1]: Starting user@500.service... Dec 13 14:31:11.874084 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:11.985834 systemd[1546]: Queued start job for default target default.target. Dec 13 14:31:11.986544 systemd[1546]: Reached target paths.target. Dec 13 14:31:11.986593 systemd[1546]: Reached target sockets.target. Dec 13 14:31:11.986612 systemd[1546]: Reached target timers.target. Dec 13 14:31:11.986626 systemd[1546]: Reached target basic.target. Dec 13 14:31:11.986761 systemd[1]: Started user@500.service. Dec 13 14:31:11.988123 systemd[1]: Started session-1.scope. Dec 13 14:31:11.988727 systemd[1546]: Reached target default.target. Dec 13 14:31:11.988924 systemd[1546]: Startup finished in 107ms. Dec 13 14:31:12.830764 login[1543]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:31:12.836579 systemd[1]: Started session-2.scope. Dec 13 14:31:12.837419 systemd-logind[1425]: New session 2 of user core. Dec 13 14:31:13.699852 waagent[1537]: 2024-12-13T14:31:13.699719Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:31:13.713257 waagent[1537]: 2024-12-13T14:31:13.713146Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:31:13.715711 waagent[1537]: 2024-12-13T14:31:13.715632Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:31:13.718218 waagent[1537]: 2024-12-13T14:31:13.718128Z INFO Daemon Daemon Run daemon Dec 13 14:31:13.721056 waagent[1537]: 2024-12-13T14:31:13.720500Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:31:13.734148 waagent[1537]: 2024-12-13T14:31:13.734012Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Dec 13 14:31:13.742085 waagent[1537]: 2024-12-13T14:31:13.741958Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:31:13.747919 waagent[1537]: 2024-12-13T14:31:13.747823Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:31:13.750499 waagent[1537]: 2024-12-13T14:31:13.750414Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:31:13.753817 waagent[1537]: 2024-12-13T14:31:13.753740Z INFO Daemon Daemon Activate resource disk Dec 13 14:31:13.756319 waagent[1537]: 2024-12-13T14:31:13.756245Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:31:13.766383 waagent[1537]: 2024-12-13T14:31:13.766292Z INFO Daemon Daemon Found device: None Dec 13 14:31:13.768888 waagent[1537]: 2024-12-13T14:31:13.768802Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:31:13.772746 waagent[1537]: 2024-12-13T14:31:13.772671Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 14:31:13.778935 waagent[1537]: 2024-12-13T14:31:13.778853Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:31:13.781844 waagent[1537]: 2024-12-13T14:31:13.781770Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:31:13.793227 waagent[1537]: 2024-12-13T14:31:13.793066Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:31:13.801315 waagent[1537]: 2024-12-13T14:31:13.801177Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:31:13.806149 waagent[1537]: 2024-12-13T14:31:13.806057Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:31:13.808761 waagent[1537]: 2024-12-13T14:31:13.808677Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:31:13.856839 waagent[1537]: 2024-12-13T14:31:13.856561Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:31:13.905351 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:31:13.914152 waagent[1537]: 2024-12-13T14:31:13.914013Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:31:13.927896 waagent[1537]: 2024-12-13T14:31:13.914557Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:31:13.927896 waagent[1537]: 2024-12-13T14:31:13.915528Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:31:13.927896 waagent[1537]: 2024-12-13T14:31:13.916554Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:31:13.927896 waagent[1537]: 2024-12-13T14:31:13.917559Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:31:13.927896 waagent[1537]: 2024-12-13T14:31:13.918134Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:31:13.953302 waagent[1537]: 2024-12-13T14:31:13.953148Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:31:13.958588 waagent[1537]: 2024-12-13T14:31:13.958525Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:31:13.961410 waagent[1537]: 2024-12-13T14:31:13.961345Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:31:14.163823 waagent[1537]: 2024-12-13T14:31:14.163664Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:31:14.173869 waagent[1537]: 2024-12-13T14:31:14.173773Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:31:14.178801 waagent[1537]: 2024-12-13T14:31:14.174391Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:31:14.249124 waagent[1537]: 2024-12-13T14:31:14.248919Z INFO Daemon Daemon Found private key matching thumbprint 6751EEF20F39E378A11E9ADDB77DCA64E9923F49 Dec 13 14:31:14.253162 waagent[1537]: 2024-12-13T14:31:14.253071Z INFO Daemon Daemon Certificate with thumbprint CA48B00B2A4E7960B461E85D85072651849C7E6C has no matching private key. Dec 13 14:31:14.257521 waagent[1537]: 2024-12-13T14:31:14.257432Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:31:14.284419 waagent[1537]: 2024-12-13T14:31:14.284324Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 69afb553-1b4d-4d25-9ef7-6d169fdcd8ac New eTag: 17307598566342724560] Dec 13 14:31:14.289878 waagent[1537]: 2024-12-13T14:31:14.289768Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:31:14.303438 waagent[1537]: 2024-12-13T14:31:14.303351Z INFO Daemon Daemon Starting provisioning Dec 13 14:31:14.306485 waagent[1537]: 2024-12-13T14:31:14.306389Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:31:14.308964 waagent[1537]: 2024-12-13T14:31:14.308885Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-01993ae768] Dec 13 14:31:14.319874 waagent[1537]: 2024-12-13T14:31:14.319738Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-01993ae768] Dec 13 14:31:14.323438 waagent[1537]: 2024-12-13T14:31:14.323329Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:31:14.327352 waagent[1537]: 2024-12-13T14:31:14.327254Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:31:14.343042 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:31:14.343299 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:31:14.343374 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:31:14.343754 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:31:14.347633 systemd-networkd[1196]: eth0: DHCPv6 lease lost Dec 13 14:31:14.349371 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:31:14.349617 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:31:14.352393 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:31:14.386411 systemd-networkd[1589]: enP27952s1: Link UP Dec 13 14:31:14.386422 systemd-networkd[1589]: enP27952s1: Gained carrier Dec 13 14:31:14.387828 systemd-networkd[1589]: eth0: Link UP Dec 13 14:31:14.387837 systemd-networkd[1589]: eth0: Gained carrier Dec 13 14:31:14.388286 systemd-networkd[1589]: lo: Link UP Dec 13 14:31:14.388295 systemd-networkd[1589]: lo: Gained carrier Dec 13 14:31:14.388714 systemd-networkd[1589]: eth0: Gained IPv6LL Dec 13 14:31:14.389044 systemd-networkd[1589]: Enumeration completed Dec 13 14:31:14.389192 systemd[1]: Started systemd-networkd.service. Dec 13 14:31:14.391551 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:31:14.398643 waagent[1537]: 2024-12-13T14:31:14.393519Z INFO Daemon Daemon Create user account if not exists Dec 13 14:31:14.398643 waagent[1537]: 2024-12-13T14:31:14.398041Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:31:14.401661 waagent[1537]: 2024-12-13T14:31:14.401389Z INFO Daemon Daemon Configure sudoer Dec 13 14:31:14.402405 systemd-networkd[1589]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:31:14.404341 waagent[1537]: 2024-12-13T14:31:14.404261Z INFO Daemon Daemon Configure sshd Dec 13 14:31:14.406720 waagent[1537]: 2024-12-13T14:31:14.406649Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:31:14.442724 systemd-networkd[1589]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:31:14.446515 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:31:15.511408 waagent[1537]: 2024-12-13T14:31:15.511310Z INFO Daemon Daemon Provisioning complete Dec 13 14:31:15.531684 waagent[1537]: 2024-12-13T14:31:15.531561Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:31:15.535386 waagent[1537]: 2024-12-13T14:31:15.535294Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:31:15.541730 waagent[1537]: 2024-12-13T14:31:15.541648Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:31:15.819264 waagent[1598]: 2024-12-13T14:31:15.819056Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:31:15.820057 waagent[1598]: 2024-12-13T14:31:15.819982Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:31:15.820211 waagent[1598]: 2024-12-13T14:31:15.820154Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:31:15.832200 waagent[1598]: 2024-12-13T14:31:15.832106Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 14:31:15.832395 waagent[1598]: 2024-12-13T14:31:15.832336Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:31:15.898080 waagent[1598]: 2024-12-13T14:31:15.897936Z INFO ExtHandler ExtHandler Found private key matching thumbprint 6751EEF20F39E378A11E9ADDB77DCA64E9923F49 Dec 13 14:31:15.898333 waagent[1598]: 2024-12-13T14:31:15.898263Z INFO ExtHandler ExtHandler Certificate with thumbprint CA48B00B2A4E7960B461E85D85072651849C7E6C has no matching private key. 
Dec 13 14:31:15.898631 waagent[1598]: 2024-12-13T14:31:15.898533Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:31:15.914975 waagent[1598]: 2024-12-13T14:31:15.914907Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 1bf27677-68f2-45bf-9c0a-cf862d6e5a57 New eTag: 17307598566342724560] Dec 13 14:31:15.915560 waagent[1598]: 2024-12-13T14:31:15.915498Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:31:15.951550 waagent[1598]: 2024-12-13T14:31:15.951401Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:31:15.960814 waagent[1598]: 2024-12-13T14:31:15.960699Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1598 Dec 13 14:31:15.964465 waagent[1598]: 2024-12-13T14:31:15.964375Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:31:15.965768 waagent[1598]: 2024-12-13T14:31:15.965698Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:31:15.997494 waagent[1598]: 2024-12-13T14:31:15.997423Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:31:15.997958 waagent[1598]: 2024-12-13T14:31:15.997889Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:31:16.006820 waagent[1598]: 2024-12-13T14:31:16.006756Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:31:16.007385 waagent[1598]: 2024-12-13T14:31:16.007319Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:31:16.008557 waagent[1598]: 2024-12-13T14:31:16.008488Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:31:16.009970 waagent[1598]: 2024-12-13T14:31:16.009909Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:31:16.010425 waagent[1598]: 2024-12-13T14:31:16.010367Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:31:16.010592 waagent[1598]: 2024-12-13T14:31:16.010535Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:31:16.011187 waagent[1598]: 2024-12-13T14:31:16.011131Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:31:16.011645 waagent[1598]: 2024-12-13T14:31:16.011566Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Dec 13 14:31:16.012281 waagent[1598]: 2024-12-13T14:31:16.012226Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:31:16.012438 waagent[1598]: 2024-12-13T14:31:16.012390Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:31:16.012890 waagent[1598]: 2024-12-13T14:31:16.012836Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:31:16.013479 waagent[1598]: 2024-12-13T14:31:16.013414Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:31:16.013479 waagent[1598]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:31:16.013479 waagent[1598]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:31:16.013479 waagent[1598]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:31:16.013479 waagent[1598]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:31:16.013479 waagent[1598]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:31:16.013479 waagent[1598]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:31:16.015840 waagent[1598]: 2024-12-13T14:31:16.015616Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:31:16.016223 waagent[1598]: 2024-12-13T14:31:16.016139Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:31:16.016747 waagent[1598]: 2024-12-13T14:31:16.016678Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:31:16.017947 waagent[1598]: 2024-12-13T14:31:16.017875Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:31:16.018414 waagent[1598]: 2024-12-13T14:31:16.018358Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:31:16.018646 waagent[1598]: 2024-12-13T14:31:16.018565Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:31:16.019142 waagent[1598]: 2024-12-13T14:31:16.019089Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:31:16.042400 waagent[1598]: 2024-12-13T14:31:16.042320Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:31:16.043553 waagent[1598]: 2024-12-13T14:31:16.043489Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:31:16.046428 waagent[1598]: 2024-12-13T14:31:16.046364Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1589' Dec 13 14:31:16.046743 waagent[1598]: 2024-12-13T14:31:16.046684Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:31:16.065196 waagent[1598]: 2024-12-13T14:31:16.065070Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:31:16.065196 waagent[1598]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:31:16.065196 waagent[1598]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:31:16.065196 waagent[1598]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:18:47 brd ff:ff:ff:ff:ff:ff Dec 13 14:31:16.065196 waagent[1598]: 3: enP27952s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:18:47 brd ff:ff:ff:ff:ff:ff\ altname enP27952p0s2 Dec 13 14:31:16.065196 waagent[1598]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:31:16.065196 waagent[1598]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:31:16.065196 waagent[1598]: 2: eth0 inet 10.200.8.29/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:31:16.065196 waagent[1598]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:31:16.065196 waagent[1598]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:31:16.065196 waagent[1598]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:1847/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:31:16.095117 waagent[1598]: 2024-12-13T14:31:16.094970Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Dec 13 14:31:16.228625 waagent[1598]: 2024-12-13T14:31:16.228487Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Dec 13 14:31:16.236277 waagent[1598]: 2024-12-13T14:31:16.236143Z INFO EnvHandler ExtHandler Firewall rules: Dec 13 14:31:16.236277 waagent[1598]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:31:16.236277 waagent[1598]: pkts bytes target prot opt in out source destination Dec 13 14:31:16.236277 waagent[1598]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:31:16.236277 waagent[1598]: pkts bytes target prot opt in out source destination Dec 13 14:31:16.236277 waagent[1598]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:31:16.236277 waagent[1598]: pkts bytes target prot opt in out source destination Dec 13 14:31:16.236277 waagent[1598]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:31:16.236277 waagent[1598]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:31:16.239979 waagent[1598]: 2024-12-13T14:31:16.239914Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:31:16.408195 waagent[1598]: 2024-12-13T14:31:16.408111Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:31:16.545445 waagent[1537]: 2024-12-13T14:31:16.545251Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:31:16.551245 waagent[1537]: 2024-12-13T14:31:16.551170Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:31:17.622766 waagent[1635]: 2024-12-13T14:31:17.622643Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:31:17.623520 waagent[1635]: 2024-12-13T14:31:17.623447Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 
13 14:31:17.623691 waagent[1635]: 2024-12-13T14:31:17.623634Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:31:17.623840 waagent[1635]: 2024-12-13T14:31:17.623793Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 14:31:17.633924 waagent[1635]: 2024-12-13T14:31:17.633805Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:31:17.634346 waagent[1635]: 2024-12-13T14:31:17.634283Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:31:17.634522 waagent[1635]: 2024-12-13T14:31:17.634472Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:31:17.647282 waagent[1635]: 2024-12-13T14:31:17.647185Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:31:17.657049 waagent[1635]: 2024-12-13T14:31:17.656981Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:31:17.658104 waagent[1635]: 2024-12-13T14:31:17.658037Z INFO ExtHandler Dec 13 14:31:17.658259 waagent[1635]: 2024-12-13T14:31:17.658206Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b011c5a9-7105-4965-9897-a3c4ff61c56a eTag: 17307598566342724560 source: Fabric] Dec 13 14:31:17.659006 waagent[1635]: 2024-12-13T14:31:17.658947Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:31:17.660136 waagent[1635]: 2024-12-13T14:31:17.660073Z INFO ExtHandler Dec 13 14:31:17.660269 waagent[1635]: 2024-12-13T14:31:17.660220Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:31:17.667715 waagent[1635]: 2024-12-13T14:31:17.667665Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:31:17.668197 waagent[1635]: 2024-12-13T14:31:17.668149Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:31:17.690243 waagent[1635]: 2024-12-13T14:31:17.690152Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Dec 13 14:31:17.760230 waagent[1635]: 2024-12-13T14:31:17.760088Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CA48B00B2A4E7960B461E85D85072651849C7E6C', 'hasPrivateKey': False} Dec 13 14:31:17.761273 waagent[1635]: 2024-12-13T14:31:17.761194Z INFO ExtHandler Downloaded certificate {'thumbprint': '6751EEF20F39E378A11E9ADDB77DCA64E9923F49', 'hasPrivateKey': True} Dec 13 14:31:17.762362 waagent[1635]: 2024-12-13T14:31:17.762292Z INFO ExtHandler Fetch goal state completed Dec 13 14:31:17.784887 waagent[1635]: 2024-12-13T14:31:17.784775Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:31:17.796768 waagent[1635]: 2024-12-13T14:31:17.796675Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1635 Dec 13 14:31:17.799862 waagent[1635]: 2024-12-13T14:31:17.799795Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:31:17.800850 waagent[1635]: 2024-12-13T14:31:17.800790Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:31:17.801146 waagent[1635]: 2024-12-13T14:31:17.801088Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:31:17.803115 waagent[1635]: 2024-12-13T14:31:17.803056Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:31:17.808007 waagent[1635]: 2024-12-13T14:31:17.807952Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:31:17.808411 waagent[1635]: 2024-12-13T14:31:17.808353Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:31:17.816818 waagent[1635]: 2024-12-13T14:31:17.816764Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:31:17.817299 waagent[1635]: 2024-12-13T14:31:17.817239Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:31:17.829969 waagent[1635]: 2024-12-13T14:31:17.829836Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Dec 13 14:31:17.833137 waagent[1635]: 2024-12-13T14:31:17.833023Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Dec 13 14:31:17.834240 waagent[1635]: 2024-12-13T14:31:17.834166Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:31:17.835810 waagent[1635]: 2024-12-13T14:31:17.835745Z INFO ExtHandler ExtHandler Starting env monitor service. 
Dec 13 14:31:17.836241 waagent[1635]: 2024-12-13T14:31:17.836186Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:31:17.836398 waagent[1635]: 2024-12-13T14:31:17.836350Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:31:17.836991 waagent[1635]: 2024-12-13T14:31:17.836930Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:31:17.837416 waagent[1635]: 2024-12-13T14:31:17.837359Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:31:17.838267 waagent[1635]: 2024-12-13T14:31:17.838214Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:31:17.838411 waagent[1635]: 2024-12-13T14:31:17.838351Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:31:17.838647 waagent[1635]: 2024-12-13T14:31:17.838595Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:31:17.838863 waagent[1635]: 2024-12-13T14:31:17.838817Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:31:17.839013 waagent[1635]: 2024-12-13T14:31:17.838949Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:31:17.839013 waagent[1635]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:31:17.839013 waagent[1635]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:31:17.839013 waagent[1635]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:31:17.839013 waagent[1635]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:31:17.839013 waagent[1635]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:31:17.839013 waagent[1635]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:31:17.839470 waagent[1635]: 2024-12-13T14:31:17.839395Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:31:17.842823 waagent[1635]: 2024-12-13T14:31:17.842717Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:31:17.843228 waagent[1635]: 2024-12-13T14:31:17.843169Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:31:17.845165 waagent[1635]: 2024-12-13T14:31:17.845114Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:31:17.847219 waagent[1635]: 2024-12-13T14:31:17.847104Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:31:17.848905 waagent[1635]: 2024-12-13T14:31:17.848849Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 14:31:17.862692 waagent[1635]: 2024-12-13T14:31:17.862597Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:31:17.862692 waagent[1635]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:31:17.862692 waagent[1635]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:31:17.862692 waagent[1635]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:18:47 brd ff:ff:ff:ff:ff:ff Dec 13 14:31:17.862692 waagent[1635]: 3: enP27952s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:18:47 brd ff:ff:ff:ff:ff:ff\ altname enP27952p0s2 Dec 13 14:31:17.862692 waagent[1635]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:31:17.862692 waagent[1635]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:31:17.862692 waagent[1635]: 2: eth0 inet 10.200.8.29/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:31:17.862692 waagent[1635]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:31:17.862692 waagent[1635]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:31:17.862692 waagent[1635]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:1847/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:31:17.877343 waagent[1635]: 2024-12-13T14:31:17.877200Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:31:17.925127 waagent[1635]: 2024-12-13T14:31:17.925052Z INFO ExtHandler ExtHandler Dec 13 14:31:17.927233 waagent[1635]: 2024-12-13T14:31:17.927165Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e67671ac-ae9d-4cbc-a104-d8becb003274 correlation 67e53d82-471d-4e30-8cf9-e7011eb12207 created: 2024-12-13T14:30:33.618469Z] Dec 13 14:31:17.935033 waagent[1635]: 2024-12-13T14:31:17.934969Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:31:17.939703 waagent[1635]: 2024-12-13T14:31:17.939611Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 14 ms] Dec 13 14:31:17.941139 waagent[1635]: 2024-12-13T14:31:17.941072Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:31:17.941139 waagent[1635]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:31:17.941139 waagent[1635]: pkts bytes target prot opt in out source destination Dec 13 14:31:17.941139 waagent[1635]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:31:17.941139 waagent[1635]: pkts bytes target prot opt in out source destination Dec 13 14:31:17.941139 waagent[1635]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:31:17.941139 waagent[1635]: pkts bytes target prot opt in out source destination Dec 13 14:31:17.941139 waagent[1635]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:31:17.941139 waagent[1635]: 165 18029 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:31:17.941139 waagent[1635]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:31:17.963214 waagent[1635]: 2024-12-13T14:31:17.963131Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 14:31:17.978091 waagent[1635]: 2024-12-13T14:31:17.977989Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F5F800BA-CB2C-412B-9423-199B0AC05A74;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:31:21.656180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:31:21.656497 systemd[1]: Stopped kubelet.service. Dec 13 14:31:21.656565 systemd[1]: kubelet.service: Consumed 1.190s CPU time. Dec 13 14:31:21.658775 systemd[1]: Starting kubelet.service... Dec 13 14:31:21.744782 systemd[1]: Started kubelet.service. Dec 13 14:31:22.298588 kubelet[1684]: E1213 14:31:22.298505 1684 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:22.302180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:22.302341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:32.406232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:31:32.406594 systemd[1]: Stopped kubelet.service. Dec 13 14:31:32.408810 systemd[1]: Starting kubelet.service... Dec 13 14:31:32.492219 systemd[1]: Started kubelet.service. Dec 13 14:31:33.038797 kubelet[1694]: E1213 14:31:33.038727 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:33.040654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:33.040842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:40.574557 systemd[1]: Created slice system-sshd.slice. Dec 13 14:31:40.576593 systemd[1]: Started sshd@0-10.200.8.29:22-10.200.16.10:50660.service. Dec 13 14:31:41.337536 sshd[1701]: Accepted publickey for core from 10.200.16.10 port 50660 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:41.339918 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:41.345654 systemd-logind[1425]: New session 3 of user core. Dec 13 14:31:41.346070 systemd[1]: Started session-3.scope. Dec 13 14:31:41.958305 systemd[1]: Started sshd@1-10.200.8.29:22-10.200.16.10:50676.service. Dec 13 14:31:42.665707 sshd[1706]: Accepted publickey for core from 10.200.16.10 port 50676 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:42.667466 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:42.672908 systemd-logind[1425]: New session 4 of user core. Dec 13 14:31:42.673707 systemd[1]: Started session-4.scope. Dec 13 14:31:43.055720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:31:43.055976 systemd[1]: Stopped kubelet.service. Dec 13 14:31:43.060855 systemd[1]: Starting kubelet.service... Dec 13 14:31:43.155059 systemd[1]: Started kubelet.service. 
Dec 13 14:31:43.171629 sshd[1706]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:43.176982 systemd[1]: sshd@1-10.200.8.29:22-10.200.16.10:50676.service: Deactivated successfully. Dec 13 14:31:43.178026 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:31:43.180118 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:31:43.181184 systemd-logind[1425]: Removed session 4. Dec 13 14:31:43.291115 systemd[1]: Started sshd@2-10.200.8.29:22-10.200.16.10:50686.service. Dec 13 14:31:43.685120 kubelet[1714]: E1213 14:31:43.685050 1714 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:43.687259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:43.687425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:44.005494 sshd[1722]: Accepted publickey for core from 10.200.16.10 port 50686 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:44.006882 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:44.012451 systemd[1]: Started session-5.scope. Dec 13 14:31:44.013164 systemd-logind[1425]: New session 5 of user core. Dec 13 14:31:44.502760 sshd[1722]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:44.506805 systemd[1]: sshd@2-10.200.8.29:22-10.200.16.10:50686.service: Deactivated successfully. Dec 13 14:31:44.507995 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:31:44.508913 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:31:44.509763 systemd-logind[1425]: Removed session 5. Dec 13 14:31:44.621034 systemd[1]: Started sshd@3-10.200.8.29:22-10.200.16.10:50688.service. Dec 13 14:31:45.327767 sshd[1728]: Accepted publickey for core from 10.200.16.10 port 50688 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:45.330134 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:45.335206 systemd[1]: Started session-6.scope. Dec 13 14:31:45.335857 systemd-logind[1425]: New session 6 of user core. Dec 13 14:31:45.829891 sshd[1728]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:45.832995 systemd[1]: sshd@3-10.200.8.29:22-10.200.16.10:50688.service: Deactivated successfully. Dec 13 14:31:45.833907 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:31:45.834544 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:31:45.835341 systemd-logind[1425]: Removed session 6. Dec 13 14:31:45.948355 systemd[1]: Started sshd@4-10.200.8.29:22-10.200.16.10:50696.service. Dec 13 14:31:46.658751 sshd[1734]: Accepted publickey for core from 10.200.16.10 port 50696 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:31:46.660284 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:46.665483 systemd[1]: Started session-7.scope. Dec 13 14:31:46.666140 systemd-logind[1425]: New session 7 of user core. 
Dec 13 14:31:47.163953 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:31:47.164310 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:31:47.185053 systemd[1]: Starting coreos-metadata.service... Dec 13 14:31:47.243431 coreos-metadata[1741]: Dec 13 14:31:47.243 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:31:47.246947 coreos-metadata[1741]: Dec 13 14:31:47.246 INFO Fetch successful Dec 13 14:31:47.247199 coreos-metadata[1741]: Dec 13 14:31:47.247 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 14:31:47.248931 coreos-metadata[1741]: Dec 13 14:31:47.248 INFO Fetch successful Dec 13 14:31:47.249424 coreos-metadata[1741]: Dec 13 14:31:47.249 INFO Fetching http://168.63.129.16/machine/e1e8efee-ce4c-4f30-a994-fa070e9f42c3/85905500%2Da963%2D4912%2Dbe3a%2D1003a3922bbf.%5Fci%2D3510.3.6%2Da%2D01993ae768?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 14:31:47.251063 coreos-metadata[1741]: Dec 13 14:31:47.251 INFO Fetch successful Dec 13 14:31:47.292495 coreos-metadata[1741]: Dec 13 14:31:47.292 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:31:47.305875 coreos-metadata[1741]: Dec 13 14:31:47.305 INFO Fetch successful Dec 13 14:31:47.315911 systemd[1]: Finished coreos-metadata.service. Dec 13 14:31:50.494126 systemd[1]: Stopped kubelet.service. Dec 13 14:31:50.497474 systemd[1]: Starting kubelet.service... Dec 13 14:31:50.535703 systemd[1]: Reloading. Dec 13 14:31:50.648255 /usr/lib/systemd/system-generators/torcx-generator[1802]: time="2024-12-13T14:31:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:50.648298 /usr/lib/systemd/system-generators/torcx-generator[1802]: time="2024-12-13T14:31:50Z" level=info msg="torcx already run" Dec 13 14:31:50.776972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:50.776993 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:50.794354 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:50.901132 systemd[1]: Started kubelet.service. Dec 13 14:31:50.903727 systemd[1]: Stopping kubelet.service... Dec 13 14:31:50.904023 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:31:50.904186 systemd[1]: Stopped kubelet.service. Dec 13 14:31:50.906181 systemd[1]: Starting kubelet.service... Dec 13 14:31:54.670177 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 14:31:55.585459 systemd[1]: Started kubelet.service. Dec 13 14:31:55.637868 update_engine[1426]: I1213 14:31:55.637046 1426 update_attempter.cc:509] Updating boot flags... Dec 13 14:31:55.639322 kubelet[1869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:55.639322 kubelet[1869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:31:55.639322 kubelet[1869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:55.639772 kubelet[1869]: I1213 14:31:55.639392 1869 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:31:55.926875 kubelet[1869]: I1213 14:31:55.926814 1869 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:31:55.927196 kubelet[1869]: I1213 14:31:55.927172 1869 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:31:55.927872 kubelet[1869]: I1213 14:31:55.927840 1869 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:31:55.945087 kubelet[1869]: I1213 14:31:55.945054 1869 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:31:55.984656 kubelet[1869]: I1213 14:31:55.984615 1869 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:31:55.984927 kubelet[1869]: I1213 14:31:55.984910 1869 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:31:55.985276 kubelet[1869]: I1213 14:31:55.985249 1869 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:31:55.985441 kubelet[1869]: I1213 14:31:55.985290 1869 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:31:55.985441 kubelet[1869]: I1213 14:31:55.985307 1869 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:31:55.985441 kubelet[1869]: I1213 14:31:55.985438 1869 state_mem.go:36] "Initialized new 
in-memory state store" Dec 13 14:31:55.985594 kubelet[1869]: I1213 14:31:55.985546 1869 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:31:55.985648 kubelet[1869]: I1213 14:31:55.985597 1869 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:31:55.985648 kubelet[1869]: I1213 14:31:55.985641 1869 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:31:55.985726 kubelet[1869]: I1213 14:31:55.985661 1869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:31:55.986904 kubelet[1869]: E1213 14:31:55.986884 1869 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:55.987064 kubelet[1869]: E1213 14:31:55.987047 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:55.989549 kubelet[1869]: I1213 14:31:55.989532 1869 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:31:55.993251 kubelet[1869]: I1213 14:31:55.993232 1869 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:31:55.993412 kubelet[1869]: W1213 14:31:55.993401 1869 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:31:55.998270 kubelet[1869]: I1213 14:31:55.998252 1869 server.go:1256] "Started kubelet" Dec 13 14:31:56.000275 kubelet[1869]: I1213 14:31:55.999895 1869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:31:56.000275 kubelet[1869]: I1213 14:31:56.000252 1869 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:31:56.011716 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:31:56.011984 kubelet[1869]: I1213 14:31:56.011957 1869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:31:56.013216 kubelet[1869]: I1213 14:31:56.013191 1869 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:31:56.014663 kubelet[1869]: I1213 14:31:56.014645 1869 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:31:56.018157 kubelet[1869]: E1213 14:31:56.018141 1869 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.29\" not found" Dec 13 14:31:56.018315 kubelet[1869]: I1213 14:31:56.018303 1869 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:31:56.018490 kubelet[1869]: I1213 14:31:56.018467 1869 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:31:56.018561 kubelet[1869]: I1213 14:31:56.018527 1869 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:31:56.018829 kubelet[1869]: E1213 14:31:56.018811 1869 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:31:56.021386 kubelet[1869]: I1213 14:31:56.021369 1869 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:31:56.021641 kubelet[1869]: I1213 14:31:56.021608 1869 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:31:56.024277 kubelet[1869]: E1213 14:31:56.024253 1869 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.29\" not found" node="10.200.8.29" Dec 13 14:31:56.024866 kubelet[1869]: I1213 14:31:56.024847 1869 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:31:56.037978 kubelet[1869]: I1213 14:31:56.037944 1869 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:31:56.037978 kubelet[1869]: I1213 14:31:56.037967 1869 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:31:56.038142 kubelet[1869]: I1213 14:31:56.037987 1869 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:56.119910 kubelet[1869]: I1213 14:31:56.119873 1869 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.29" Dec 13 14:31:56.124658 kubelet[1869]: I1213 14:31:56.124624 1869 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.29" Dec 13 14:31:56.931266 kubelet[1869]: I1213 14:31:56.931198 1869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:31:56.931917 kubelet[1869]: W1213 14:31:56.931478 1869 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:31:56.931917 kubelet[1869]: W1213 14:31:56.931521 1869 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:31:56.988221 kubelet[1869]: I1213 14:31:56.988155 1869 apiserver.go:52] "Watching apiserver" Dec 13 14:31:56.988542 kubelet[1869]: E1213 14:31:56.988187 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:57.108249 sudo[1737]: pam_unix(sudo:session): session closed for user root Dec 13 14:31:57.644117 kubelet[1869]: I1213 14:31:57.644079 1869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:31:57.648020 kubelet[1869]: I1213 14:31:57.647990 1869 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:31:57.648353 kubelet[1869]: I1213 14:31:57.648337 1869 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:31:57.648501 kubelet[1869]: I1213 14:31:57.648490 1869 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:31:57.648678 kubelet[1869]: E1213 14:31:57.648657 1869 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:31:57.753255 kubelet[1869]: E1213 14:31:57.752587 1869 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:31:57.933170 systemd[1]: sshd@4-10.200.8.29:22-10.200.16.10:50696.service: Deactivated successfully. Dec 13 14:31:57.930047 sshd[1734]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:57.934020 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:31:57.935549 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:31:57.936371 systemd-logind[1425]: Removed session 7. Dec 13 14:31:57.953536 kubelet[1869]: E1213 14:31:57.953491 1869 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:31:57.975541 kubelet[1869]: I1213 14:31:57.975418 1869 policy_none.go:49] "None policy: Start" Dec 13 14:31:57.976548 kubelet[1869]: I1213 14:31:57.976449 1869 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:31:57.976548 kubelet[1869]: I1213 14:31:57.976491 1869 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:31:57.985609 systemd[1]: Created slice kubepods.slice. Dec 13 14:31:57.988811 kubelet[1869]: E1213 14:31:57.988781 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:57.993777 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:31:57.997148 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:31:58.004150 kubelet[1869]: I1213 14:31:58.002379 1869 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:31:58.004150 kubelet[1869]: I1213 14:31:58.002966 1869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:31:58.109787 kubelet[1869]: I1213 14:31:58.109745 1869 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:31:58.110806 env[1435]: time="2024-12-13T14:31:58.110753899Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:31:58.111345 kubelet[1869]: I1213 14:31:58.111085 1869 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:31:58.354084 kubelet[1869]: I1213 14:31:58.353910 1869 topology_manager.go:215] "Topology Admit Handler" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" podNamespace="kube-system" podName="cilium-6d5lw" Dec 13 14:31:58.355476 kubelet[1869]: I1213 14:31:58.355435 1869 topology_manager.go:215] "Topology Admit Handler" podUID="b5dbf31d-9036-4667-929a-73255f67aaba" podNamespace="kube-system" podName="kube-proxy-rpd2n" Dec 13 14:31:58.370408 systemd[1]: Created slice kubepods-burstable-pod9868b9d0_1a78_437e_a8a1_59156a49a5f1.slice. Dec 13 14:31:58.379238 systemd[1]: Created slice kubepods-besteffort-podb5dbf31d_9036_4667_929a_73255f67aaba.slice. 
Dec 13 14:31:58.419296 kubelet[1869]: I1213 14:31:58.419249 1869 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:31:58.433226 kubelet[1869]: I1213 14:31:58.433189 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-bpf-maps\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433226 kubelet[1869]: I1213 14:31:58.433240 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9868b9d0-1a78-437e-a8a1-59156a49a5f1-clustermesh-secrets\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433437 kubelet[1869]: I1213 14:31:58.433271 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5dbf31d-9036-4667-929a-73255f67aaba-lib-modules\") pod \"kube-proxy-rpd2n\" (UID: \"b5dbf31d-9036-4667-929a-73255f67aaba\") " pod="kube-system/kube-proxy-rpd2n" Dec 13 14:31:58.433437 kubelet[1869]: I1213 14:31:58.433299 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48c5l\" (UniqueName: \"kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-kube-api-access-48c5l\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433437 kubelet[1869]: I1213 14:31:58.433326 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5dbf31d-9036-4667-929a-73255f67aaba-kube-proxy\") pod \"kube-proxy-rpd2n\" (UID: \"b5dbf31d-9036-4667-929a-73255f67aaba\") " pod="kube-system/kube-proxy-rpd2n" Dec 13 14:31:58.433437 kubelet[1869]: I1213 14:31:58.433351 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-cgroup\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433437 kubelet[1869]: I1213 14:31:58.433375 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-etc-cni-netd\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433437 kubelet[1869]: I1213 14:31:58.433406 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-lib-modules\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433701 kubelet[1869]: I1213 14:31:58.433439 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hubble-tls\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433701 
kubelet[1869]: I1213 14:31:58.433470 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-net\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433701 kubelet[1869]: I1213 14:31:58.433503 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-kernel\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433701 kubelet[1869]: I1213 14:31:58.433532 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5dbf31d-9036-4667-929a-73255f67aaba-xtables-lock\") pod \"kube-proxy-rpd2n\" (UID: \"b5dbf31d-9036-4667-929a-73255f67aaba\") " pod="kube-system/kube-proxy-rpd2n" Dec 13 14:31:58.433701 kubelet[1869]: I1213 14:31:58.433600 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-run\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433701 kubelet[1869]: I1213 14:31:58.433639 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hostproc\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433930 kubelet[1869]: I1213 14:31:58.433677 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cni-path\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433930 kubelet[1869]: I1213 14:31:58.433709 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-config-path\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433930 kubelet[1869]: I1213 14:31:58.433737 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-xtables-lock\") pod \"cilium-6d5lw\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " pod="kube-system/cilium-6d5lw" Dec 13 14:31:58.433930 kubelet[1869]: I1213 14:31:58.433778 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwpb\" (UniqueName: \"kubernetes.io/projected/b5dbf31d-9036-4667-929a-73255f67aaba-kube-api-access-vwwpb\") pod \"kube-proxy-rpd2n\" (UID: \"b5dbf31d-9036-4667-929a-73255f67aaba\") " pod="kube-system/kube-proxy-rpd2n" Dec 13 14:31:58.679551 env[1435]: time="2024-12-13T14:31:58.679479504Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-6d5lw,Uid:9868b9d0-1a78-437e-a8a1-59156a49a5f1,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:58.686853 env[1435]: time="2024-12-13T14:31:58.686806404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpd2n,Uid:b5dbf31d-9036-4667-929a-73255f67aaba,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:58.989513 kubelet[1869]: E1213 14:31:58.989369 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:59.895302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047741457.mount: Deactivated successfully. Dec 13 14:31:59.990106 kubelet[1869]: E1213 14:31:59.990060 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:00.092967 env[1435]: time="2024-12-13T14:32:00.092905315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.138877 env[1435]: time="2024-12-13T14:32:00.138803616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.185032 env[1435]: time="2024-12-13T14:32:00.184963316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.232399 env[1435]: time="2024-12-13T14:32:00.232331317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.279873 env[1435]: time="2024-12-13T14:32:00.279805017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.389440 env[1435]: time="2024-12-13T14:32:00.389368818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.439293 env[1435]: time="2024-12-13T14:32:00.439075618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.535423 env[1435]: time="2024-12-13T14:32:00.535353919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:00.990547 kubelet[1869]: E1213 14:32:00.990491 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:01.991261 kubelet[1869]: E1213 14:32:01.991217 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:02.758220 env[1435]: time="2024-12-13T14:32:02.758118835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:02.758220 env[1435]: time="2024-12-13T14:32:02.758169035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:02.758220 env[1435]: time="2024-12-13T14:32:02.758188335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:02.759031 env[1435]: time="2024-12-13T14:32:02.758945635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8 pid=2006 runtime=io.containerd.runc.v2 Dec 13 14:32:02.784147 systemd[1]: Started cri-containerd-30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8.scope. Dec 13 14:32:02.815893 env[1435]: time="2024-12-13T14:32:02.815844235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6d5lw,Uid:9868b9d0-1a78-437e-a8a1-59156a49a5f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\"" Dec 13 14:32:02.818213 env[1435]: time="2024-12-13T14:32:02.818179535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:32:02.845563 env[1435]: time="2024-12-13T14:32:02.845496236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:02.845816 env[1435]: time="2024-12-13T14:32:02.845533836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:02.845816 env[1435]: time="2024-12-13T14:32:02.845547236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:02.845986 env[1435]: time="2024-12-13T14:32:02.845843336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe391936e8118815015aa9eaa3cf0ae5e36508520df085d7646b9ed32ce502ab pid=2047 runtime=io.containerd.runc.v2 Dec 13 14:32:02.860512 systemd[1]: Started cri-containerd-fe391936e8118815015aa9eaa3cf0ae5e36508520df085d7646b9ed32ce502ab.scope. 
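The reconciler_common.go entries above enumerate every volume kubelet attaches for cilium-6d5lw and kube-proxy-rpd2n (bpf-maps, clustermesh-secrets, hubble-tls, the kube-proxy configmap, and so on). A minimal sketch for pulling that pod-to-volume mapping out of journal text like this, assuming one journal entry per line and the escaped-quote format shown above; the pattern and function names are illustrative, not part of kubelet:

import re
from collections import defaultdict

# Illustrative pattern for the reconciler_common.go lines above; volume names sit
# inside kubelet's quoted message, so the inner quotes appear escaped (\") in the
# journal text, while the structured pod="..." field at the end does not.
ATTACH = re.compile(
    r'VerifyControllerAttachedVolume started for volume \\"(?P<volume>[^"\\]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"'
)

def volumes_by_pod(journal_text):
    """Group attached volume names by pod, assuming one journal entry per line."""
    grouped = defaultdict(list)
    for match in ATTACH.finditer(journal_text):
        grouped[match.group("pod")].append(match.group("volume"))
    return dict(grouped)

# volumes_by_pod(text)["kube-system/cilium-6d5lw"] would then include
# "bpf-maps", "clustermesh-secrets", "hubble-tls", "cilium-config-path", ...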
Dec 13 14:32:02.890307 env[1435]: time="2024-12-13T14:32:02.890264536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpd2n,Uid:b5dbf31d-9036-4667-929a-73255f67aaba,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe391936e8118815015aa9eaa3cf0ae5e36508520df085d7646b9ed32ce502ab\"" Dec 13 14:32:02.992054 kubelet[1869]: E1213 14:32:02.991993 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:03.992431 kubelet[1869]: E1213 14:32:03.992371 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:04.992957 kubelet[1869]: E1213 14:32:04.992905 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:05.993222 kubelet[1869]: E1213 14:32:05.993173 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:06.993956 kubelet[1869]: E1213 14:32:06.993904 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:07.994382 kubelet[1869]: E1213 14:32:07.994318 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:08.994717 kubelet[1869]: E1213 14:32:08.994676 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:09.802584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061824775.mount: Deactivated successfully. Dec 13 14:32:09.995611 kubelet[1869]: E1213 14:32:09.995507 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:10.995986 kubelet[1869]: E1213 14:32:10.995937 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:11.996883 kubelet[1869]: E1213 14:32:11.996821 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:12.557016 env[1435]: time="2024-12-13T14:32:12.556947084Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:12.633770 env[1435]: time="2024-12-13T14:32:12.633697784Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:12.639371 env[1435]: time="2024-12-13T14:32:12.639311084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:12.640366 env[1435]: time="2024-12-13T14:32:12.640323484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:32:12.642333 env[1435]: time="2024-12-13T14:32:12.642292784Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:32:12.643748 env[1435]: time="2024-12-13T14:32:12.643712884Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:32:12.700052 env[1435]: time="2024-12-13T14:32:12.699770412Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\"" Dec 13 14:32:12.701026 env[1435]: time="2024-12-13T14:32:12.700983919Z" level=info msg="StartContainer for \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\"" Dec 13 14:32:12.736177 systemd[1]: Started cri-containerd-de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c.scope. Dec 13 14:32:12.787224 env[1435]: time="2024-12-13T14:32:12.787162522Z" level=info msg="StartContainer for \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\" returns successfully" Dec 13 14:32:12.797661 systemd[1]: cri-containerd-de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c.scope: Deactivated successfully. Dec 13 14:32:15.934104 kubelet[1869]: E1213 14:32:12.997676 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:15.934104 kubelet[1869]: E1213 14:32:13.998542 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:15.934104 kubelet[1869]: E1213 14:32:14.999599 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:13.670120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c-rootfs.mount: Deactivated successfully. 
Dec 13 14:32:15.986334 kubelet[1869]: E1213 14:32:15.986284 1869 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:16.000674 kubelet[1869]: E1213 14:32:16.000620 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:17.000862 kubelet[1869]: E1213 14:32:17.000800 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:18.001112 kubelet[1869]: E1213 14:32:18.001058 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:19.002128 kubelet[1869]: E1213 14:32:19.001962 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:20.291845 kubelet[1869]: E1213 14:32:20.003197 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:21.004461 kubelet[1869]: E1213 14:32:21.004361 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:22.005521 kubelet[1869]: E1213 14:32:22.005453 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:22.804171 env[1435]: time="2024-12-13T14:32:22.804112013Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c,ID:de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c,Pid:2103,ExitStatus:0,ExitedAt:2024-12-13 14:32:12.800253898 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Dec 13 14:32:23.006123 kubelet[1869]: E1213 14:32:23.006066 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:23.886669 env[1435]: time="2024-12-13T14:32:23.886585835Z" level=info msg="TaskExit event &TaskExit{ContainerID:de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c,ID:de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c,Pid:2103,ExitStatus:0,ExitedAt:2024-12-13 14:32:12.800253898 +0000 UTC,XXX_unrecognized:[],}" Dec 13 14:32:24.007171 kubelet[1869]: E1213 14:32:24.007110 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:25.008283 kubelet[1869]: E1213 14:32:25.008220 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:25.887384 env[1435]: time="2024-12-13T14:32:25.887320629Z" level=error msg="get state for de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c" error="context deadline exceeded: unknown" Dec 13 14:32:25.887384 env[1435]: time="2024-12-13T14:32:25.887368931Z" level=warning msg="unknown status" status=0 Dec 13 14:32:26.009360 kubelet[1869]: E1213 14:32:26.009300 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:27.010314 kubelet[1869]: E1213 14:32:27.010271 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:28.011349 kubelet[1869]: E1213 
14:32:28.011314 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:28.714368 env[1435]: time="2024-12-13T14:32:28.714310951Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:32:28.932117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427093293.mount: Deactivated successfully. Dec 13 14:32:28.955788 env[1435]: time="2024-12-13T14:32:28.955718541Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\"" Dec 13 14:32:28.956788 env[1435]: time="2024-12-13T14:32:28.956737976Z" level=info msg="StartContainer for \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\"" Dec 13 14:32:28.989992 systemd[1]: Started cri-containerd-df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a.scope. Dec 13 14:32:29.011469 kubelet[1869]: E1213 14:32:29.011405 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:29.031433 env[1435]: time="2024-12-13T14:32:29.031382443Z" level=info msg="StartContainer for \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\" returns successfully" Dec 13 14:32:29.043948 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:32:29.044235 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:32:29.044436 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:32:29.048439 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:32:29.056326 systemd[1]: cri-containerd-df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a.scope: Deactivated successfully. Dec 13 14:32:29.060396 systemd[1]: Finished systemd-sysctl.service. 
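In the TaskExit sequence above, containerd records the mount-cgroup container exit at 14:32:12.800 but only reports the handling failure at 14:32:22.804 with "context deadline exceeded", i.e. it gives up roughly 10 s after the exit; the event is then logged again at 14:32:23.886 and a later state query times out the same way. A small sketch re-deriving those gaps from the logged timestamps (values copied from the log, truncated to microseconds; the 10 s figure is an observation here, not a documented constant):

from datetime import datetime, timezone

def ts(value):
    # Timestamps copied from the log above, truncated to microsecond precision.
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

exited_at = ts("2024-12-13 14:32:12.800253")  # ExitedAt in the TaskExit event
error_at  = ts("2024-12-13 14:32:22.804112")  # "failed to handle container TaskExit event"
replayed  = ts("2024-12-13 14:32:23.886585")  # TaskExit event logged again

print(f"gave up after {(error_at - exited_at).total_seconds():.3f}s")        # ~10.004s
print(f"event replayed {(replayed - error_at).total_seconds():.3f}s later")  # ~1.082s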
Dec 13 14:32:29.147086 env[1435]: time="2024-12-13T14:32:29.147030157Z" level=info msg="shim disconnected" id=df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a Dec 13 14:32:29.147462 env[1435]: time="2024-12-13T14:32:29.147438071Z" level=warning msg="cleaning up after shim disconnected" id=df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a namespace=k8s.io Dec 13 14:32:29.147585 env[1435]: time="2024-12-13T14:32:29.147557775Z" level=info msg="cleaning up dead shim" Dec 13 14:32:29.175869 env[1435]: time="2024-12-13T14:32:29.175806832Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2188 runtime=io.containerd.runc.v2\n" Dec 13 14:32:29.719755 env[1435]: time="2024-12-13T14:32:29.719700342Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:32:29.777297 env[1435]: time="2024-12-13T14:32:29.777228289Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\"" Dec 13 14:32:29.778480 env[1435]: time="2024-12-13T14:32:29.778439330Z" level=info msg="StartContainer for \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\"" Dec 13 14:32:29.800155 systemd[1]: Started cri-containerd-438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71.scope. Dec 13 14:32:29.848409 systemd[1]: cri-containerd-438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71.scope: Deactivated successfully. Dec 13 14:32:29.853292 env[1435]: time="2024-12-13T14:32:29.853235162Z" level=info msg="StartContainer for \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\" returns successfully" Dec 13 14:32:29.911232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a-rootfs.mount: Deactivated successfully. Dec 13 14:32:29.911381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398500945.mount: Deactivated successfully. 
Dec 13 14:32:30.012553 kubelet[1869]: E1213 14:32:30.012429 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:30.363526 env[1435]: time="2024-12-13T14:32:30.363383413Z" level=info msg="shim disconnected" id=438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71 Dec 13 14:32:30.363526 env[1435]: time="2024-12-13T14:32:30.363445515Z" level=warning msg="cleaning up after shim disconnected" id=438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71 namespace=k8s.io Dec 13 14:32:30.363526 env[1435]: time="2024-12-13T14:32:30.363456415Z" level=info msg="cleaning up dead shim" Dec 13 14:32:30.373082 env[1435]: time="2024-12-13T14:32:30.373039731Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2244 runtime=io.containerd.runc.v2\n" Dec 13 14:32:30.377811 env[1435]: time="2024-12-13T14:32:30.377769787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:30.389703 env[1435]: time="2024-12-13T14:32:30.389652479Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:30.394048 env[1435]: time="2024-12-13T14:32:30.394009923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:30.398976 env[1435]: time="2024-12-13T14:32:30.398943785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:30.399353 env[1435]: time="2024-12-13T14:32:30.399322998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:32:30.401586 env[1435]: time="2024-12-13T14:32:30.401538671Z" level=info msg="CreateContainer within sandbox \"fe391936e8118815015aa9eaa3cf0ae5e36508520df085d7646b9ed32ce502ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:32:30.429543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998550689.mount: Deactivated successfully. Dec 13 14:32:30.436470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361859244.mount: Deactivated successfully. Dec 13 14:32:30.450486 env[1435]: time="2024-12-13T14:32:30.450435883Z" level=info msg="CreateContainer within sandbox \"fe391936e8118815015aa9eaa3cf0ae5e36508520df085d7646b9ed32ce502ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5828c0ea6fa85b505b06787b55a7f6d4a7666328594c041127e5c62955ff96f5\"" Dec 13 14:32:30.451120 env[1435]: time="2024-12-13T14:32:30.451066104Z" level=info msg="StartContainer for \"5828c0ea6fa85b505b06787b55a7f6d4a7666328594c041127e5c62955ff96f5\"" Dec 13 14:32:30.479020 systemd[1]: Started cri-containerd-5828c0ea6fa85b505b06787b55a7f6d4a7666328594c041127e5c62955ff96f5.scope. 
Dec 13 14:32:30.518750 env[1435]: time="2024-12-13T14:32:30.518698434Z" level=info msg="StartContainer for \"5828c0ea6fa85b505b06787b55a7f6d4a7666328594c041127e5c62955ff96f5\" returns successfully" Dec 13 14:32:30.723736 env[1435]: time="2024-12-13T14:32:30.723684493Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:32:30.755220 kubelet[1869]: I1213 14:32:30.754919 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rpd2n" podStartSLOduration=7.246937048 podStartE2EDuration="34.754858821s" podCreationTimestamp="2024-12-13 14:31:56 +0000 UTC" firstStartedPulling="2024-12-13 14:32:02.891727136 +0000 UTC m=+7.299518163" lastFinishedPulling="2024-12-13 14:32:30.399648909 +0000 UTC m=+34.807439936" observedRunningTime="2024-12-13 14:32:30.73114684 +0000 UTC m=+35.138937967" watchObservedRunningTime="2024-12-13 14:32:30.754858821 +0000 UTC m=+35.162649848" Dec 13 14:32:30.766370 env[1435]: time="2024-12-13T14:32:30.766313999Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\"" Dec 13 14:32:30.767339 env[1435]: time="2024-12-13T14:32:30.767304132Z" level=info msg="StartContainer for \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\"" Dec 13 14:32:30.788245 systemd[1]: Started cri-containerd-c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931.scope. Dec 13 14:32:30.821784 systemd[1]: cri-containerd-c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931.scope: Deactivated successfully. 
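The pod_startup_latency_tracker entry above for kube-proxy-rpd2n appears to satisfy podStartSLOduration = podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling): 34.755 s - 27.508 s ≈ 7.247 s, and the cilium-6d5lw and nginx entries later in the log follow the same relation. A minimal sketch re-deriving the figures from the logged timestamps (values copied from the entry, nanoseconds truncated to microseconds):

from datetime import datetime, timezone

def ts(value):
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

# Figures copied from the kube-proxy-rpd2n entry above.
created          = ts("2024-12-13 14:31:56.000000")  # podCreationTimestamp
first_pull_start = ts("2024-12-13 14:32:02.891727")  # firstStartedPulling
last_pull_end    = ts("2024-12-13 14:32:30.399648")  # lastFinishedPulling
observed_running = ts("2024-12-13 14:32:30.754858")  # watchObservedRunningTime

e2e  = (observed_running - created).total_seconds()        # ~34.755s = podStartE2EDuration
pull = (last_pull_end - first_pull_start).total_seconds()  # ~27.508s spent pulling images
slo  = e2e - pull                                          # ~7.247s  = podStartSLOduration

print(f"e2e={e2e:.3f}s pull={pull:.3f}s slo={slo:.3f}s")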
Dec 13 14:32:30.823864 env[1435]: time="2024-12-13T14:32:30.823755493Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9868b9d0_1a78_437e_a8a1_59156a49a5f1.slice/cri-containerd-c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931.scope/memory.events\": no such file or directory" Dec 13 14:32:30.831156 env[1435]: time="2024-12-13T14:32:30.831112936Z" level=info msg="StartContainer for \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\" returns successfully" Dec 13 14:32:30.940740 env[1435]: time="2024-12-13T14:32:30.940678149Z" level=info msg="shim disconnected" id=c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931 Dec 13 14:32:30.941021 env[1435]: time="2024-12-13T14:32:30.940833954Z" level=warning msg="cleaning up after shim disconnected" id=c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931 namespace=k8s.io Dec 13 14:32:30.941021 env[1435]: time="2024-12-13T14:32:30.940854854Z" level=info msg="cleaning up dead shim" Dec 13 14:32:30.949059 env[1435]: time="2024-12-13T14:32:30.948998623Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2463 runtime=io.containerd.runc.v2\n" Dec 13 14:32:31.012988 kubelet[1869]: E1213 14:32:31.012829 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:31.730881 env[1435]: time="2024-12-13T14:32:31.730820992Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:32:31.841839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850541006.mount: Deactivated successfully. Dec 13 14:32:31.988965 env[1435]: time="2024-12-13T14:32:31.988801180Z" level=info msg="CreateContainer within sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\"" Dec 13 14:32:31.989916 env[1435]: time="2024-12-13T14:32:31.989870114Z" level=info msg="StartContainer for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\"" Dec 13 14:32:32.013699 kubelet[1869]: E1213 14:32:32.013648 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:32.018277 systemd[1]: Started cri-containerd-f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023.scope. Dec 13 14:32:32.062371 env[1435]: time="2024-12-13T14:32:32.062297391Z" level=info msg="StartContainer for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" returns successfully" Dec 13 14:32:32.098489 systemd[1]: run-containerd-runc-k8s.io-f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023-runc.omRVRK.mount: Deactivated successfully. 
Dec 13 14:32:32.217072 kubelet[1869]: I1213 14:32:32.217035 1869 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:32:32.748974 kubelet[1869]: I1213 14:32:32.748934 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6d5lw" podStartSLOduration=26.925378435 podStartE2EDuration="36.748892884s" podCreationTimestamp="2024-12-13 14:31:56 +0000 UTC" firstStartedPulling="2024-12-13 14:32:02.817463335 +0000 UTC m=+7.225254362" lastFinishedPulling="2024-12-13 14:32:12.640977784 +0000 UTC m=+17.048768811" observedRunningTime="2024-12-13 14:32:32.748502872 +0000 UTC m=+37.156293999" watchObservedRunningTime="2024-12-13 14:32:32.748892884 +0000 UTC m=+37.156683911" Dec 13 14:32:32.863600 kernel: Initializing XFRM netlink socket Dec 13 14:32:33.014126 kubelet[1869]: E1213 14:32:33.013991 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:34.015225 kubelet[1869]: E1213 14:32:34.015165 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:34.499173 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:32:34.498524 systemd-networkd[1589]: cilium_host: Link UP Dec 13 14:32:34.500016 systemd-networkd[1589]: cilium_net: Link UP Dec 13 14:32:34.500024 systemd-networkd[1589]: cilium_net: Gained carrier Dec 13 14:32:34.500227 systemd-networkd[1589]: cilium_host: Gained carrier Dec 13 14:32:34.502806 systemd-networkd[1589]: cilium_host: Gained IPv6LL Dec 13 14:32:34.618308 systemd-networkd[1589]: cilium_vxlan: Link UP Dec 13 14:32:34.618318 systemd-networkd[1589]: cilium_vxlan: Gained carrier Dec 13 14:32:34.985783 systemd-networkd[1589]: cilium_net: Gained IPv6LL Dec 13 14:32:35.016055 kubelet[1869]: E1213 14:32:35.015990 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:35.092612 kernel: NET: Registered PF_ALG protocol family Dec 13 14:32:35.986418 kubelet[1869]: E1213 14:32:35.986303 1869 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:36.016857 kubelet[1869]: E1213 14:32:36.016787 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:36.082713 systemd-networkd[1589]: lxc_health: Link UP Dec 13 14:32:36.092367 systemd-networkd[1589]: cilium_vxlan: Gained IPv6LL Dec 13 14:32:36.097781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:32:36.098357 systemd-networkd[1589]: lxc_health: Gained carrier Dec 13 14:32:37.017546 kubelet[1869]: E1213 14:32:37.017491 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:37.369910 systemd-networkd[1589]: lxc_health: Gained IPv6LL Dec 13 14:32:38.018232 kubelet[1869]: E1213 14:32:38.018196 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:38.860671 kubelet[1869]: I1213 14:32:38.860619 1869 topology_manager.go:215] "Topology Admit Handler" podUID="61dd1392-dfa5-4e82-8312-6648aa646f7c" podNamespace="default" podName="nginx-deployment-6d5f899847-2nx8d" Dec 13 14:32:38.868455 systemd[1]: Created slice kubepods-besteffort-pod61dd1392_dfa5_4e82_8312_6648aa646f7c.slice. 
Dec 13 14:32:38.921022 kubelet[1869]: I1213 14:32:38.920969 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frg9w\" (UniqueName: \"kubernetes.io/projected/61dd1392-dfa5-4e82-8312-6648aa646f7c-kube-api-access-frg9w\") pod \"nginx-deployment-6d5f899847-2nx8d\" (UID: \"61dd1392-dfa5-4e82-8312-6648aa646f7c\") " pod="default/nginx-deployment-6d5f899847-2nx8d" Dec 13 14:32:39.019102 kubelet[1869]: E1213 14:32:39.019045 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:39.173249 env[1435]: time="2024-12-13T14:32:39.173186559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2nx8d,Uid:61dd1392-dfa5-4e82-8312-6648aa646f7c,Namespace:default,Attempt:0,}" Dec 13 14:32:39.466195 systemd-networkd[1589]: lxc9c454488d0e9: Link UP Dec 13 14:32:39.473601 kernel: eth0: renamed from tmp4926e Dec 13 14:32:39.486817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:32:39.486961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9c454488d0e9: link becomes ready Dec 13 14:32:39.499063 systemd-networkd[1589]: lxc9c454488d0e9: Gained carrier Dec 13 14:32:40.020103 kubelet[1869]: E1213 14:32:40.020046 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:40.569152 env[1435]: time="2024-12-13T14:32:40.569044204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:40.569152 env[1435]: time="2024-12-13T14:32:40.569111806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:40.569152 env[1435]: time="2024-12-13T14:32:40.569125707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:40.569959 env[1435]: time="2024-12-13T14:32:40.569883026Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4926e0150ab7165485ed2d26c0aa16db334b285c3b5ac56bb0679fdcac9d371d pid=2996 runtime=io.containerd.runc.v2 Dec 13 14:32:40.598748 systemd[1]: run-containerd-runc-k8s.io-4926e0150ab7165485ed2d26c0aa16db334b285c3b5ac56bb0679fdcac9d371d-runc.EwBvkR.mount: Deactivated successfully. Dec 13 14:32:40.604551 systemd[1]: Started cri-containerd-4926e0150ab7165485ed2d26c0aa16db334b285c3b5ac56bb0679fdcac9d371d.scope. 
Dec 13 14:32:40.645180 env[1435]: time="2024-12-13T14:32:40.645122951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2nx8d,Uid:61dd1392-dfa5-4e82-8312-6648aa646f7c,Namespace:default,Attempt:0,} returns sandbox id \"4926e0150ab7165485ed2d26c0aa16db334b285c3b5ac56bb0679fdcac9d371d\"" Dec 13 14:32:40.647327 env[1435]: time="2024-12-13T14:32:40.647295307Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:32:40.761907 systemd-networkd[1589]: lxc9c454488d0e9: Gained IPv6LL Dec 13 14:32:41.021681 kubelet[1869]: E1213 14:32:41.021622 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:42.022372 kubelet[1869]: E1213 14:32:42.022311 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:43.022752 kubelet[1869]: E1213 14:32:43.022690 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:44.023042 kubelet[1869]: E1213 14:32:44.022975 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:45.023696 kubelet[1869]: E1213 14:32:45.023629 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:46.024741 kubelet[1869]: E1213 14:32:46.024654 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:46.756312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395553704.mount: Deactivated successfully. Dec 13 14:32:47.025010 kubelet[1869]: E1213 14:32:47.024868 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:48.025407 kubelet[1869]: E1213 14:32:48.025270 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:49.026379 kubelet[1869]: E1213 14:32:49.026285 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:49.591938 env[1435]: time="2024-12-13T14:32:49.591876616Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:49.596277 env[1435]: time="2024-12-13T14:32:49.596231406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:49.601386 env[1435]: time="2024-12-13T14:32:49.601345811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:49.604889 env[1435]: time="2024-12-13T14:32:49.604852484Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:49.605565 env[1435]: time="2024-12-13T14:32:49.605531998Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:32:49.607656 env[1435]: time="2024-12-13T14:32:49.607624541Z" level=info msg="CreateContainer within sandbox \"4926e0150ab7165485ed2d26c0aa16db334b285c3b5ac56bb0679fdcac9d371d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:32:49.648514 env[1435]: time="2024-12-13T14:32:49.648454885Z" level=info msg="CreateContainer within sandbox \"4926e0150ab7165485ed2d26c0aa16db334b285c3b5ac56bb0679fdcac9d371d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a0e551d941567d003deb5f6187b0be4c72d1f42287a36bbe7f20cdc0a70814c6\"" Dec 13 14:32:49.650785 env[1435]: time="2024-12-13T14:32:49.650730932Z" level=info msg="StartContainer for \"a0e551d941567d003deb5f6187b0be4c72d1f42287a36bbe7f20cdc0a70814c6\"" Dec 13 14:32:49.680779 systemd[1]: run-containerd-runc-k8s.io-a0e551d941567d003deb5f6187b0be4c72d1f42287a36bbe7f20cdc0a70814c6-runc.J4Fnde.mount: Deactivated successfully. Dec 13 14:32:49.687334 systemd[1]: Started cri-containerd-a0e551d941567d003deb5f6187b0be4c72d1f42287a36bbe7f20cdc0a70814c6.scope. Dec 13 14:32:49.718191 env[1435]: time="2024-12-13T14:32:49.718130926Z" level=info msg="StartContainer for \"a0e551d941567d003deb5f6187b0be4c72d1f42287a36bbe7f20cdc0a70814c6\" returns successfully" Dec 13 14:32:50.027425 kubelet[1869]: E1213 14:32:50.027375 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:51.028126 kubelet[1869]: E1213 14:32:51.028059 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:52.028956 kubelet[1869]: E1213 14:32:52.028897 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:53.029891 kubelet[1869]: E1213 14:32:53.029839 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:54.030468 kubelet[1869]: E1213 14:32:54.030411 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:55.030607 kubelet[1869]: E1213 14:32:55.030532 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:55.986314 kubelet[1869]: E1213 14:32:55.986252 1869 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:56.031634 kubelet[1869]: E1213 14:32:56.031589 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:57.032361 kubelet[1869]: E1213 14:32:57.032295 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:57.317455 kubelet[1869]: I1213 14:32:57.317327 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-2nx8d" podStartSLOduration=10.35818209 podStartE2EDuration="19.317276802s" podCreationTimestamp="2024-12-13 14:32:38 +0000 UTC" firstStartedPulling="2024-12-13 14:32:40.646810294 +0000 UTC m=+45.054601321" lastFinishedPulling="2024-12-13 14:32:49.605904906 +0000 UTC m=+54.013696033" observedRunningTime="2024-12-13 14:32:49.777259648 +0000 UTC m=+54.185050775" watchObservedRunningTime="2024-12-13 14:32:57.317276802 +0000 UTC 
m=+61.725067929" Dec 13 14:32:57.317706 kubelet[1869]: I1213 14:32:57.317467 1869 topology_manager.go:215] "Topology Admit Handler" podUID="75ca3e43-37ae-49c7-8220-3587ac482741" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:32:57.322970 systemd[1]: Created slice kubepods-besteffort-pod75ca3e43_37ae_49c7_8220_3587ac482741.slice. Dec 13 14:32:57.344210 kubelet[1869]: I1213 14:32:57.344169 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjgt\" (UniqueName: \"kubernetes.io/projected/75ca3e43-37ae-49c7-8220-3587ac482741-kube-api-access-dsjgt\") pod \"nfs-server-provisioner-0\" (UID: \"75ca3e43-37ae-49c7-8220-3587ac482741\") " pod="default/nfs-server-provisioner-0" Dec 13 14:32:57.344210 kubelet[1869]: I1213 14:32:57.344225 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/75ca3e43-37ae-49c7-8220-3587ac482741-data\") pod \"nfs-server-provisioner-0\" (UID: \"75ca3e43-37ae-49c7-8220-3587ac482741\") " pod="default/nfs-server-provisioner-0" Dec 13 14:32:57.627593 env[1435]: time="2024-12-13T14:32:57.627429081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:75ca3e43-37ae-49c7-8220-3587ac482741,Namespace:default,Attempt:0,}" Dec 13 14:32:57.708150 systemd-networkd[1589]: lxc806609b1e712: Link UP Dec 13 14:32:57.718994 kernel: eth0: renamed from tmpcd2aa Dec 13 14:32:57.736202 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:32:57.736342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc806609b1e712: link becomes ready Dec 13 14:32:57.737971 systemd-networkd[1589]: lxc806609b1e712: Gained carrier Dec 13 14:32:57.943099 env[1435]: time="2024-12-13T14:32:57.942791451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:57.943099 env[1435]: time="2024-12-13T14:32:57.942847552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:57.943099 env[1435]: time="2024-12-13T14:32:57.942864852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:57.943615 env[1435]: time="2024-12-13T14:32:57.943547664Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd2aa34247d8158a9428cec56d0dfd30bc772fe0129e10573cedcb3f3b325866 pid=3127 runtime=io.containerd.runc.v2 Dec 13 14:32:57.972288 systemd[1]: Started cri-containerd-cd2aa34247d8158a9428cec56d0dfd30bc772fe0129e10573cedcb3f3b325866.scope. 
Dec 13 14:32:58.025399 env[1435]: time="2024-12-13T14:32:58.025343075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:75ca3e43-37ae-49c7-8220-3587ac482741,Namespace:default,Attempt:0,} returns sandbox id \"cd2aa34247d8158a9428cec56d0dfd30bc772fe0129e10573cedcb3f3b325866\"" Dec 13 14:32:58.027354 env[1435]: time="2024-12-13T14:32:58.027308008Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:32:58.033272 kubelet[1869]: E1213 14:32:58.033217 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:58.456657 systemd[1]: run-containerd-runc-k8s.io-cd2aa34247d8158a9428cec56d0dfd30bc772fe0129e10573cedcb3f3b325866-runc.dqO1U4.mount: Deactivated successfully. Dec 13 14:32:59.034129 kubelet[1869]: E1213 14:32:59.034070 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:32:59.193868 systemd-networkd[1589]: lxc806609b1e712: Gained IPv6LL Dec 13 14:33:00.035313 kubelet[1869]: E1213 14:33:00.035241 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:00.923790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007276817.mount: Deactivated successfully. Dec 13 14:33:01.036156 kubelet[1869]: E1213 14:33:01.036103 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:02.037233 kubelet[1869]: E1213 14:33:02.037183 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:03.037446 kubelet[1869]: E1213 14:33:03.037387 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:04.037706 kubelet[1869]: E1213 14:33:04.037648 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:05.038564 kubelet[1869]: E1213 14:33:05.038510 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:06.038816 kubelet[1869]: E1213 14:33:06.038751 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:06.137181 env[1435]: time="2024-12-13T14:33:06.137108741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:06.229032 env[1435]: time="2024-12-13T14:33:06.228972871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:06.234278 env[1435]: time="2024-12-13T14:33:06.234222947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:06.281248 env[1435]: time="2024-12-13T14:33:06.281187327Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:06.282089 env[1435]: time="2024-12-13T14:33:06.282047740Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:33:06.284783 env[1435]: time="2024-12-13T14:33:06.284748379Z" level=info msg="CreateContainer within sandbox \"cd2aa34247d8158a9428cec56d0dfd30bc772fe0129e10573cedcb3f3b325866\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:33:06.496353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510590018.mount: Deactivated successfully. Dec 13 14:33:06.640375 env[1435]: time="2024-12-13T14:33:06.640295727Z" level=info msg="CreateContainer within sandbox \"cd2aa34247d8158a9428cec56d0dfd30bc772fe0129e10573cedcb3f3b325866\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d26062b1cca14d9f3ad1c42ed65b6c53fa42e73e212cf5daf042c344e0948d43\"" Dec 13 14:33:06.641321 env[1435]: time="2024-12-13T14:33:06.641206840Z" level=info msg="StartContainer for \"d26062b1cca14d9f3ad1c42ed65b6c53fa42e73e212cf5daf042c344e0948d43\"" Dec 13 14:33:06.665222 systemd[1]: Started cri-containerd-d26062b1cca14d9f3ad1c42ed65b6c53fa42e73e212cf5daf042c344e0948d43.scope. Dec 13 14:33:06.700543 env[1435]: time="2024-12-13T14:33:06.700480999Z" level=info msg="StartContainer for \"d26062b1cca14d9f3ad1c42ed65b6c53fa42e73e212cf5daf042c344e0948d43\" returns successfully" Dec 13 14:33:06.823011 kubelet[1869]: I1213 14:33:06.822745 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.566859419 podStartE2EDuration="9.822693468s" podCreationTimestamp="2024-12-13 14:32:57 +0000 UTC" firstStartedPulling="2024-12-13 14:32:58.026647797 +0000 UTC m=+62.434438824" lastFinishedPulling="2024-12-13 14:33:06.282481846 +0000 UTC m=+70.690272873" observedRunningTime="2024-12-13 14:33:06.822315663 +0000 UTC m=+71.230106790" watchObservedRunningTime="2024-12-13 14:33:06.822693468 +0000 UTC m=+71.230484495" Dec 13 14:33:07.039428 kubelet[1869]: E1213 14:33:07.039368 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:08.039786 kubelet[1869]: E1213 14:33:08.039722 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:09.040362 kubelet[1869]: E1213 14:33:09.040299 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:10.040600 kubelet[1869]: E1213 14:33:10.040535 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:11.041596 kubelet[1869]: E1213 14:33:11.041505 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:12.042417 kubelet[1869]: E1213 14:33:12.042353 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:13.043453 kubelet[1869]: E1213 14:33:13.043392 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:33:14.044155 kubelet[1869]: E1213 14:33:14.044093 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:15.044399 kubelet[1869]: E1213 14:33:15.044343 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:15.986589 kubelet[1869]: E1213 14:33:15.986525 1869 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:16.044785 kubelet[1869]: E1213 14:33:16.044726 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:16.354628 kubelet[1869]: I1213 14:33:16.354049 1869 topology_manager.go:215] "Topology Admit Handler" podUID="08c1c443-cae8-40e3-b144-3fa4d19e4c79" podNamespace="default" podName="test-pod-1" Dec 13 14:33:16.360813 systemd[1]: Created slice kubepods-besteffort-pod08c1c443_cae8_40e3_b144_3fa4d19e4c79.slice. Dec 13 14:33:16.456163 kubelet[1869]: I1213 14:33:16.456101 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0c1ab4c1-7611-4d61-86d1-4e434b9eeac4\" (UniqueName: \"kubernetes.io/nfs/08c1c443-cae8-40e3-b144-3fa4d19e4c79-pvc-0c1ab4c1-7611-4d61-86d1-4e434b9eeac4\") pod \"test-pod-1\" (UID: \"08c1c443-cae8-40e3-b144-3fa4d19e4c79\") " pod="default/test-pod-1" Dec 13 14:33:16.456163 kubelet[1869]: I1213 14:33:16.456170 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrmjr\" (UniqueName: \"kubernetes.io/projected/08c1c443-cae8-40e3-b144-3fa4d19e4c79-kube-api-access-xrmjr\") pod \"test-pod-1\" (UID: \"08c1c443-cae8-40e3-b144-3fa4d19e4c79\") " pod="default/test-pod-1" Dec 13 14:33:16.606614 kernel: FS-Cache: Loaded Dec 13 14:33:16.729283 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:33:16.729451 kernel: RPC: Registered udp transport module. Dec 13 14:33:16.729490 kernel: RPC: Registered tcp transport module. Dec 13 14:33:16.734652 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:33:16.821607 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:33:17.002658 kernel: NFS: Registering the id_resolver key type Dec 13 14:33:17.002834 kernel: Key type id_resolver registered Dec 13 14:33:17.002862 kernel: Key type id_legacy registered Dec 13 14:33:17.045364 kubelet[1869]: E1213 14:33:17.045285 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:18.046142 kubelet[1869]: E1213 14:33:18.046093 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:18.183027 nfsidmap[3249]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-01993ae768' Dec 13 14:33:18.204899 nfsidmap[3250]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-01993ae768' Dec 13 14:33:18.464296 env[1435]: time="2024-12-13T14:33:18.464232866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:08c1c443-cae8-40e3-b144-3fa4d19e4c79,Namespace:default,Attempt:0,}" Dec 13 14:33:19.046861 kubelet[1869]: E1213 14:33:19.046783 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:20.047740 kubelet[1869]: E1213 14:33:20.047673 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:21.048435 kubelet[1869]: E1213 14:33:21.048375 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:21.171929 systemd-networkd[1589]: lxce0851b3061aa: Link UP Dec 13 14:33:21.188129 kernel: eth0: renamed from tmp5c1d8 Dec 13 14:33:21.203228 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:33:21.203378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce0851b3061aa: link becomes ready Dec 13 14:33:21.207536 systemd-networkd[1589]: lxce0851b3061aa: Gained carrier Dec 13 14:33:21.849470 env[1435]: time="2024-12-13T14:33:21.849391902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:21.849470 env[1435]: time="2024-12-13T14:33:21.849430703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:21.849470 env[1435]: time="2024-12-13T14:33:21.849444203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:21.850026 env[1435]: time="2024-12-13T14:33:21.849860307Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c1d87658eb16bca8bd6f0b127b9d18ee2a6bc1cbd86b9ae32e581c0cc5841c8 pid=3277 runtime=io.containerd.runc.v2 Dec 13 14:33:21.876275 systemd[1]: Started cri-containerd-5c1d87658eb16bca8bd6f0b127b9d18ee2a6bc1cbd86b9ae32e581c0cc5841c8.scope. 
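The nfsidmap lines above show NFSv4 owner strings of the form name@domain being rejected because the domain part (nfs-server-provisioner.default.svc.cluster.local) does not match the node's local id-mapping domain (3.6-a-01993ae768, which looks hostname-derived since no Domain is configured). Below is a minimal, illustrative sketch of that comparison, not the actual libnfsidmap code; the helper name mapsIntoDomain is made up for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// mapsIntoDomain mirrors the check implied by the nfsidmap messages above:
// an NFSv4 principal "name@domain" is only translated to a local uid/gid
// when its domain part equals the node's id-mapping domain.
func mapsIntoDomain(principal, localDomain string) (string, bool) {
	name, domain, ok := strings.Cut(principal, "@")
	if !ok || !strings.EqualFold(domain, localDomain) {
		return "", false // falls back to the anonymous (nobody) ids
	}
	return name, true
}

func main() {
	principal := "root@nfs-server-provisioner.default.svc.cluster.local"
	localDomain := "3.6-a-01993ae768" // domain reported in the log above

	if name, ok := mapsIntoDomain(principal, localDomain); ok {
		fmt.Printf("map %q to local user %q\n", principal, name)
	} else {
		fmt.Printf("%q does not map into domain %q\n", principal, localDomain)
	}
}
```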
Dec 13 14:33:21.925058 env[1435]: time="2024-12-13T14:33:21.924962449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:08c1c443-cae8-40e3-b144-3fa4d19e4c79,Namespace:default,Attempt:0,} returns sandbox id \"5c1d87658eb16bca8bd6f0b127b9d18ee2a6bc1cbd86b9ae32e581c0cc5841c8\"" Dec 13 14:33:21.927960 env[1435]: time="2024-12-13T14:33:21.927928982Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:33:22.048785 kubelet[1869]: E1213 14:33:22.048713 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:22.502003 env[1435]: time="2024-12-13T14:33:22.501947028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:22.507706 env[1435]: time="2024-12-13T14:33:22.507645891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:22.511884 env[1435]: time="2024-12-13T14:33:22.511843237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:22.516552 env[1435]: time="2024-12-13T14:33:22.516500789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:22.518047 env[1435]: time="2024-12-13T14:33:22.518000805Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:33:22.520984 env[1435]: time="2024-12-13T14:33:22.520949538Z" level=info msg="CreateContainer within sandbox \"5c1d87658eb16bca8bd6f0b127b9d18ee2a6bc1cbd86b9ae32e581c0cc5841c8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:33:22.559749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813582785.mount: Deactivated successfully. Dec 13 14:33:22.583108 env[1435]: time="2024-12-13T14:33:22.583044223Z" level=info msg="CreateContainer within sandbox \"5c1d87658eb16bca8bd6f0b127b9d18ee2a6bc1cbd86b9ae32e581c0cc5841c8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"fed6c6983ba9932cb4dabb205797a139ea4bfb5a960d46b6df0b40855427152b\"" Dec 13 14:33:22.583747 env[1435]: time="2024-12-13T14:33:22.583698230Z" level=info msg="StartContainer for \"fed6c6983ba9932cb4dabb205797a139ea4bfb5a960d46b6df0b40855427152b\"" Dec 13 14:33:22.605877 systemd[1]: Started cri-containerd-fed6c6983ba9932cb4dabb205797a139ea4bfb5a960d46b6df0b40855427152b.scope. 
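The transient mount units in these entries (for example var-lib-containerd-tmpmounts-containerd\x2dmount3813582785.mount) use systemd's unit-name escaping for paths: the leading "/" is dropped, the remaining "/" characters become "-", and other special bytes become \xHH. The decoder below is a small illustrative sketch of that reversal, not a call into systemd itself.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's unit-name escaping for path-based
// units: "-" stands for "/", "\xHH" encodes the byte 0xHH, and the leading
// "/" (stripped during escaping) is added back.
func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	unit := `var-lib-containerd-tmpmounts-containerd\x2dmount3813582785.mount`
	fmt.Println(unescapeUnitPath(unit))
	// Prints: /var/lib/containerd/tmpmounts/containerd-mount3813582785
}
```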
Dec 13 14:33:22.637980 env[1435]: time="2024-12-13T14:33:22.637916528Z" level=info msg="StartContainer for \"fed6c6983ba9932cb4dabb205797a139ea4bfb5a960d46b6df0b40855427152b\" returns successfully" Dec 13 14:33:22.681727 systemd-networkd[1589]: lxce0851b3061aa: Gained IPv6LL Dec 13 14:33:22.863832 kubelet[1869]: I1213 14:33:22.863679 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=24.272040178 podStartE2EDuration="24.863632118s" podCreationTimestamp="2024-12-13 14:32:58 +0000 UTC" firstStartedPulling="2024-12-13 14:33:21.927236974 +0000 UTC m=+86.335028001" lastFinishedPulling="2024-12-13 14:33:22.518828814 +0000 UTC m=+86.926619941" observedRunningTime="2024-12-13 14:33:22.863337915 +0000 UTC m=+87.271128942" watchObservedRunningTime="2024-12-13 14:33:22.863632118 +0000 UTC m=+87.271423145" Dec 13 14:33:23.049756 kubelet[1869]: E1213 14:33:23.049698 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:24.050618 kubelet[1869]: E1213 14:33:24.050532 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:25.051688 kubelet[1869]: E1213 14:33:25.051633 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:26.052600 kubelet[1869]: E1213 14:33:26.052517 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:27.053346 kubelet[1869]: E1213 14:33:27.053277 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:28.054196 kubelet[1869]: E1213 14:33:28.054132 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:28.684786 systemd[1]: run-containerd-runc-k8s.io-f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023-runc.6MKSGt.mount: Deactivated successfully. Dec 13 14:33:28.703627 env[1435]: time="2024-12-13T14:33:28.703547478Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:33:28.710040 env[1435]: time="2024-12-13T14:33:28.709997943Z" level=info msg="StopContainer for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" with timeout 2 (s)" Dec 13 14:33:28.710294 env[1435]: time="2024-12-13T14:33:28.710265546Z" level=info msg="Stop container \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" with signal terminated" Dec 13 14:33:28.717055 systemd-networkd[1589]: lxc_health: Link DOWN Dec 13 14:33:28.717063 systemd-networkd[1589]: lxc_health: Lost carrier Dec 13 14:33:28.740908 systemd[1]: cri-containerd-f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023.scope: Deactivated successfully. Dec 13 14:33:28.741202 systemd[1]: cri-containerd-f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023.scope: Consumed 6.616s CPU time. Dec 13 14:33:28.762695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023-rootfs.mount: Deactivated successfully. 
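The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") fires after the fs-change event for the removal of 05-cilium.conf leaves the CNI configuration directory without a usable config, and it is what later flips the node to NotReady further down. The snippet below is a hedged sketch of the kind of directory scan involved, not containerd's actual libcni loading path.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether the CNI configuration directory contains at
// least one network config file, mirroring the condition behind the
// "no network config found in /etc/cni/net.d" error above.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	if err != nil || !ok {
		fmt.Println("cni plugin not initialized: no network config found")
		return
	}
	fmt.Println("CNI configuration present")
}
```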
Dec 13 14:33:29.055306 kubelet[1869]: E1213 14:33:29.055165 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:30.428183 kubelet[1869]: E1213 14:33:30.055533 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:30.719614 env[1435]: time="2024-12-13T14:33:30.719405377Z" level=info msg="Kill container \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\"" Dec 13 14:33:31.055912 kubelet[1869]: E1213 14:33:31.055774 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:32.056830 kubelet[1869]: E1213 14:33:32.056771 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:32.447269 kubelet[1869]: I1213 14:33:32.447220 1869 topology_manager.go:215] "Topology Admit Handler" podUID="5d00467b-a270-47b8-b408-0c69780e5c56" podNamespace="kube-system" podName="cilium-operator-5cc964979-pft26" Dec 13 14:33:32.454060 systemd[1]: Created slice kubepods-besteffort-pod5d00467b_a270_47b8_b408_0c69780e5c56.slice. Dec 13 14:33:32.568634 kubelet[1869]: I1213 14:33:32.568541 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d00467b-a270-47b8-b408-0c69780e5c56-cilium-config-path\") pod \"cilium-operator-5cc964979-pft26\" (UID: \"5d00467b-a270-47b8-b408-0c69780e5c56\") " pod="kube-system/cilium-operator-5cc964979-pft26" Dec 13 14:33:32.568893 kubelet[1869]: I1213 14:33:32.568719 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdvn\" (UniqueName: \"kubernetes.io/projected/5d00467b-a270-47b8-b408-0c69780e5c56-kube-api-access-qhdvn\") pod \"cilium-operator-5cc964979-pft26\" (UID: \"5d00467b-a270-47b8-b408-0c69780e5c56\") " pod="kube-system/cilium-operator-5cc964979-pft26" Dec 13 14:33:32.761455 env[1435]: time="2024-12-13T14:33:32.758815280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pft26,Uid:5d00467b-a270-47b8-b408-0c69780e5c56,Namespace:kube-system,Attempt:0,}" Dec 13 14:33:33.026848 kubelet[1869]: E1213 14:33:33.026719 1869 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:33:33.057027 kubelet[1869]: E1213 14:33:33.056962 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:34.058161 kubelet[1869]: E1213 14:33:34.058100 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:35.059211 kubelet[1869]: E1213 14:33:35.059151 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:35.986162 kubelet[1869]: E1213 14:33:35.986102 1869 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:36.059920 kubelet[1869]: E1213 14:33:36.059858 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:37.060500 kubelet[1869]: E1213 14:33:37.060442 1869 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:37.209011 env[1435]: time="2024-12-13T14:33:37.208930606Z" level=error msg="get state for f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023" error="context deadline exceeded: unknown" Dec 13 14:33:37.209540 env[1435]: time="2024-12-13T14:33:37.209501612Z" level=warning msg="unknown status" status=0 Dec 13 14:33:38.028144 kubelet[1869]: E1213 14:33:38.028101 1869 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:33:38.060860 kubelet[1869]: E1213 14:33:38.060798 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:38.422225 kubelet[1869]: I1213 14:33:38.422188 1869 setters.go:568] "Node became not ready" node="10.200.8.29" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:33:38Z","lastTransitionTime":"2024-12-13T14:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:33:38.742826 env[1435]: time="2024-12-13T14:33:38.742658025Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023,ID:f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023,Pid:2493,ExitStatus:0,ExitedAt:2024-12-13 14:33:28.742217569 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Dec 13 14:33:39.061724 kubelet[1869]: E1213 14:33:39.061562 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:39.886362 env[1435]: time="2024-12-13T14:33:39.886288582Z" level=info msg="TaskExit event &TaskExit{ContainerID:f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023,ID:f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023,Pid:2493,ExitStatus:0,ExitedAt:2024-12-13 14:33:28.742217569 +0000 UTC,XXX_unrecognized:[],}" Dec 13 14:33:40.063339 kubelet[1869]: E1213 14:33:40.063275 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:41.043498 env[1435]: time="2024-12-13T14:33:41.043413634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:41.044037 env[1435]: time="2024-12-13T14:33:41.043454934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:41.044037 env[1435]: time="2024-12-13T14:33:41.043469434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:41.044037 env[1435]: time="2024-12-13T14:33:41.043939838Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b15af9db8d0c39b02dc63e22c90f7fa3b0e50449a0dc0ae87163079b937522d1 pid=3422 runtime=io.containerd.runc.v2 Dec 13 14:33:41.068418 kubelet[1869]: E1213 14:33:41.067854 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:41.073256 systemd[1]: run-containerd-runc-k8s.io-b15af9db8d0c39b02dc63e22c90f7fa3b0e50449a0dc0ae87163079b937522d1-runc.QiUIjy.mount: Deactivated successfully. Dec 13 14:33:41.080055 systemd[1]: Started cri-containerd-b15af9db8d0c39b02dc63e22c90f7fa3b0e50449a0dc0ae87163079b937522d1.scope. Dec 13 14:33:41.134418 env[1435]: time="2024-12-13T14:33:41.134312822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pft26,Uid:5d00467b-a270-47b8-b408-0c69780e5c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"b15af9db8d0c39b02dc63e22c90f7fa3b0e50449a0dc0ae87163079b937522d1\"" Dec 13 14:33:41.136468 env[1435]: time="2024-12-13T14:33:41.136419340Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:33:41.886742 env[1435]: time="2024-12-13T14:33:41.886532346Z" level=error msg="get state for f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023" error="context deadline exceeded: unknown" Dec 13 14:33:41.886742 env[1435]: time="2024-12-13T14:33:41.886582846Z" level=warning msg="unknown status" status=0 Dec 13 14:33:42.068452 kubelet[1869]: E1213 14:33:42.068382 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:42.601213 env[1435]: E1213 14:33:42.601178 1435 exec.go:87] error executing command in container: failed to exec in container: failed to create exec "19481ca21241071c01952e439aecc9747b49dc07476e8f01de5c2e93de1eb208": cannot exec in a deleted state: unknown Dec 13 14:33:42.605892 env[1435]: time="2024-12-13T14:33:42.605844733Z" level=info msg="StopContainer for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" returns successfully" Dec 13 14:33:42.606774 env[1435]: time="2024-12-13T14:33:42.606741540Z" level=info msg="StopPodSandbox for \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\"" Dec 13 14:33:42.606911 env[1435]: time="2024-12-13T14:33:42.606815541Z" level=info msg="Container to stop \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:33:42.606911 env[1435]: time="2024-12-13T14:33:42.606835041Z" level=info msg="Container to stop \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:33:42.606911 env[1435]: time="2024-12-13T14:33:42.606849641Z" level=info msg="Container to stop \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:33:42.606911 env[1435]: time="2024-12-13T14:33:42.606863841Z" level=info msg="Container to stop \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:33:42.606911 env[1435]: 
time="2024-12-13T14:33:42.606877741Z" level=info msg="Container to stop \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:33:42.609633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8-shm.mount: Deactivated successfully. Dec 13 14:33:42.617878 systemd[1]: cri-containerd-30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8.scope: Deactivated successfully. Dec 13 14:33:42.637122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8-rootfs.mount: Deactivated successfully. Dec 13 14:33:42.655014 env[1435]: time="2024-12-13T14:33:42.654954354Z" level=info msg="shim disconnected" id=30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8 Dec 13 14:33:42.655277 env[1435]: time="2024-12-13T14:33:42.655254557Z" level=warning msg="cleaning up after shim disconnected" id=30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8 namespace=k8s.io Dec 13 14:33:42.655386 env[1435]: time="2024-12-13T14:33:42.655371158Z" level=info msg="cleaning up dead shim" Dec 13 14:33:42.655741 env[1435]: time="2024-12-13T14:33:42.655699161Z" level=info msg="shim disconnected" id=de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c Dec 13 14:33:42.655854 env[1435]: time="2024-12-13T14:33:42.655747761Z" level=warning msg="cleaning up after shim disconnected" id=de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c namespace=k8s.io Dec 13 14:33:42.655854 env[1435]: time="2024-12-13T14:33:42.655760261Z" level=info msg="cleaning up dead shim" Dec 13 14:33:42.656243 env[1435]: time="2024-12-13T14:33:42.656196265Z" level=info msg="shim disconnected" id=f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023 Dec 13 14:33:42.656447 env[1435]: time="2024-12-13T14:33:42.656421467Z" level=warning msg="cleaning up after shim disconnected" id=f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023 namespace=k8s.io Dec 13 14:33:42.656585 env[1435]: time="2024-12-13T14:33:42.656548068Z" level=info msg="cleaning up dead shim" Dec 13 14:33:42.679244 env[1435]: time="2024-12-13T14:33:42.679185862Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3476 runtime=io.containerd.runc.v2\n" Dec 13 14:33:42.680599 env[1435]: time="2024-12-13T14:33:42.680526474Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3475 runtime=io.containerd.runc.v2\n" Dec 13 14:33:42.681885 env[1435]: time="2024-12-13T14:33:42.681850585Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3474 runtime=io.containerd.runc.v2\n" Dec 13 14:33:42.682177 env[1435]: time="2024-12-13T14:33:42.682143888Z" level=info msg="TearDown network for sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" successfully" Dec 13 14:33:42.682259 env[1435]: time="2024-12-13T14:33:42.682177288Z" level=info msg="StopPodSandbox for \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" returns successfully" Dec 13 14:33:42.734089 kubelet[1869]: I1213 14:33:42.734046 1869 topology_manager.go:215] "Topology Admit Handler" podUID="9753a578-acb7-4efb-b544-833bae53136b" podNamespace="kube-system" 
podName="cilium-j4bp6" Dec 13 14:33:42.734366 kubelet[1869]: E1213 14:33:42.734344 1869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" containerName="mount-bpf-fs" Dec 13 14:33:42.734366 kubelet[1869]: E1213 14:33:42.734368 1869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" containerName="clean-cilium-state" Dec 13 14:33:42.734511 kubelet[1869]: E1213 14:33:42.734379 1869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" containerName="apply-sysctl-overwrites" Dec 13 14:33:42.734511 kubelet[1869]: E1213 14:33:42.734388 1869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" containerName="cilium-agent" Dec 13 14:33:42.734511 kubelet[1869]: E1213 14:33:42.734396 1869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" containerName="mount-cgroup" Dec 13 14:33:42.734511 kubelet[1869]: I1213 14:33:42.734426 1869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" containerName="cilium-agent" Dec 13 14:33:42.740638 systemd[1]: Created slice kubepods-burstable-pod9753a578_acb7_4efb_b544_833bae53136b.slice. Dec 13 14:33:42.834341 kubelet[1869]: I1213 14:33:42.834274 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-lib-modules\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834341 kubelet[1869]: I1213 14:33:42.834348 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-kernel\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834744 kubelet[1869]: I1213 14:33:42.834376 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-etc-cni-netd\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834744 kubelet[1869]: I1213 14:33:42.834428 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9868b9d0-1a78-437e-a8a1-59156a49a5f1-clustermesh-secrets\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834744 kubelet[1869]: I1213 14:33:42.834561 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-config-path\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834744 kubelet[1869]: I1213 14:33:42.834639 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-net\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834744 kubelet[1869]: I1213 14:33:42.834693 1869 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-run\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.834744 kubelet[1869]: I1213 14:33:42.834732 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-bpf-maps\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835093 kubelet[1869]: I1213 14:33:42.834791 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48c5l\" (UniqueName: \"kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-kube-api-access-48c5l\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835093 kubelet[1869]: I1213 14:33:42.834840 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cni-path\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835093 kubelet[1869]: I1213 14:33:42.834879 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hostproc\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835093 kubelet[1869]: I1213 14:33:42.834925 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-xtables-lock\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835093 kubelet[1869]: I1213 14:33:42.834978 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-cgroup\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835093 kubelet[1869]: I1213 14:33:42.835035 1869 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hubble-tls\") pod \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\" (UID: \"9868b9d0-1a78-437e-a8a1-59156a49a5f1\") " Dec 13 14:33:42.835414 kubelet[1869]: I1213 14:33:42.835167 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-cni-path\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835414 kubelet[1869]: I1213 14:33:42.835207 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-etc-cni-netd\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835414 kubelet[1869]: I1213 14:33:42.835261 1869 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-cilium-run\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835414 kubelet[1869]: I1213 14:33:42.835301 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9753a578-acb7-4efb-b544-833bae53136b-hubble-tls\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835414 kubelet[1869]: I1213 14:33:42.835358 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-hostproc\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835414 kubelet[1869]: I1213 14:33:42.835411 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-host-proc-sys-net\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835766 kubelet[1869]: I1213 14:33:42.835453 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9753a578-acb7-4efb-b544-833bae53136b-cilium-ipsec-secrets\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835766 kubelet[1869]: I1213 14:33:42.835507 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9753a578-acb7-4efb-b544-833bae53136b-cilium-config-path\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835766 kubelet[1869]: I1213 14:33:42.835544 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9753a578-acb7-4efb-b544-833bae53136b-clustermesh-secrets\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835766 kubelet[1869]: I1213 14:33:42.835614 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-host-proc-sys-kernel\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.835766 kubelet[1869]: I1213 14:33:42.835671 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-lib-modules\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.836028 kubelet[1869]: I1213 14:33:42.835708 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-xtables-lock\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.836028 kubelet[1869]: I1213 14:33:42.835764 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgks5\" (UniqueName: \"kubernetes.io/projected/9753a578-acb7-4efb-b544-833bae53136b-kube-api-access-vgks5\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.836028 kubelet[1869]: I1213 14:33:42.835822 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-bpf-maps\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.836028 kubelet[1869]: I1213 14:33:42.835861 1869 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9753a578-acb7-4efb-b544-833bae53136b-cilium-cgroup\") pod \"cilium-j4bp6\" (UID: \"9753a578-acb7-4efb-b544-833bae53136b\") " pod="kube-system/cilium-j4bp6" Dec 13 14:33:42.836028 kubelet[1869]: I1213 14:33:42.834304 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.836366 kubelet[1869]: I1213 14:33:42.836017 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.836366 kubelet[1869]: I1213 14:33:42.836072 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838594 kubelet[1869]: I1213 14:33:42.837510 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cni-path" (OuterVolumeSpecName: "cni-path") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838594 kubelet[1869]: I1213 14:33:42.837552 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838594 kubelet[1869]: I1213 14:33:42.837589 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838594 kubelet[1869]: I1213 14:33:42.837615 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838854 kubelet[1869]: I1213 14:33:42.838707 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hostproc" (OuterVolumeSpecName: "hostproc") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838854 kubelet[1869]: I1213 14:33:42.838755 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.838854 kubelet[1869]: I1213 14:33:42.838782 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:33:42.840287 kubelet[1869]: I1213 14:33:42.840260 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:33:42.844907 systemd[1]: var-lib-kubelet-pods-9868b9d0\x2d1a78\x2d437e\x2da8a1\x2d59156a49a5f1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:33:42.845029 systemd[1]: var-lib-kubelet-pods-9868b9d0\x2d1a78\x2d437e\x2da8a1\x2d59156a49a5f1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:33:42.849163 kubelet[1869]: I1213 14:33:42.849135 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9868b9d0-1a78-437e-a8a1-59156a49a5f1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:33:42.849850 kubelet[1869]: I1213 14:33:42.849821 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:33:42.854014 systemd[1]: var-lib-kubelet-pods-9868b9d0\x2d1a78\x2d437e\x2da8a1\x2d59156a49a5f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48c5l.mount: Deactivated successfully. Dec 13 14:33:42.856293 kubelet[1869]: I1213 14:33:42.854906 1869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-kube-api-access-48c5l" (OuterVolumeSpecName: "kube-api-access-48c5l") pod "9868b9d0-1a78-437e-a8a1-59156a49a5f1" (UID: "9868b9d0-1a78-437e-a8a1-59156a49a5f1"). InnerVolumeSpecName "kube-api-access-48c5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:33:42.894127 kubelet[1869]: I1213 14:33:42.894088 1869 scope.go:117] "RemoveContainer" containerID="f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023" Dec 13 14:33:42.902297 env[1435]: time="2024-12-13T14:33:42.902245477Z" level=info msg="RemoveContainer for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\"" Dec 13 14:33:42.903503 systemd[1]: Removed slice kubepods-burstable-pod9868b9d0_1a78_437e_a8a1_59156a49a5f1.slice. Dec 13 14:33:42.903706 systemd[1]: kubepods-burstable-pod9868b9d0_1a78_437e_a8a1_59156a49a5f1.slice: Consumed 6.734s CPU time. Dec 13 14:33:42.920491 env[1435]: time="2024-12-13T14:33:42.920432033Z" level=info msg="RemoveContainer for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" returns successfully" Dec 13 14:33:42.920881 kubelet[1869]: I1213 14:33:42.920838 1869 scope.go:117] "RemoveContainer" containerID="c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931" Dec 13 14:33:42.922132 env[1435]: time="2024-12-13T14:33:42.922100247Z" level=info msg="RemoveContainer for \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\"" Dec 13 14:33:42.932003 env[1435]: time="2024-12-13T14:33:42.931960632Z" level=info msg="RemoveContainer for \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\" returns successfully" Dec 13 14:33:42.932188 kubelet[1869]: I1213 14:33:42.932163 1869 scope.go:117] "RemoveContainer" containerID="438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71" Dec 13 14:33:42.933203 env[1435]: time="2024-12-13T14:33:42.933173042Z" level=info msg="RemoveContainer for \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\"" Dec 13 14:33:42.937076 kubelet[1869]: I1213 14:33:42.937051 1869 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cni-path\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937081 1869 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hostproc\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937096 1869 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-xtables-lock\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937111 1869 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-cgroup\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937125 1869 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-hubble-tls\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937139 1869 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-lib-modules\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937170 1869 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-kernel\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937182 kubelet[1869]: I1213 14:33:42.937184 1869 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-etc-cni-netd\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937463 kubelet[1869]: I1213 14:33:42.937199 1869 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9868b9d0-1a78-437e-a8a1-59156a49a5f1-clustermesh-secrets\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937463 kubelet[1869]: I1213 14:33:42.937214 1869 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-config-path\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937463 kubelet[1869]: I1213 14:33:42.937229 1869 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-host-proc-sys-net\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937463 kubelet[1869]: I1213 14:33:42.937245 1869 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-cilium-run\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937463 kubelet[1869]: I1213 14:33:42.937260 1869 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9868b9d0-1a78-437e-a8a1-59156a49a5f1-bpf-maps\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.937463 kubelet[1869]: I1213 14:33:42.937276 1869 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48c5l\" (UniqueName: \"kubernetes.io/projected/9868b9d0-1a78-437e-a8a1-59156a49a5f1-kube-api-access-48c5l\") on node \"10.200.8.29\" DevicePath \"\"" Dec 13 14:33:42.941850 env[1435]: time="2024-12-13T14:33:42.941807716Z" level=info msg="RemoveContainer for \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\" returns successfully" Dec 13 14:33:42.944444 kubelet[1869]: I1213 14:33:42.944413 1869 scope.go:117] "RemoveContainer" containerID="df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a" Dec 13 14:33:42.945966 env[1435]: 
time="2024-12-13T14:33:42.945931852Z" level=info msg="RemoveContainer for \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\"" Dec 13 14:33:42.951391 env[1435]: time="2024-12-13T14:33:42.951352598Z" level=info msg="RemoveContainer for \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\" returns successfully" Dec 13 14:33:42.951553 kubelet[1869]: I1213 14:33:42.951525 1869 scope.go:117] "RemoveContainer" containerID="de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c" Dec 13 14:33:42.952652 env[1435]: time="2024-12-13T14:33:42.952624609Z" level=info msg="RemoveContainer for \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\"" Dec 13 14:33:42.970239 env[1435]: time="2024-12-13T14:33:42.970185060Z" level=info msg="RemoveContainer for \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\" returns successfully" Dec 13 14:33:42.970500 kubelet[1869]: I1213 14:33:42.970469 1869 scope.go:117] "RemoveContainer" containerID="f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023" Dec 13 14:33:42.970894 env[1435]: time="2024-12-13T14:33:42.970798465Z" level=error msg="ContainerStatus for \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\": not found" Dec 13 14:33:42.971124 kubelet[1869]: E1213 14:33:42.971098 1869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\": not found" containerID="f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023" Dec 13 14:33:42.971217 kubelet[1869]: I1213 14:33:42.971185 1869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023"} err="failed to get container status \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\": rpc error: code = NotFound desc = an error occurred when try to find container \"f27afda36897d9cd8161e8993d3dfb4763e423478d09e516cfa96650f8475023\": not found" Dec 13 14:33:42.971217 kubelet[1869]: I1213 14:33:42.971205 1869 scope.go:117] "RemoveContainer" containerID="c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931" Dec 13 14:33:42.971533 env[1435]: time="2024-12-13T14:33:42.971473971Z" level=error msg="ContainerStatus for \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\": not found" Dec 13 14:33:42.971694 kubelet[1869]: E1213 14:33:42.971676 1869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\": not found" containerID="c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931" Dec 13 14:33:42.971768 kubelet[1869]: I1213 14:33:42.971713 1869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931"} err="failed to get container status \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"c1b392a65b0596983e3f50b9b153d839f8f0df8311c692165e523ce38967c931\": not found" Dec 13 14:33:42.971768 kubelet[1869]: I1213 14:33:42.971727 1869 scope.go:117] "RemoveContainer" containerID="438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71" Dec 13 14:33:42.971955 env[1435]: time="2024-12-13T14:33:42.971901275Z" level=error msg="ContainerStatus for \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\": not found" Dec 13 14:33:42.972070 kubelet[1869]: E1213 14:33:42.972053 1869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\": not found" containerID="438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71" Dec 13 14:33:42.972146 kubelet[1869]: I1213 14:33:42.972083 1869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71"} err="failed to get container status \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\": rpc error: code = NotFound desc = an error occurred when try to find container \"438a5179150fa863d02f3ef951da10a8ce59ff193c8497c36c414d8d34d47a71\": not found" Dec 13 14:33:42.972146 kubelet[1869]: I1213 14:33:42.972098 1869 scope.go:117] "RemoveContainer" containerID="df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a" Dec 13 14:33:42.972321 env[1435]: time="2024-12-13T14:33:42.972266578Z" level=error msg="ContainerStatus for \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\": not found" Dec 13 14:33:42.972444 kubelet[1869]: E1213 14:33:42.972424 1869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\": not found" containerID="df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a" Dec 13 14:33:42.972518 kubelet[1869]: I1213 14:33:42.972455 1869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a"} err="failed to get container status \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"df83495b2c3c6ebc48ce2a597f3be9d111b40890dc282f09d6086edd08e7ad7a\": not found" Dec 13 14:33:42.972518 kubelet[1869]: I1213 14:33:42.972468 1869 scope.go:117] "RemoveContainer" containerID="de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c" Dec 13 14:33:42.972694 env[1435]: time="2024-12-13T14:33:42.972648681Z" level=error msg="ContainerStatus for \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\": not found" Dec 13 14:33:42.972839 kubelet[1869]: E1213 
14:33:42.972804 1869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\": not found" containerID="de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c" Dec 13 14:33:42.972914 kubelet[1869]: I1213 14:33:42.972849 1869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c"} err="failed to get container status \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"de0af38648f365566f65f20f3aaeeb3dfb0c0acc443d282139934cf354909c5c\": not found" Dec 13 14:33:43.030031 kubelet[1869]: E1213 14:33:43.029993 1869 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:33:43.045409 env[1435]: time="2024-12-13T14:33:43.045357702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j4bp6,Uid:9753a578-acb7-4efb-b544-833bae53136b,Namespace:kube-system,Attempt:0,}" Dec 13 14:33:43.068783 kubelet[1869]: E1213 14:33:43.068732 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:43.101417 env[1435]: time="2024-12-13T14:33:43.101221377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:43.101417 env[1435]: time="2024-12-13T14:33:43.101283277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:43.101417 env[1435]: time="2024-12-13T14:33:43.101299077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:43.102956 env[1435]: time="2024-12-13T14:33:43.101681281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc pid=3527 runtime=io.containerd.runc.v2 Dec 13 14:33:43.117673 systemd[1]: Started cri-containerd-4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc.scope. 
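
The run of "ContainerStatus ... not found" errors above is the kubelet probing the same container IDs it has just asked the runtime to delete; a gRPC NotFound answer there means the container is simply gone, which is why each "DeleteContainer returned error" entry is informational rather than fatal. A minimal sketch of that interpretation follows, using only the standard grpc status/codes packages; the checkStatus type and confirmRemoved helper are illustrative stand-ins, not kubelet or CRI client code.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// checkStatus stands in for a CRI runtime's ContainerStatus RPC.
type checkStatus func(containerID string) error

// confirmRemoved returns nil when the runtime still knows the container
// (the caller would go on to delete it) or answers NotFound, which the
// entries above record as the benign "already removed" case.
func confirmRemoved(get checkStatus, id string) error {
	err := get(id)
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return nil // nothing left to clean up
	}
	return fmt.Errorf("ContainerStatus for %q failed: %w", id, err)
}

func main() {
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "an error occurred when try to find container "+id)
	}
	fmt.Println(confirmRemoved(notFound, "f27afda36897d9cd")) // <nil>
}
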
Dec 13 14:33:43.144687 env[1435]: time="2024-12-13T14:33:43.144638346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j4bp6,Uid:9753a578-acb7-4efb-b544-833bae53136b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\"" Dec 13 14:33:43.157590 env[1435]: time="2024-12-13T14:33:43.157526955Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:33:43.190824 env[1435]: time="2024-12-13T14:33:43.190764237Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc\"" Dec 13 14:33:43.191460 env[1435]: time="2024-12-13T14:33:43.191422243Z" level=info msg="StartContainer for \"2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc\"" Dec 13 14:33:43.209890 systemd[1]: Started cri-containerd-2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc.scope. Dec 13 14:33:43.250082 env[1435]: time="2024-12-13T14:33:43.250021541Z" level=info msg="StartContainer for \"2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc\" returns successfully" Dec 13 14:33:43.253445 systemd[1]: cri-containerd-2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc.scope: Deactivated successfully. Dec 13 14:33:43.315155 env[1435]: time="2024-12-13T14:33:43.315093994Z" level=info msg="shim disconnected" id=2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc Dec 13 14:33:43.315155 env[1435]: time="2024-12-13T14:33:43.315153394Z" level=warning msg="cleaning up after shim disconnected" id=2d3e58d421eeedc6bdce259b38786d7725c6e9985aadbe1a48aefa8b2c8227bc namespace=k8s.io Dec 13 14:33:43.315155 env[1435]: time="2024-12-13T14:33:43.315165595Z" level=info msg="cleaning up dead shim" Dec 13 14:33:43.324037 env[1435]: time="2024-12-13T14:33:43.323986769Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3616 runtime=io.containerd.runc.v2\n" Dec 13 14:33:43.652590 kubelet[1869]: I1213 14:33:43.652539 1869 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9868b9d0-1a78-437e-a8a1-59156a49a5f1" path="/var/lib/kubelet/pods/9868b9d0-1a78-437e-a8a1-59156a49a5f1/volumes" Dec 13 14:33:43.907224 env[1435]: time="2024-12-13T14:33:43.907179325Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:33:43.938267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177668130.mount: Deactivated successfully. 
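
The entries above show the usual CRI ordering for the first cilium init step: RunPodSandbox yields a sandbox ID, CreateContainer places mount-cgroup inside it, StartContainer runs it, and the "shim disconnected ... cleaning up dead shim" warning right afterwards is the expected trace of a short-lived init container exiting. A sketch of that ordering under an assumed, illustrative interface (runtimeService and fakeRuntime are invented here, not the real k8s.io/cri-api client):

package main

import "fmt"

// runtimeService is a stand-in for the CRI RPCs seen in the log.
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// runInitStep mirrors the sandbox -> create -> start sequence logged for
// the mount-cgroup container of cilium-j4bp6.
func runInitStep(rt runtimeService, pod, name string) (string, error) {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return "", err
	}
	cid, err := rt.CreateContainer(sb, name)
	if err != nil {
		return "", err
	}
	return cid, rt.StartContainer(cid)
}

// fakeRuntime just echoes IDs so the example runs without a real runtime.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "sandbox-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return sb + "/" + name, nil }
func (fakeRuntime) StartContainer(string) error                     { return nil }

func main() {
	cid, err := runInitStep(fakeRuntime{}, "cilium-j4bp6", "mount-cgroup")
	fmt.Println(cid, err)
}
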
Dec 13 14:33:43.962754 env[1435]: time="2024-12-13T14:33:43.962691797Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898\"" Dec 13 14:33:43.963554 env[1435]: time="2024-12-13T14:33:43.963518804Z" level=info msg="StartContainer for \"29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898\"" Dec 13 14:33:43.988378 systemd[1]: Started cri-containerd-29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898.scope. Dec 13 14:33:44.043385 systemd[1]: cri-containerd-29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898.scope: Deactivated successfully. Dec 13 14:33:44.049694 env[1435]: time="2024-12-13T14:33:44.049634831Z" level=info msg="StartContainer for \"29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898\" returns successfully" Dec 13 14:33:44.070178 kubelet[1869]: E1213 14:33:44.070094 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:44.193200 env[1435]: time="2024-12-13T14:33:44.192497133Z" level=info msg="shim disconnected" id=29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898 Dec 13 14:33:44.193200 env[1435]: time="2024-12-13T14:33:44.192550734Z" level=warning msg="cleaning up after shim disconnected" id=29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898 namespace=k8s.io Dec 13 14:33:44.193200 env[1435]: time="2024-12-13T14:33:44.192562634Z" level=info msg="cleaning up dead shim" Dec 13 14:33:44.201610 env[1435]: time="2024-12-13T14:33:44.201549710Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3677 runtime=io.containerd.runc.v2\n" Dec 13 14:33:44.609832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29eea7bcf93783d785dd6ab8a1dbe18c6f9c4547b3c65497d9c7c8bdc3bc4898-rootfs.mount: Deactivated successfully. Dec 13 14:33:44.913309 env[1435]: time="2024-12-13T14:33:44.913262798Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:33:44.990872 env[1435]: time="2024-12-13T14:33:44.990804150Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1\"" Dec 13 14:33:44.992263 env[1435]: time="2024-12-13T14:33:44.992185462Z" level=info msg="StartContainer for \"d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1\"" Dec 13 14:33:45.025489 systemd[1]: Started cri-containerd-d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1.scope. Dec 13 14:33:45.063529 systemd[1]: cri-containerd-d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1.scope: Deactivated successfully. 
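
As this log shows, the cilium init containers run strictly one after another (mount-cgroup, then apply-sysctl-overwrites above, with mount-bpf-fs, clean-cilium-state and cilium-agent following below), each ending with the same "cleaning up dead shim" warning once it exits. A small check of that ordering; expectedOrder and inLoggedOrder are purely illustrative and the list is simply what this particular log shows, not something read from a Cilium manifest.

package main

import "fmt"

// expectedOrder is the init-container sequence observed in this log for
// cilium-j4bp6; treat it as illustrative, not authoritative for every
// Cilium release.
var expectedOrder = []string{
	"mount-cgroup",
	"apply-sysctl-overwrites",
	"mount-bpf-fs",
	"clean-cilium-state",
	"cilium-agent",
}

// inLoggedOrder reports whether the names seen so far appear in the same
// relative order as expectedOrder.
func inLoggedOrder(seen []string) bool {
	i := 0
	for _, name := range seen {
		for i < len(expectedOrder) && expectedOrder[i] != name {
			i++
		}
		if i == len(expectedOrder) {
			return false
		}
		i++
	}
	return true
}

func main() {
	fmt.Println(inLoggedOrder([]string{"mount-cgroup", "apply-sysctl-overwrites"})) // true
	fmt.Println(inLoggedOrder([]string{"mount-bpf-fs", "mount-cgroup"}))            // false
}
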
Dec 13 14:33:45.065795 env[1435]: time="2024-12-13T14:33:45.065735575Z" level=info msg="StartContainer for \"d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1\" returns successfully" Dec 13 14:33:45.067145 env[1435]: time="2024-12-13T14:33:45.067107187Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:45.070814 kubelet[1869]: E1213 14:33:45.070770 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:45.072177 env[1435]: time="2024-12-13T14:33:45.072141329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:45.078433 env[1435]: time="2024-12-13T14:33:45.078393681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:45.078741 env[1435]: time="2024-12-13T14:33:45.078705983Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:33:45.081491 env[1435]: time="2024-12-13T14:33:45.081449806Z" level=info msg="CreateContainer within sandbox \"b15af9db8d0c39b02dc63e22c90f7fa3b0e50449a0dc0ae87163079b937522d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:33:45.585871 env[1435]: time="2024-12-13T14:33:45.585810409Z" level=info msg="shim disconnected" id=d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1 Dec 13 14:33:45.586274 env[1435]: time="2024-12-13T14:33:45.586248513Z" level=warning msg="cleaning up after shim disconnected" id=d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1 namespace=k8s.io Dec 13 14:33:45.586408 env[1435]: time="2024-12-13T14:33:45.586390014Z" level=info msg="cleaning up dead shim" Dec 13 14:33:45.595778 env[1435]: time="2024-12-13T14:33:45.595735492Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3736 runtime=io.containerd.runc.v2\n" Dec 13 14:33:45.600331 env[1435]: time="2024-12-13T14:33:45.600284029Z" level=info msg="CreateContainer within sandbox \"b15af9db8d0c39b02dc63e22c90f7fa3b0e50449a0dc0ae87163079b937522d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"85f50090aa270a168b15c01d2fc4ee34c1969de45d27f08b4dfdfff4343293b7\"" Dec 13 14:33:45.601271 env[1435]: time="2024-12-13T14:33:45.601238237Z" level=info msg="StartContainer for \"85f50090aa270a168b15c01d2fc4ee34c1969de45d27f08b4dfdfff4343293b7\"" Dec 13 14:33:45.612082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d768e3790dc4387277f77e3106ac0781f7344f5f9ba7fb11b7c8cc9c533a81d1-rootfs.mount: Deactivated successfully. Dec 13 14:33:45.631320 systemd[1]: run-containerd-runc-k8s.io-85f50090aa270a168b15c01d2fc4ee34c1969de45d27f08b4dfdfff4343293b7-runc.BzJ5X4.mount: Deactivated successfully. 
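
The PullImage entry above resolves a tag-plus-digest reference ("quay.io/cilium/operator-generic:v1.12.5@sha256:b296...") to a local image ID ("sha256:ed35..."). A naive sketch of splitting such a reference into repository, tag and digest with the standard library only; splitRef is invented here for illustration, and real tooling would use a proper reference parser.

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks a "repo:tag@digest" image reference, as printed in the
// PullImage entry above, into its three visible parts. Registries with
// custom ports need a smarter parser than this.
func splitRef(ref string) (repo, tag, digest string) {
	if at := strings.Index(ref, "@"); at >= 0 {
		digest = ref[at+1:]
		ref = ref[:at]
	}
	if colon := strings.LastIndex(ref, ":"); colon >= 0 {
		repo, tag = ref[:colon], ref[colon+1:]
		return
	}
	return ref, "", digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(repo)   // quay.io/cilium/operator-generic
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:b296eb7f...
}
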
Dec 13 14:33:45.638919 systemd[1]: Started cri-containerd-85f50090aa270a168b15c01d2fc4ee34c1969de45d27f08b4dfdfff4343293b7.scope. Dec 13 14:33:45.676662 env[1435]: time="2024-12-13T14:33:45.676545265Z" level=info msg="StartContainer for \"85f50090aa270a168b15c01d2fc4ee34c1969de45d27f08b4dfdfff4343293b7\" returns successfully" Dec 13 14:33:45.919783 env[1435]: time="2024-12-13T14:33:45.919732191Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:33:45.928585 kubelet[1869]: I1213 14:33:45.928543 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pft26" podStartSLOduration=9.985241313 podStartE2EDuration="13.928491064s" podCreationTimestamp="2024-12-13 14:33:32 +0000 UTC" firstStartedPulling="2024-12-13 14:33:41.135987737 +0000 UTC m=+105.543778764" lastFinishedPulling="2024-12-13 14:33:45.079237488 +0000 UTC m=+109.487028515" observedRunningTime="2024-12-13 14:33:45.927718758 +0000 UTC m=+110.335509785" watchObservedRunningTime="2024-12-13 14:33:45.928491064 +0000 UTC m=+110.336282091" Dec 13 14:33:45.954532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957085233.mount: Deactivated successfully. Dec 13 14:33:45.964364 env[1435]: time="2024-12-13T14:33:45.964296563Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c\"" Dec 13 14:33:45.965098 env[1435]: time="2024-12-13T14:33:45.964992568Z" level=info msg="StartContainer for \"cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c\"" Dec 13 14:33:45.982106 systemd[1]: Started cri-containerd-cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c.scope. Dec 13 14:33:46.011459 systemd[1]: cri-containerd-cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c.scope: Deactivated successfully. 
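
The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration equals the observed running time minus podCreationTimestamp (14:33:45.928 - 14:33:32 = 13.928s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 3.943s), giving 9.985s. The snippet below reproduces that arithmetic from the logged timestamps; that the tracker uses exactly these fields is an inference from how the numbers line up, not a statement about its source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the cilium-operator-5cc964979-pft26 entry above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-12-13 14:33:32 +0000 UTC")
	running := parse("2024-12-13 14:33:45.928491064 +0000 UTC")
	pullStart := parse("2024-12-13 14:33:41.135987737 +0000 UTC")
	pullEnd := parse("2024-12-13 14:33:45.079237488 +0000 UTC")

	e2e := running.Sub(created)         // 13.928491064s
	slo := e2e - pullEnd.Sub(pullStart) // 9.985241313s
	fmt.Println(e2e, slo)
}
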
Dec 13 14:33:46.016651 env[1435]: time="2024-12-13T14:33:46.016602097Z" level=info msg="StartContainer for \"cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c\" returns successfully" Dec 13 14:33:46.048803 env[1435]: time="2024-12-13T14:33:46.048743863Z" level=info msg="shim disconnected" id=cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c Dec 13 14:33:46.048803 env[1435]: time="2024-12-13T14:33:46.048802163Z" level=warning msg="cleaning up after shim disconnected" id=cef740147380544b1d283275563ba927c55d77abd0152fc861c6bd41c8b3881c namespace=k8s.io Dec 13 14:33:46.048803 env[1435]: time="2024-12-13T14:33:46.048813663Z" level=info msg="cleaning up dead shim" Dec 13 14:33:46.057675 env[1435]: time="2024-12-13T14:33:46.057620436Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3830 runtime=io.containerd.runc.v2\n" Dec 13 14:33:46.070956 kubelet[1869]: E1213 14:33:46.070910 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:46.925391 env[1435]: time="2024-12-13T14:33:46.925310698Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:33:46.962634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3150492312.mount: Deactivated successfully. Dec 13 14:33:46.973161 env[1435]: time="2024-12-13T14:33:46.973100892Z" level=info msg="CreateContainer within sandbox \"4b1c8a1047d55de4eb0dc47d269785bece52e054da36a023ebf3397b1c71a4cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689\"" Dec 13 14:33:46.973739 env[1435]: time="2024-12-13T14:33:46.973702197Z" level=info msg="StartContainer for \"bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689\"" Dec 13 14:33:47.001008 systemd[1]: Started cri-containerd-bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689.scope. 
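
The "shim disconnected" and "cleaning up dead shim" warnings repeat for every init container above; pulling the id, namespace and pid fields out of those lines makes it easy to correlate them with the matching StartContainer entries. A small sketch of extracting the space-separated key=value fields; the fields helper is illustrative only and deliberately ignores quoted values with embedded spaces, which these particular fields do not have.

package main

import (
	"fmt"
	"strings"
)

// fields collects simple key=value tokens (id=..., namespace=..., pid=...)
// from a containerd log line like the warnings repeated above.
func fields(line string) map[string]string {
	out := map[string]string{}
	for _, tok := range strings.Fields(line) {
		if k, v, ok := strings.Cut(tok, "="); ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	line := `level=warning msg="cleaning up after shim disconnected" id=2d3e58d421eeedc6 namespace=k8s.io`
	f := fields(line)
	fmt.Println(f["id"], f["namespace"]) // 2d3e58d421eeedc6 k8s.io
}
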
Dec 13 14:33:47.044033 env[1435]: time="2024-12-13T14:33:47.043975474Z" level=info msg="StartContainer for \"bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689\" returns successfully" Dec 13 14:33:47.073605 kubelet[1869]: E1213 14:33:47.071638 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:47.382616 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:33:47.944565 kubelet[1869]: I1213 14:33:47.944508 1869 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-j4bp6" podStartSLOduration=14.944468239 podStartE2EDuration="14.944468239s" podCreationTimestamp="2024-12-13 14:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:47.944397038 +0000 UTC m=+112.352188065" watchObservedRunningTime="2024-12-13 14:33:47.944468239 +0000 UTC m=+112.352259266" Dec 13 14:33:48.072585 kubelet[1869]: E1213 14:33:48.072509 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:49.073395 kubelet[1869]: E1213 14:33:49.073346 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:49.111365 systemd[1]: run-containerd-runc-k8s.io-bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689-runc.hkGxmM.mount: Deactivated successfully. Dec 13 14:33:50.074928 kubelet[1869]: E1213 14:33:50.074884 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:50.114079 systemd-networkd[1589]: lxc_health: Link UP Dec 13 14:33:50.123693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:33:50.123683 systemd-networkd[1589]: lxc_health: Gained carrier Dec 13 14:33:51.076010 kubelet[1869]: E1213 14:33:51.075971 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:51.281544 systemd[1]: run-containerd-runc-k8s.io-bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689-runc.Ckp2vr.mount: Deactivated successfully. Dec 13 14:33:51.289707 systemd-networkd[1589]: lxc_health: Gained IPv6LL Dec 13 14:33:52.077252 kubelet[1869]: E1213 14:33:52.077182 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:53.078537 kubelet[1869]: E1213 14:33:53.078480 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:53.493716 systemd[1]: run-containerd-runc-k8s.io-bd4c5b61ab05c9e626558603871fca32cb9f5c0d88fc29a2625a6bfe0c744689-runc.QrOq9K.mount: Deactivated successfully. 
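
In the startup entry above for cilium-j4bp6, both pull timestamps print as "0001-01-01 00:00:00 +0000 UTC", which is how Go renders a zero time.Time: no image pull was recorded for this pod, so podStartSLOduration and podStartE2EDuration come out identical (14.944s, matching observed running time minus the 14:33:33 creation time). A tiny illustration of that sentinel value:

package main

import (
	"fmt"
	"time"
)

func main() {
	// A zero time.Time stringifies to the value seen in the log entry above,
	// and IsZero is the idiomatic way to detect it.
	var zero time.Time
	fmt.Println(zero)          // 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(zero.IsZero()) // true
}
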
Dec 13 14:33:54.079053 kubelet[1869]: E1213 14:33:54.078990 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:55.080158 kubelet[1869]: E1213 14:33:55.080094 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:55.985956 kubelet[1869]: E1213 14:33:55.985880 1869 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:56.024542 env[1435]: time="2024-12-13T14:33:56.024486940Z" level=info msg="StopPodSandbox for \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\"" Dec 13 14:33:56.025156 env[1435]: time="2024-12-13T14:33:56.025089145Z" level=info msg="TearDown network for sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" successfully" Dec 13 14:33:56.025156 env[1435]: time="2024-12-13T14:33:56.025149545Z" level=info msg="StopPodSandbox for \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" returns successfully" Dec 13 14:33:56.025883 env[1435]: time="2024-12-13T14:33:56.025847550Z" level=info msg="RemovePodSandbox for \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\"" Dec 13 14:33:56.026057 env[1435]: time="2024-12-13T14:33:56.025887351Z" level=info msg="Forcibly stopping sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\"" Dec 13 14:33:56.026057 env[1435]: time="2024-12-13T14:33:56.025979851Z" level=info msg="TearDown network for sandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" successfully" Dec 13 14:33:56.042795 env[1435]: time="2024-12-13T14:33:56.042742479Z" level=info msg="RemovePodSandbox \"30e7b809c74b98ea5ce8cdd89051d3c7fa6b7fa73353fabcc8dc47af511e91f8\" returns successfully" Dec 13 14:33:56.080949 kubelet[1869]: E1213 14:33:56.080889 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:57.081128 kubelet[1869]: E1213 14:33:57.081073 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:58.082386 kubelet[1869]: E1213 14:33:58.082332 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:59.083334 kubelet[1869]: E1213 14:33:59.083208 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:00.083555 kubelet[1869]: E1213 14:34:00.083489 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:01.084383 kubelet[1869]: E1213 14:34:01.084324 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:02.084918 kubelet[1869]: E1213 14:34:02.084854 1869 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
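
The closing entries mirror the sandbox creation seen earlier, in reverse: StopPodSandbox tears down the sandbox network, a forcible stop is attempted, and RemovePodSandbox finally deletes it; the recurring "Unable to read config path /etc/kubernetes/manifests" line only means the kubelet's static-pod directory does not exist on this node and is being ignored. A sketch of the stop-then-remove ordering under the same illustrative interface idea as before; sandboxService and retireSandbox are invented names, not the real CRI client.

package main

import "fmt"

// sandboxService is a stand-in for the sandbox-side CRI RPCs in the log.
type sandboxService interface {
	StopPodSandbox(id string) error // also tears down the sandbox network
	RemovePodSandbox(id string) error
}

// retireSandbox follows the logged order: stop first (a forced stop is just
// another stop here), then remove.
func retireSandbox(rt sandboxService, id string) error {
	if err := rt.StopPodSandbox(id); err != nil {
		return fmt.Errorf("stop %s: %w", id, err)
	}
	return rt.RemovePodSandbox(id)
}

type fakeSandboxes struct{}

func (fakeSandboxes) StopPodSandbox(id string) error   { fmt.Println("stopped", id); return nil }
func (fakeSandboxes) RemovePodSandbox(id string) error { fmt.Println("removed", id); return nil }

func main() {
	if err := retireSandbox(fakeSandboxes{}, "30e7b809c74b98ea"); err != nil {
		fmt.Println(err)
	}
}
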