Jul 2 08:01:21.033150 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 08:01:21.033183 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:01:21.033198 kernel: BIOS-provided physical RAM map:
Jul 2 08:01:21.033208 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 08:01:21.033218 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 2 08:01:21.033228 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 2 08:01:21.033243 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jul 2 08:01:21.033254 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 2 08:01:21.033265 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 2 08:01:21.033276 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 2 08:01:21.033287 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 2 08:01:21.033298 kernel: printk: bootconsole [earlyser0] enabled
Jul 2 08:01:21.033308 kernel: NX (Execute Disable) protection: active
Jul 2 08:01:21.033319 kernel: efi: EFI v2.70 by Microsoft
Jul 2 08:01:21.033336 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Jul 2 08:01:21.033347 kernel: random: crng init done
Jul 2 08:01:21.033359 kernel: SMBIOS 3.1.0 present.
Jul 2 08:01:21.033371 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 2 08:01:21.033383 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 2 08:01:21.033395 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 2 08:01:21.033406 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Jul 2 08:01:21.033418 kernel: Hyper-V: Nested features: 0x1e0101
Jul 2 08:01:21.033451 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 2 08:01:21.033463 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 2 08:01:21.033473 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 08:01:21.033484 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 2 08:01:21.033497 kernel: tsc: Detected 2593.907 MHz processor
Jul 2 08:01:21.033508 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 08:01:21.033520 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 08:01:21.033532 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 2 08:01:21.033544 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 08:01:21.033555 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 2 08:01:21.033569 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 2 08:01:21.033582 kernel: Using GB pages for direct mapping
Jul 2 08:01:21.033595 kernel: Secure boot disabled
Jul 2 08:01:21.033607 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:01:21.033619 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 2 08:01:21.033632 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033644 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033654 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 08:01:21.033673 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 2 08:01:21.033686 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033699 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033711 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033722 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033733 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033748 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033760 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 08:01:21.033772 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 2 08:01:21.033783 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 2 08:01:21.033795 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 2 08:01:21.038218 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 2 08:01:21.038235 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 2 08:01:21.038250 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 2 08:01:21.038269 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 2 08:01:21.038283 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 2 08:01:21.038296 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 2 08:01:21.038308 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 2 08:01:21.038321 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 08:01:21.038334 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 08:01:21.038346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 2 08:01:21.038358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 2 08:01:21.038370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 2 08:01:21.038385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 2 08:01:21.038397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 2 08:01:21.038410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 2 08:01:21.038423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 2 08:01:21.038461 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 2 08:01:21.038474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 2 08:01:21.038487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 2 08:01:21.038500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 2 08:01:21.038513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 2 08:01:21.038528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 2 08:01:21.038541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 2 08:01:21.038555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 2 08:01:21.038567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 2 08:01:21.038581 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 2 08:01:21.038594 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 2 08:01:21.038607 kernel: Zone ranges:
Jul 2 08:01:21.038620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 08:01:21.038632 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 08:01:21.038647 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 08:01:21.038661 kernel: Movable zone start for each node
Jul 2 08:01:21.038674 kernel: Early memory node ranges
Jul 2 08:01:21.038686 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 08:01:21.038699 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 2 08:01:21.038712 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 2 08:01:21.038725 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 08:01:21.038738 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 2 08:01:21.038752 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 08:01:21.038767 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 08:01:21.038780 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 2 08:01:21.038793 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 2 08:01:21.038806 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 2 08:01:21.038819 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 2 08:01:21.038832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 08:01:21.038844 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 08:01:21.038857 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 2 08:01:21.038870 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 08:01:21.038886 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 2 08:01:21.038899 kernel: Booting paravirtualized kernel on Hyper-V
Jul 2 08:01:21.038912 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 08:01:21.038926 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 08:01:21.038939 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 08:01:21.038952 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 08:01:21.038964 kernel: pcpu-alloc: [0] 0 1
Jul 2 08:01:21.038977 kernel: Hyper-V: PV spinlocks enabled
Jul 2 08:01:21.038990 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 08:01:21.039005 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 2 08:01:21.039018 kernel: Policy zone: Normal
Jul 2 08:01:21.039033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:01:21.039046 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:01:21.039058 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 2 08:01:21.039071 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 08:01:21.039084 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:01:21.039098 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 308056K reserved, 0K cma-reserved)
Jul 2 08:01:21.039114 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:01:21.039127 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 08:01:21.039150 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 08:01:21.039166 kernel: rcu: Hierarchical RCU implementation.
Jul 2 08:01:21.039180 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:01:21.039194 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:01:21.039208 kernel: Rude variant of Tasks RCU enabled.
Jul 2 08:01:21.039221 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:01:21.039235 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:01:21.039248 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:01:21.039262 kernel: Using NULL legacy PIC
Jul 2 08:01:21.039278 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 2 08:01:21.039292 kernel: Console: colour dummy device 80x25
Jul 2 08:01:21.039306 kernel: printk: console [tty1] enabled
Jul 2 08:01:21.039320 kernel: printk: console [ttyS0] enabled
Jul 2 08:01:21.039333 kernel: printk: bootconsole [earlyser0] disabled
Jul 2 08:01:21.039349 kernel: ACPI: Core revision 20210730
Jul 2 08:01:21.039363 kernel: Failed to register legacy timer interrupt
Jul 2 08:01:21.039376 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 08:01:21.039390 kernel: Hyper-V: Using IPI hypercalls
Jul 2 08:01:21.039403 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jul 2 08:01:21.039417 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 08:01:21.039444 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 08:01:21.039458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 08:01:21.039472 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 08:01:21.039486 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 08:01:21.039502 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 08:01:21.039516 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 08:01:21.039530 kernel: RETBleed: Vulnerable
Jul 2 08:01:21.039543 kernel: Speculative Store Bypass: Vulnerable
Jul 2 08:01:21.039557 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 08:01:21.039570 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 08:01:21.039583 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 08:01:21.039597 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 08:01:21.039610 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 08:01:21.039624 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 08:01:21.039640 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 08:01:21.039653 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 08:01:21.039666 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 08:01:21.039680 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 08:01:21.039694 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 2 08:01:21.039708 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 2 08:01:21.039720 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 2 08:01:21.039734 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 2 08:01:21.039747 kernel: Freeing SMP alternatives memory: 32K
Jul 2 08:01:21.039761 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:01:21.039774 kernel: LSM: Security Framework initializing
Jul 2 08:01:21.039787 kernel: SELinux: Initializing.
Jul 2 08:01:21.039803 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 08:01:21.039817 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 08:01:21.039830 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 08:01:21.039844 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 08:01:21.039858 kernel: signal: max sigframe size: 3632
Jul 2 08:01:21.039871 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:01:21.039885 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 08:01:21.039899 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:01:21.039912 kernel: x86: Booting SMP configuration:
Jul 2 08:01:21.039926 kernel: .... node #0, CPUs: #1
Jul 2 08:01:21.039946 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 2 08:01:21.039960 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 08:01:21.039974 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:01:21.039988 kernel: smpboot: Max logical packages: 1
Jul 2 08:01:21.040002 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 2 08:01:21.040016 kernel: devtmpfs: initialized
Jul 2 08:01:21.040029 kernel: x86/mm: Memory block size: 128MB
Jul 2 08:01:21.040043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 2 08:01:21.040060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:01:21.040073 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:01:21.040087 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:01:21.040101 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:01:21.040115 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:01:21.040128 kernel: audit: type=2000 audit(1719907279.023:1): state=initialized audit_enabled=0 res=1
Jul 2 08:01:21.040142 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:01:21.040155 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 08:01:21.040168 kernel: cpuidle: using governor menu
Jul 2 08:01:21.040185 kernel: ACPI: bus type PCI registered
Jul 2 08:01:21.040199 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:01:21.040213 kernel: dca service started, version 1.12.1
Jul 2 08:01:21.040226 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 08:01:21.040240 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 08:01:21.040254 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:01:21.040267 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:01:21.040281 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:01:21.040295 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:01:21.040311 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:01:21.040324 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 08:01:21.040338 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 08:01:21.040352 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 08:01:21.040366 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:01:21.040380 kernel: ACPI: Interpreter enabled
Jul 2 08:01:21.040393 kernel: ACPI: PM: (supports S0 S5)
Jul 2 08:01:21.040407 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 08:01:21.040421 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 08:01:21.040469 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 2 08:01:21.040483 kernel: iommu: Default domain type: Translated
Jul 2 08:01:21.040497 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 08:01:21.040510 kernel: vgaarb: loaded
Jul 2 08:01:21.040524 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 08:01:21.040537 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 08:01:21.040551 kernel: PTP clock support registered
Jul 2 08:01:21.040565 kernel: Registered efivars operations
Jul 2 08:01:21.040579 kernel: PCI: Using ACPI for IRQ routing
Jul 2 08:01:21.040593 kernel: PCI: System does not support PCI
Jul 2 08:01:21.040609 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 2 08:01:21.040622 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:01:21.040635 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:01:21.040649 kernel: pnp: PnP ACPI init
Jul 2 08:01:21.040662 kernel: pnp: PnP ACPI: found 3 devices
Jul 2 08:01:21.040676 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 08:01:21.040690 kernel: NET: Registered PF_INET protocol family
Jul 2 08:01:21.040704 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 08:01:21.040720 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 2 08:01:21.040734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:01:21.040747 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 08:01:21.040761 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jul 2 08:01:21.040775 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 2 08:01:21.040789 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 08:01:21.040802 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 08:01:21.040816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:01:21.040830 kernel: NET: Registered PF_XDP protocol family
Jul 2 08:01:21.040846 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:01:21.040859 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 08:01:21.040873 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Jul 2 08:01:21.040887 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 08:01:21.040901 kernel: Initialise system trusted keyrings
Jul 2 08:01:21.040914 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 2 08:01:21.040927 kernel: Key type asymmetric registered
Jul 2 08:01:21.040941 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:01:21.040954 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 08:01:21.040970 kernel: io scheduler mq-deadline registered
Jul 2 08:01:21.040984 kernel: io scheduler kyber registered
Jul 2 08:01:21.040997 kernel: io scheduler bfq registered
Jul 2 08:01:21.041011 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 08:01:21.041025 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:01:21.041039 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 08:01:21.041052 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 2 08:01:21.041066 kernel: i8042: PNP: No PS/2 controller found.
Jul 2 08:01:21.041224 kernel: rtc_cmos 00:02: registered as rtc0
Jul 2 08:01:21.041342 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T08:01:20 UTC (1719907280)
Jul 2 08:01:21.041482 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 2 08:01:21.041501 kernel: fail to initialize ptp_kvm
Jul 2 08:01:21.041515 kernel: intel_pstate: CPU model not supported
Jul 2 08:01:21.041529 kernel: efifb: probing for efifb
Jul 2 08:01:21.041543 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 08:01:21.041557 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 08:01:21.041570 kernel: efifb: scrolling: redraw
Jul 2 08:01:21.041587 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 08:01:21.041601 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 08:01:21.041616 kernel: fb0: EFI VGA frame buffer device
Jul 2 08:01:21.041629 kernel: pstore: Registered efi as persistent store backend
Jul 2 08:01:21.041643 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:01:21.041657 kernel: Segment Routing with IPv6
Jul 2 08:01:21.041670 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:01:21.041684 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:01:21.041698 kernel: Key type dns_resolver registered
Jul 2 08:01:21.041713 kernel: IPI shorthand broadcast: enabled
Jul 2 08:01:21.041726 kernel: sched_clock: Marking stable (759852700, 23085600)->(980775000, -197836700)
Jul 2 08:01:21.041740 kernel: registered taskstats version 1
Jul 2 08:01:21.041753 kernel: Loading compiled-in X.509 certificates
Jul 2 08:01:21.041768 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 08:01:21.041782 kernel: Key type .fscrypt registered
Jul 2 08:01:21.041795 kernel: Key type fscrypt-provisioning registered
Jul 2 08:01:21.041809 kernel: pstore: Using crash dump compression: deflate
Jul 2 08:01:21.041825 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:01:21.041839 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:01:21.041853 kernel: ima: No architecture policies found
Jul 2 08:01:21.041866 kernel: clk: Disabling unused clocks
Jul 2 08:01:21.041880 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 08:01:21.041894 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 08:01:21.041908 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 08:01:21.041922 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 08:01:21.041936 kernel: Run /init as init process
Jul 2 08:01:21.041949 kernel: with arguments:
Jul 2 08:01:21.041965 kernel: /init
Jul 2 08:01:21.041979 kernel: with environment:
Jul 2 08:01:21.041992 kernel: HOME=/
Jul 2 08:01:21.042005 kernel: TERM=linux
Jul 2 08:01:21.042019 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:01:21.042035 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:01:21.042052 systemd[1]: Detected virtualization microsoft.
Jul 2 08:01:21.042069 systemd[1]: Detected architecture x86-64.
Jul 2 08:01:21.042082 systemd[1]: Running in initrd.
Jul 2 08:01:21.042096 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:01:21.042110 systemd[1]: Hostname set to .
Jul 2 08:01:21.042125 systemd[1]: Initializing machine ID from random generator.
Jul 2 08:01:21.042140 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:01:21.042154 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:01:21.042168 systemd[1]: Reached target cryptsetup.target.
Jul 2 08:01:21.042183 systemd[1]: Reached target paths.target.
Jul 2 08:01:21.042199 systemd[1]: Reached target slices.target.
Jul 2 08:01:21.042214 systemd[1]: Reached target swap.target.
Jul 2 08:01:21.042227 systemd[1]: Reached target timers.target.
Jul 2 08:01:21.042243 systemd[1]: Listening on iscsid.socket.
Jul 2 08:01:21.042257 systemd[1]: Listening on iscsiuio.socket.
Jul 2 08:01:21.042271 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 08:01:21.042286 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 08:01:21.042303 systemd[1]: Listening on systemd-journald.socket.
Jul 2 08:01:21.042318 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:01:21.042332 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:01:21.042347 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:01:21.042362 systemd[1]: Reached target sockets.target.
Jul 2 08:01:21.042376 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:01:21.042390 systemd[1]: Finished network-cleanup.service.
Jul 2 08:01:21.042405 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:01:21.042420 systemd[1]: Starting systemd-journald.service...
Jul 2 08:01:21.042446 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:01:21.042461 systemd[1]: Starting systemd-resolved.service...
Jul 2 08:01:21.042476 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 08:01:21.042490 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:01:21.042509 systemd-journald[183]: Journal started
Jul 2 08:01:21.042575 systemd-journald[183]: Runtime Journal (/run/log/journal/8a33c74b1e8e4675b5372d1fd2f3e095) is 8.0M, max 159.0M, 151.0M free.
Jul 2 08:01:21.026464 systemd-modules-load[184]: Inserted module 'overlay'
Jul 2 08:01:21.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.071023 kernel: audit: type=1130 audit(1719907281.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.071076 systemd[1]: Started systemd-journald.service.
Jul 2 08:01:21.074291 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:01:21.119942 kernel: audit: type=1130 audit(1719907281.073:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.119969 kernel: audit: type=1130 audit(1719907281.088:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.119980 kernel: audit: type=1130 audit(1719907281.091:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.088734 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 08:01:21.092636 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 08:01:21.125002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 08:01:21.145897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:01:21.150560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 08:01:21.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.167464 kernel: audit: type=1130 audit(1719907281.150:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.171742 kernel: Bridge firewalling registered
Jul 2 08:01:21.171575 systemd-resolved[185]: Positive Trust Anchors:
Jul 2 08:01:21.171585 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:01:21.171634 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 08:01:21.173051 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 08:01:21.195580 systemd[1]: Starting dracut-cmdline.service...
Jul 2 08:01:21.213175 kernel: audit: type=1130 audit(1719907281.194:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.212021 systemd-resolved[185]: Defaulting to hostname 'linux'.
Jul 2 08:01:21.215903 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jul 2 08:01:21.220064 systemd[1]: Started systemd-resolved.service.
Jul 2 08:01:21.232646 dracut-cmdline[200]: dracut-dracut-053
Jul 2 08:01:21.232646 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:01:21.266251 kernel: audit: type=1130 audit(1719907281.222:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.222388 systemd[1]: Reached target nss-lookup.target.
Jul 2 08:01:21.273447 kernel: SCSI subsystem initialized
Jul 2 08:01:21.298625 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:01:21.298676 kernel: device-mapper: uevent: version 1.0.3
Jul 2 08:01:21.300271 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 08:01:21.308400 systemd-modules-load[184]: Inserted module 'dm_multipath'
Jul 2 08:01:21.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.309002 systemd[1]: Finished systemd-modules-load.service.
Jul 2 08:01:21.335344 kernel: audit: type=1130 audit(1719907281.313:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.335375 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:01:21.313980 systemd[1]: Starting systemd-sysctl.service...
Jul 2 08:01:21.335527 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:01:21.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.354450 kernel: audit: type=1130 audit(1719907281.341:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.366445 kernel: iscsi: registered transport (tcp)
Jul 2 08:01:21.394019 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:01:21.394072 kernel: QLogic iSCSI HBA Driver
Jul 2 08:01:21.423269 systemd[1]: Finished dracut-cmdline.service.
Jul 2 08:01:21.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.428642 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 08:01:21.479452 kernel: raid6: avx512x4 gen() 18474 MB/s
Jul 2 08:01:21.499445 kernel: raid6: avx512x4 xor() 8447 MB/s
Jul 2 08:01:21.519439 kernel: raid6: avx512x2 gen() 18626 MB/s
Jul 2 08:01:21.539443 kernel: raid6: avx512x2 xor() 29720 MB/s
Jul 2 08:01:21.559437 kernel: raid6: avx512x1 gen() 18583 MB/s
Jul 2 08:01:21.579440 kernel: raid6: avx512x1 xor() 26879 MB/s
Jul 2 08:01:21.599441 kernel: raid6: avx2x4 gen() 18499 MB/s
Jul 2 08:01:21.619439 kernel: raid6: avx2x4 xor() 7858 MB/s
Jul 2 08:01:21.639438 kernel: raid6: avx2x2 gen() 18484 MB/s
Jul 2 08:01:21.659442 kernel: raid6: avx2x2 xor() 22200 MB/s
Jul 2 08:01:21.679437 kernel: raid6: avx2x1 gen() 14194 MB/s
Jul 2 08:01:21.699439 kernel: raid6: avx2x1 xor() 19420 MB/s
Jul 2 08:01:21.720438 kernel: raid6: sse2x4 gen() 11743 MB/s
Jul 2 08:01:21.741435 kernel: raid6: sse2x4 xor() 7388 MB/s
Jul 2 08:01:21.761436 kernel: raid6: sse2x2 gen() 12952 MB/s
Jul 2 08:01:21.782438 kernel: raid6: sse2x2 xor() 7642 MB/s
Jul 2 08:01:21.802435 kernel: raid6: sse2x1 gen() 11641 MB/s
Jul 2 08:01:21.825502 kernel: raid6: sse2x1 xor() 5917 MB/s
Jul 2 08:01:21.825522 kernel: raid6: using algorithm avx512x2 gen() 18626 MB/s
Jul 2 08:01:21.825534 kernel: raid6: .... xor() 29720 MB/s, rmw enabled
Jul 2 08:01:21.828822 kernel: raid6: using avx512x2 recovery algorithm
Jul 2 08:01:21.848445 kernel: xor: automatically using best checksumming function avx
Jul 2 08:01:21.944451 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul 2 08:01:21.952337 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 08:01:21.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.953000 audit: BPF prog-id=7 op=LOAD
Jul 2 08:01:21.955000 audit: BPF prog-id=8 op=LOAD
Jul 2 08:01:21.956399 systemd[1]: Starting systemd-udevd.service...
Jul 2 08:01:21.970964 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Jul 2 08:01:21.978376 systemd[1]: Started systemd-udevd.service.
Jul 2 08:01:21.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:21.981616 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 08:01:22.002622 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation
Jul 2 08:01:22.032989 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 08:01:22.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:22.038214 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 08:01:22.072338 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 08:01:22.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:22.130447 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 08:01:22.150948 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 08:01:22.151007 kernel: AES CTR mode by8 optimization enabled
Jul 2 08:01:22.156220 kernel: hv_vmbus: Vmbus version:5.2
Jul 2 08:01:22.168444 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 2 08:01:22.184451 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 2 08:01:22.184504 kernel: hv_vmbus: registering driver hv_netvsc
Jul 2 08:01:22.194624 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 08:01:22.208448 kernel: hv_vmbus: registering driver hid_hyperv
Jul 2 08:01:22.216458 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 2 08:01:22.216497 kernel: hv_vmbus: registering driver hv_storvsc
Jul 2 08:01:22.220443 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 2 08:01:22.232761 kernel: scsi host0: storvsc_host_t
Jul 2 08:01:22.232939 kernel: scsi host1: storvsc_host_t
Jul 2 08:01:22.232969 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 2 08:01:22.244449 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 2 08:01:22.272297 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 2 08:01:22.272515 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 08:01:22.272639 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 08:01:22.281802 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 2 08:01:22.282023 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 2 08:01:22.290581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 08:01:22.290623 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 08:01:22.298702 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 2 08:01:22.298883 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 08:01:22.300457 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 2 08:01:22.391615 kernel: hv_netvsc 0022489d-201b-0022-489d-201b0022489d eth0: VF slot 1 added
Jul 2 08:01:22.401628 kernel: hv_vmbus: registering driver hv_pci
Jul 2 08:01:22.401666 kernel: hv_pci 2f13cdb8-7703-4ee1-b583-5ad4faf81ec7: PCI VMBus probing: Using version 0x10004
Jul 2 08:01:22.418825 kernel: hv_pci 2f13cdb8-7703-4ee1-b583-5ad4faf81ec7: PCI host bridge to bus 7703:00
Jul 2 08:01:22.419012 kernel: pci_bus 7703:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul 2 08:01:22.419142 kernel: pci_bus 7703:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 2 08:01:22.428523 kernel: pci 7703:00:02.0: [15b3:1016] type 00 class 0x020000
Jul 2 08:01:22.439242 kernel: pci 7703:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 2 08:01:22.455733 kernel: pci 7703:00:02.0: enabling Extended Tags
Jul 2 08:01:22.469564 kernel: pci 7703:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7703:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 2 08:01:22.478983 kernel: pci_bus 7703:00: busn_res: [bus 00-ff] end is updated to 00
Jul 2 08:01:22.479164 kernel: pci 7703:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul 2 08:01:22.574452 kernel: mlx5_core 7703:00:02.0: firmware version: 14.30.1284
Jul 2 08:01:22.736453 kernel: mlx5_core 7703:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Jul 2 08:01:22.810267 kernel: mlx5_core 7703:00:02.0: Supported tc offload range - chains: 1, prios: 1
Jul 2 08:01:22.810572 kernel: mlx5_core 7703:00:02.0: mlx5e_tc_post_act_init:40:(pid 187): firmware level support is missing
Jul 2 08:01:22.817450 kernel: hv_netvsc 0022489d-201b-0022-489d-201b0022489d eth0: VF registering: eth1
Jul 2 08:01:22.817651 kernel: mlx5_core 7703:00:02.0 eth1: joined to eth0
Jul 2 08:01:22.832449 kernel: mlx5_core 7703:00:02.0 enP30467s1: renamed from eth1
Jul 2 08:01:22.889280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 08:01:22.918455 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (438)
Jul 2 08:01:22.931772 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 08:01:23.116104 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 08:01:23.119330 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 08:01:23.126006 systemd[1]: Starting disk-uuid.service...
Jul 2 08:01:23.151363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 08:01:24.147452 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 08:01:24.148935 disk-uuid[553]: The operation has completed successfully.
Jul 2 08:01:24.225635 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 08:01:24.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.225739 systemd[1]: Finished disk-uuid.service.
Jul 2 08:01:24.233144 systemd[1]: Starting verity-setup.service...
Jul 2 08:01:24.280445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 08:01:24.654946 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 08:01:24.659415 systemd[1]: Finished verity-setup.service.
Jul 2 08:01:24.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.664138 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 08:01:24.735485 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 08:01:24.735868 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 08:01:24.738007 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 08:01:24.738777 systemd[1]: Starting ignition-setup.service...
Jul 2 08:01:24.744057 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 08:01:24.776665 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 08:01:24.776704 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:01:24.776720 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 08:01:24.828284 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 08:01:24.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.833000 audit: BPF prog-id=9 op=LOAD
Jul 2 08:01:24.834280 systemd[1]: Starting systemd-networkd.service...
Jul 2 08:01:24.845476 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 08:01:24.861970 systemd-networkd[826]: lo: Link UP
Jul 2 08:01:24.861981 systemd-networkd[826]: lo: Gained carrier
Jul 2 08:01:24.865927 systemd-networkd[826]: Enumeration completed
Jul 2 08:01:24.867539 systemd[1]: Started systemd-networkd.service.
Jul 2 08:01:24.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.870018 systemd[1]: Reached target network.target.
Jul 2 08:01:24.872165 systemd-networkd[826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:01:24.879046 systemd[1]: Starting iscsiuio.service...
Jul 2 08:01:24.884888 systemd[1]: Started iscsiuio.service.
Jul 2 08:01:24.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.887275 systemd[1]: Starting iscsid.service...
Jul 2 08:01:24.891232 iscsid[835]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 08:01:24.895256 iscsid[835]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul 2 08:01:24.895256 iscsid[835]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 08:01:24.895256 iscsid[835]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 08:01:24.895256 iscsid[835]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 08:01:24.895256 iscsid[835]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 08:01:24.911488 systemd[1]: Started iscsid.service.
Jul 2 08:01:24.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.924937 systemd[1]: Starting dracut-initqueue.service...
Jul 2 08:01:24.935480 systemd[1]: Finished dracut-initqueue.service.
Jul 2 08:01:24.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.939812 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 08:01:24.949600 kernel: mlx5_core 7703:00:02.0 enP30467s1: Link up
Jul 2 08:01:24.944759 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 08:01:24.949646 systemd[1]: Reached target remote-fs.target.
Jul 2 08:01:24.952437 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 08:01:24.963371 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 08:01:24.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.976097 systemd[1]: Finished ignition-setup.service.
Jul 2 08:01:24.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:24.981311 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 08:01:24.996252 kernel: hv_netvsc 0022489d-201b-0022-489d-201b0022489d eth0: Data path switched to VF: enP30467s1
Jul 2 08:01:24.996493 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 08:01:24.996823 systemd-networkd[826]: enP30467s1: Link UP
Jul 2 08:01:24.997081 systemd-networkd[826]: eth0: Link UP
Jul 2 08:01:24.997537 systemd-networkd[826]: eth0: Gained carrier
Jul 2 08:01:25.003866 systemd-networkd[826]: enP30467s1: Gained carrier
Jul 2 08:01:25.054524 systemd-networkd[826]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 2 08:01:26.177669 systemd-networkd[826]: eth0: Gained IPv6LL
Jul 2 08:01:28.509738 ignition[850]: Ignition 2.14.0
Jul 2 08:01:28.509755 ignition[850]: Stage: fetch-offline
Jul 2 08:01:28.509844 ignition[850]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:28.509893 ignition[850]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:28.554963 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:28.613127 ignition[850]: parsed url from cmdline: ""
Jul 2 08:01:28.613216 ignition[850]: no config URL provided
Jul 2 08:01:28.613227 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:01:28.613245 ignition[850]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:01:28.613253 ignition[850]: failed to fetch config: resource requires networking
Jul 2 08:01:28.620590 ignition[850]: Ignition finished successfully
Jul 2 08:01:28.626001 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 08:01:28.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.629998 systemd[1]: Starting ignition-fetch.service...
Jul 2 08:01:28.654907 kernel: kauditd_printk_skb: 18 callbacks suppressed
Jul 2 08:01:28.654946 kernel: audit: type=1130 audit(1719907288.628:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.657782 ignition[856]: Ignition 2.14.0
Jul 2 08:01:28.657792 ignition[856]: Stage: fetch
Jul 2 08:01:28.657926 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:28.657959 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:28.668268 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:28.668417 ignition[856]: parsed url from cmdline: ""
Jul 2 08:01:28.668421 ignition[856]: no config URL provided
Jul 2 08:01:28.670969 ignition[856]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:01:28.670983 ignition[856]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:01:28.671028 ignition[856]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 2 08:01:28.771086 ignition[856]: GET result: OK
Jul 2 08:01:28.771182 ignition[856]: config has been read from IMDS userdata
Jul 2 08:01:28.771202 ignition[856]: parsing config with SHA512: 7e6c153e5d15d17776802c974dc9bae1f0048061d8b3d8d7c2d848e62c4198e51d7be3a5face74fc079ca1c2b85e6ec43ddbfc1cd30d919a9b1cf917bd6df85a
Jul 2 08:01:28.774313 unknown[856]: fetched base config from "system"
Jul 2 08:01:28.774834 ignition[856]: fetch: fetch complete
Jul 2 08:01:28.794434 kernel: audit: type=1130 audit(1719907288.779:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.774323 unknown[856]: fetched base config from "system"
Jul 2 08:01:28.774840 ignition[856]: fetch: fetch passed
Jul 2 08:01:28.774330 unknown[856]: fetched user config from "azure"
Jul 2 08:01:28.774888 ignition[856]: Ignition finished successfully
Jul 2 08:01:28.776575 systemd[1]: Finished ignition-fetch.service.
Jul 2 08:01:28.780952 systemd[1]: Starting ignition-kargs.service...
Jul 2 08:01:28.813379 ignition[862]: Ignition 2.14.0
Jul 2 08:01:28.813390 ignition[862]: Stage: kargs
Jul 2 08:01:28.813553 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:28.813588 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:28.819046 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:28.820105 ignition[862]: kargs: kargs passed
Jul 2 08:01:28.820154 ignition[862]: Ignition finished successfully
Jul 2 08:01:28.826881 systemd[1]: Finished ignition-kargs.service.
Jul 2 08:01:28.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.844344 systemd[1]: Starting ignition-disks.service...
Jul 2 08:01:28.849525 kernel: audit: type=1130 audit(1719907288.831:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.856884 ignition[868]: Ignition 2.14.0
Jul 2 08:01:28.856895 ignition[868]: Stage: disks
Jul 2 08:01:28.857023 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:28.857048 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:28.865156 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:28.868255 ignition[868]: disks: disks passed
Jul 2 08:01:28.868301 ignition[868]: Ignition finished successfully
Jul 2 08:01:28.872768 systemd[1]: Finished ignition-disks.service.
Jul 2 08:01:28.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.875013 systemd[1]: Reached target initrd-root-device.target.
Jul 2 08:01:28.893440 kernel: audit: type=1130 audit(1719907288.874:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.888895 systemd[1]: Reached target local-fs-pre.target.
Jul 2 08:01:28.893405 systemd[1]: Reached target local-fs.target.
Jul 2 08:01:28.895393 systemd[1]: Reached target sysinit.target.
Jul 2 08:01:28.899752 systemd[1]: Reached target basic.target.
Jul 2 08:01:28.902449 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 08:01:28.966303 systemd-fsck[876]: ROOT: clean, 614/7326000 files, 481076/7359488 blocks
Jul 2 08:01:28.976788 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 08:01:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:28.982355 systemd[1]: Mounting sysroot.mount...
Jul 2 08:01:28.996689 kernel: audit: type=1130 audit(1719907288.980:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:29.006441 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 08:01:29.006795 systemd[1]: Mounted sysroot.mount.
Jul 2 08:01:29.009283 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 08:01:29.043793 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 08:01:29.049670 systemd[1]: Starting flatcar-metadata-hostname.service...
Jul 2 08:01:29.055410 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:01:29.055464 systemd[1]: Reached target ignition-diskful.target.
Jul 2 08:01:29.058684 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 08:01:29.118920 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 08:01:29.124241 systemd[1]: Starting initrd-setup-root.service...
Jul 2 08:01:29.141451 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (887)
Jul 2 08:01:29.141499 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 08:01:29.149019 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:01:29.156836 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:01:29.156865 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 08:01:29.160696 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 08:01:29.173091 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:01:29.194698 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:01:29.199777 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:01:29.782161 systemd[1]: Finished initrd-setup-root.service.
Jul 2 08:01:29.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:29.787772 systemd[1]: Starting ignition-mount.service...
Jul 2 08:01:29.802889 kernel: audit: type=1130 audit(1719907289.786:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:29.800875 systemd[1]: Starting sysroot-boot.service...
Jul 2 08:01:29.807483 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Jul 2 08:01:29.807607 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Jul 2 08:01:29.827350 ignition[953]: INFO : Ignition 2.14.0
Jul 2 08:01:29.829763 ignition[953]: INFO : Stage: mount
Jul 2 08:01:29.829763 ignition[953]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:29.829763 ignition[953]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:29.840183 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:29.840183 ignition[953]: INFO : mount: mount passed
Jul 2 08:01:29.840183 ignition[953]: INFO : Ignition finished successfully
Jul 2 08:01:29.842154 systemd[1]: Finished ignition-mount.service.
Jul 2 08:01:29.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:29.866463 kernel: audit: type=1130 audit(1719907289.850:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:29.869371 systemd[1]: Finished sysroot-boot.service.
Jul 2 08:01:29.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:29.886454 kernel: audit: type=1130 audit(1719907289.873:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:30.922269 coreos-metadata[886]: Jul 02 08:01:30.922 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 08:01:30.941456 coreos-metadata[886]: Jul 02 08:01:30.941 INFO Fetch successful
Jul 2 08:01:30.978390 coreos-metadata[886]: Jul 02 08:01:30.978 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 2 08:01:30.995327 coreos-metadata[886]: Jul 02 08:01:30.995 INFO Fetch successful
Jul 2 08:01:31.018494 coreos-metadata[886]: Jul 02 08:01:31.018 INFO wrote hostname ci-3510.3.5-a-a726d90360 to /sysroot/etc/hostname
Jul 2 08:01:31.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:31.020693 systemd[1]: Finished flatcar-metadata-hostname.service.
Jul 2 08:01:31.041454 kernel: audit: type=1130 audit(1719907291.024:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:31.025527 systemd[1]: Starting ignition-files.service...
Jul 2 08:01:31.044926 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 08:01:31.060442 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (966)
Jul 2 08:01:31.069830 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 08:01:31.069866 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:01:31.069879 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 08:01:31.078292 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 08:01:31.092117 ignition[985]: INFO : Ignition 2.14.0
Jul 2 08:01:31.092117 ignition[985]: INFO : Stage: files
Jul 2 08:01:31.096422 ignition[985]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:31.096422 ignition[985]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:31.107084 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:31.172168 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 08:01:31.175587 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 08:01:31.175587 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 08:01:31.233590 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 08:01:31.237555 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:01:31.240799 unknown[985]: wrote ssh authorized keys file for user: core
Jul 2 08:01:31.243400 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:01:31.273078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Jul 2 08:01:31.278623 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 08:01:31.320740 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (987)
Jul 2 08:01:31.295536 systemd[1]: mnt-oem1331659284.mount: Deactivated successfully.
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1331659284"
Jul 2 08:01:31.323352 ignition[985]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1331659284": device or resource busy
Jul 2 08:01:31.323352 ignition[985]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1331659284", trying btrfs: device or resource busy
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1331659284"
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1331659284"
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem1331659284"
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem1331659284"
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2031541529"
Jul 2 08:01:31.323352 ignition[985]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2031541529": device or resource busy
Jul 2 08:01:31.323352 ignition[985]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2031541529", trying btrfs: device or resource busy
Jul 2 08:01:31.323352 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2031541529"
Jul 2 08:01:31.312290 systemd[1]: mnt-oem2031541529.mount: Deactivated successfully.
Jul 2 08:01:31.399124 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2031541529"
Jul 2 08:01:31.399124 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2031541529"
Jul 2 08:01:31.399124 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2031541529"
Jul 2 08:01:31.399124 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 08:01:31.399124 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:01:31.399124 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 08:01:31.891497 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK
Jul 2 08:01:32.303557 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:01:32.303557 ignition[985]: INFO : files: op(f): [started] processing unit "waagent.service"
Jul 2 08:01:32.303557 ignition[985]: INFO : files: op(f): [finished] processing unit "waagent.service"
Jul 2 08:01:32.303557 ignition[985]: INFO : files: op(10): [started] processing unit "nvidia.service"
Jul 2 08:01:32.303557 ignition[985]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Jul 2 08:01:32.303557 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service"
Jul 2 08:01:32.339129 kernel: audit: type=1130 audit(1719907292.317:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.313938 systemd[1]: Finished ignition-files.service.
Jul 2 08:01:32.340341 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service"
Jul 2 08:01:32.340341 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service"
Jul 2 08:01:32.340341 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service"
Jul 2 08:01:32.340341 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:01:32.340341 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:01:32.340341 ignition[985]: INFO : files: files passed
Jul 2 08:01:32.340341 ignition[985]: INFO : Ignition finished successfully
Jul 2 08:01:32.318449 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 08:01:32.367533 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 08:01:32.371798 systemd[1]: Starting ignition-quench.service...
Jul 2 08:01:32.374686 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:01:32.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.374763 systemd[1]: Finished ignition-quench.service.
Jul 2 08:01:32.471642 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:01:32.472988 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 08:01:32.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.481600 systemd[1]: Reached target ignition-complete.target.
Jul 2 08:01:32.486926 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 08:01:32.500331 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:01:32.500466 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 08:01:32.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.504556 systemd[1]: Reached target initrd-fs.target.
Jul 2 08:01:32.508291 systemd[1]: Reached target initrd.target.
Jul 2 08:01:32.510364 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 08:01:32.511192 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 08:01:32.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.523065 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 08:01:32.525994 systemd[1]: Starting initrd-cleanup.service...
Jul 2 08:01:32.537788 systemd[1]: Stopped target nss-lookup.target.
Jul 2 08:01:32.541909 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 08:01:32.546319 systemd[1]: Stopped target timers.target.
Jul 2 08:01:32.550087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:01:32.552337 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 08:01:32.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.556400 systemd[1]: Stopped target initrd.target.
Jul 2 08:01:32.559846 systemd[1]: Stopped target basic.target.
Jul 2 08:01:32.563482 systemd[1]: Stopped target ignition-complete.target.
Jul 2 08:01:32.567810 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 08:01:32.571898 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 08:01:32.575834 systemd[1]: Stopped target remote-fs.target.
Jul 2 08:01:32.579763 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 08:01:32.583675 systemd[1]: Stopped target sysinit.target.
Jul 2 08:01:32.587510 systemd[1]: Stopped target local-fs.target.
Jul 2 08:01:32.591210 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 08:01:32.595389 systemd[1]: Stopped target swap.target.
Jul 2 08:01:32.598803 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:01:32.601138 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 08:01:32.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.604905 systemd[1]: Stopped target cryptsetup.target.
Jul 2 08:01:32.608721 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:01:32.611154 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 08:01:32.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.615118 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:01:32.617796 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 08:01:32.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.622393 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:01:32.624622 systemd[1]: Stopped ignition-files.service.
Jul 2 08:01:32.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.628524 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 08:01:32.631185 systemd[1]: Stopped flatcar-metadata-hostname.service.
Jul 2 08:01:32.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.636616 systemd[1]: Stopping ignition-mount.service...
Jul 2 08:01:32.640073 systemd[1]: Stopping sysroot-boot.service...
Jul 2 08:01:32.642457 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:01:32.642637 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 08:01:32.651251 ignition[1023]: INFO : Ignition 2.14.0
Jul 2 08:01:32.651251 ignition[1023]: INFO : Stage: umount
Jul 2 08:01:32.651251 ignition[1023]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:01:32.651251 ignition[1023]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 08:01:32.651251 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 08:01:32.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.675833 ignition[1023]: INFO : umount: umount passed
Jul 2 08:01:32.675833 ignition[1023]: INFO : Ignition finished successfully
Jul 2 08:01:32.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.655519 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:01:32.658642 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 08:01:32.677371 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:01:32.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.677493 systemd[1]: Stopped ignition-mount.service.
Jul 2 08:01:32.680047 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:01:32.680132 systemd[1]: Finished initrd-cleanup.service.
Jul 2 08:01:32.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.681361 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 08:01:32.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.681407 systemd[1]: Stopped ignition-disks.service.
Jul 2 08:01:32.689734 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 08:01:32.689785 systemd[1]: Stopped ignition-kargs.service.
Jul 2 08:01:32.692408 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 08:01:32.692464 systemd[1]: Stopped ignition-fetch.service.
Jul 2 08:01:32.698962 systemd[1]: Stopped target network.target.
Jul 2 08:01:32.699398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:01:32.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.699447 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 08:01:32.699809 systemd[1]: Stopped target paths.target.
Jul 2 08:01:32.700147 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:01:32.707820 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 08:01:32.708310 systemd[1]: Stopped target slices.target.
Jul 2 08:01:32.708742 systemd[1]: Stopped target sockets.target.
Jul 2 08:01:32.709154 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:01:32.709183 systemd[1]: Closed iscsid.socket.
Jul 2 08:01:32.709577 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:01:32.709604 systemd[1]: Closed iscsiuio.socket.
Jul 2 08:01:32.709939 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 08:01:32.709975 systemd[1]: Stopped ignition-setup.service.
Jul 2 08:01:32.710931 systemd[1]: Stopping systemd-networkd.service...
Jul 2 08:01:32.731608 systemd[1]: Stopping systemd-resolved.service...
Jul 2 08:01:32.739024 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:01:32.739137 systemd[1]: Stopped systemd-resolved.service.
Jul 2 08:01:32.776000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 08:01:32.776529 systemd-networkd[826]: eth0: DHCPv6 lease lost
Jul 2 08:01:32.779516 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:01:32.779629 systemd[1]: Stopped systemd-networkd.service.
Jul 2 08:01:32.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.785750 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:01:32.787000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 08:01:32.785798 systemd[1]: Closed systemd-networkd.socket.
Jul 2 08:01:32.790723 systemd[1]: Stopping network-cleanup.service...
Jul 2 08:01:32.794092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:01:32.794145 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 08:01:32.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.802402 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:01:32.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.802955 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 08:01:32.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.807022 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:01:32.807069 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 08:01:32.811199 systemd[1]: Stopping systemd-udevd.service...
Jul 2 08:01:32.818551 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 08:01:32.818632 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 08:01:32.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.819311 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 08:01:32.819447 systemd[1]: Stopped systemd-udevd.service.
Jul 2 08:01:32.826764 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 08:01:32.826814 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 08:01:32.836558 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 08:01:32.837553 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 08:01:32.843926 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 08:01:32.844012 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 08:01:32.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.851597 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 08:01:32.851638 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 08:01:32.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.859507 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 08:01:32.859565 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 08:01:32.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.866658 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 08:01:32.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.868793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 08:01:32.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.868870 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 2 08:01:32.871455 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:01:32.895422 kernel: hv_netvsc 0022489d-201b-0022-489d-201b0022489d eth0: Data path switched from VF: enP30467s1
Jul 2 08:01:32.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:32.871510 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 08:01:32.875735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 08:01:32.875789 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 08:01:32.888350 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 2 08:01:32.889002 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 08:01:32.889116 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 08:01:32.909517 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:01:32.909617 systemd[1]: Stopped network-cleanup.service.
Jul 2 08:01:32.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:33.659486 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 08:01:33.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:33.659601 systemd[1]: Stopped sysroot-boot.service.
Jul 2 08:01:33.681186 kernel: kauditd_printk_skb: 39 callbacks suppressed
Jul 2 08:01:33.681208 kernel: audit: type=1131 audit(1719907293.663:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:33.664165 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 08:01:33.685965 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 08:01:33.686049 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 08:01:33.691198 systemd[1]: Starting initrd-switch-root.service...
Jul 2 08:01:33.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:33.709443 kernel: audit: type=1131 audit(1719907293.690:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:33.715236 systemd[1]: Switching root.
Jul 2 08:01:33.744085 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jul 2 08:01:33.744161 iscsid[835]: iscsid shutting down.
Jul 2 08:01:33.745927 systemd-journald[183]: Journal stopped
Jul 2 08:01:49.723172 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 08:01:49.723210 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 08:01:49.723230 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 08:01:49.723244 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:01:49.723258 kernel: SELinux: policy capability open_perms=1
Jul 2 08:01:49.723272 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:01:49.723292 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:01:49.723310 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:01:49.723326 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:01:49.723340 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:01:49.723355 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:01:49.723369 kernel: audit: type=1403 audit(1719907296.346:80): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 08:01:49.723387 systemd[1]: Successfully loaded SELinux policy in 275.827ms.
Jul 2 08:01:49.723406 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.916ms.
Jul 2 08:01:49.723455 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:01:49.723476 systemd[1]: Detected virtualization microsoft.
Jul 2 08:01:49.723491 systemd[1]: Detected architecture x86-64.
Jul 2 08:01:49.723511 systemd[1]: Detected first boot.
Jul 2 08:01:49.723533 systemd[1]: Hostname set to .
Jul 2 08:01:49.723550 systemd[1]: Initializing machine ID from random generator.
Jul 2 08:01:49.723568 kernel: audit: type=1400 audit(1719907297.183:81): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 08:01:49.723585 kernel: audit: type=1400 audit(1719907297.200:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:01:49.723602 kernel: audit: type=1400 audit(1719907297.200:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:01:49.723620 kernel: audit: type=1334 audit(1719907297.225:84): prog-id=10 op=LOAD
Jul 2 08:01:49.723634 kernel: audit: type=1334 audit(1719907297.225:85): prog-id=10 op=UNLOAD
Jul 2 08:01:49.723650 kernel: audit: type=1334 audit(1719907297.230:86): prog-id=11 op=LOAD
Jul 2 08:01:49.723662 kernel: audit: type=1334 audit(1719907297.230:87): prog-id=11 op=UNLOAD
Jul 2 08:01:49.723682 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 08:01:49.723696 kernel: audit: type=1400 audit(1719907298.924:88): avc: denied { associate } for pid=1056 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 08:01:49.723710 kernel: audit: type=1300 audit(1719907298.924:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1039 pid=1056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:01:49.723723 kernel: audit: type=1327 audit(1719907298.924:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 08:01:49.723737 kernel: audit: type=1400 audit(1719907298.932:89): avc: denied { associate } for pid=1056 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 08:01:49.723753 kernel: audit: type=1300 audit(1719907298.932:89): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1039 pid=1056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:01:49.725283 kernel: audit: type=1307 audit(1719907298.932:89): cwd="/"
Jul 2 08:01:49.725303 kernel: audit: type=1302 audit(1719907298.932:89): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:49.725319 kernel: audit: type=1302 audit(1719907298.932:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:49.725335 kernel: audit: type=1327 audit(1719907298.932:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 08:01:49.725351 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:01:49.725372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:01:49.725387 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:01:49.725410 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:01:49.725435 kernel: audit: type=1334 audit(1719907309.196:90): prog-id=12 op=LOAD
Jul 2 08:01:49.725450 kernel: audit: type=1334 audit(1719907309.196:91): prog-id=3 op=UNLOAD
Jul 2 08:01:49.725464 kernel: audit: type=1334 audit(1719907309.202:92): prog-id=13 op=LOAD
Jul 2 08:01:49.725480 kernel: audit: type=1334 audit(1719907309.207:93): prog-id=14 op=LOAD
Jul 2 08:01:49.725494 kernel: audit: type=1334 audit(1719907309.207:94): prog-id=4 op=UNLOAD
Jul 2 08:01:49.725512 kernel: audit: type=1334 audit(1719907309.207:95): prog-id=5 op=UNLOAD
Jul 2 08:01:49.725526 kernel: audit: type=1334 audit(1719907309.212:96): prog-id=15 op=LOAD
Jul 2 08:01:49.725540 kernel: audit: type=1334 audit(1719907309.212:97): prog-id=12 op=UNLOAD
Jul 2 08:01:49.725553 kernel: audit: type=1334 audit(1719907309.218:98): prog-id=16 op=LOAD
Jul 2 08:01:49.725566 kernel: audit: type=1334 audit(1719907309.223:99): prog-id=17 op=LOAD
Jul 2 08:01:49.725581 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 08:01:49.725612 systemd[1]: Stopped iscsiuio.service.
Jul 2 08:01:49.725626 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 08:01:49.725641 systemd[1]: Stopped iscsid.service.
Jul 2 08:01:49.725651 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 08:01:49.725663 systemd[1]: Stopped initrd-switch-root.service.
Jul 2 08:01:49.725673 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:01:49.725686 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 08:01:49.725697 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 08:01:49.725708 systemd[1]: Created slice system-getty.slice.
Jul 2 08:01:49.725718 systemd[1]: Created slice system-modprobe.slice.
Jul 2 08:01:49.725732 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 08:01:49.725745 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 08:01:49.725755 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 08:01:49.725768 systemd[1]: Created slice user.slice.
Jul 2 08:01:49.725780 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:01:49.725789 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 08:01:49.725799 systemd[1]: Set up automount boot.automount.
Jul 2 08:01:49.725811 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 08:01:49.725824 systemd[1]: Stopped target initrd-switch-root.target.
Jul 2 08:01:49.725835 systemd[1]: Stopped target initrd-fs.target.
Jul 2 08:01:49.725848 systemd[1]: Stopped target initrd-root-fs.target.
Jul 2 08:01:49.725858 systemd[1]: Reached target integritysetup.target.
Jul 2 08:01:49.725870 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 08:01:49.725879 systemd[1]: Reached target remote-fs.target.
Jul 2 08:01:49.725894 systemd[1]: Reached target slices.target.
Jul 2 08:01:49.725906 systemd[1]: Reached target swap.target.
Jul 2 08:01:49.725916 systemd[1]: Reached target torcx.target.
Jul 2 08:01:49.725930 systemd[1]: Reached target veritysetup.target.
Jul 2 08:01:49.725941 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 08:01:49.725953 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 08:01:49.725963 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:01:49.725977 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:01:49.725990 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:01:49.726000 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 08:01:49.726011 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 08:01:49.726021 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 08:01:49.726034 systemd[1]: Mounting media.mount...
Jul 2 08:01:49.726044 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:01:49.726057 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 08:01:49.726068 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 08:01:49.726081 systemd[1]: Mounting tmp.mount...
Jul 2 08:01:49.726093 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 08:01:49.726104 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 08:01:49.726116 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:01:49.726125 systemd[1]: Starting modprobe@configfs.service...
Jul 2 08:01:49.726138 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 08:01:49.726151 systemd[1]: Starting modprobe@drm.service...
Jul 2 08:01:49.726161 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 08:01:49.726172 systemd[1]: Starting modprobe@fuse.service...
Jul 2 08:01:49.726185 systemd[1]: Starting modprobe@loop.service...
Jul 2 08:01:49.726198 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:01:49.726208 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 08:01:49.726220 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 08:01:49.726233 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 08:01:49.726242 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 08:01:49.726254 systemd[1]: Stopped systemd-journald.service.
Jul 2 08:01:49.726264 systemd[1]: Starting systemd-journald.service...
Jul 2 08:01:49.726276 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:01:49.726288 systemd[1]: Starting systemd-network-generator.service...
Jul 2 08:01:49.726300 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 08:01:49.726313 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 08:01:49.726322 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 08:01:49.726334 systemd[1]: Stopped verity-setup.service.
Jul 2 08:01:49.726344 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:01:49.726357 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 08:01:49.726367 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 08:01:49.726387 systemd[1]: Mounted media.mount.
Jul 2 08:01:49.726399 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 08:01:49.726409 kernel: loop: module loaded
Jul 2 08:01:49.726432 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 08:01:49.726444 systemd[1]: Mounted tmp.mount.
Jul 2 08:01:49.726456 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 2 08:01:49.726473 systemd-journald[1166]: Journal started
Jul 2 08:01:49.726524 systemd-journald[1166]: Runtime Journal (/run/log/journal/183b781b98ea47a18d75128bdba14987) is 8.0M, max 159.0M, 151.0M free.
Jul 2 08:01:36.346000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 08:01:37.183000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 08:01:37.200000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:01:37.200000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:01:37.225000 audit: BPF prog-id=10 op=LOAD
Jul 2 08:01:37.225000 audit: BPF prog-id=10 op=UNLOAD
Jul 2 08:01:37.230000 audit: BPF prog-id=11 op=LOAD
Jul 2 08:01:37.230000 audit: BPF prog-id=11 op=UNLOAD
Jul 2 08:01:38.924000 audit[1056]: AVC avc: denied { associate } for pid=1056 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 08:01:38.924000 audit[1056]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1039 pid=1056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:01:38.924000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 08:01:38.932000 audit[1056]: AVC avc: denied { associate } for pid=1056 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 08:01:38.932000 audit[1056]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1039 pid=1056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:01:38.932000 audit: CWD cwd="/"
Jul 2 08:01:38.932000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:38.932000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:38.932000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 08:01:49.196000 audit: BPF prog-id=12 op=LOAD
Jul 2 08:01:49.196000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 08:01:49.202000 audit: BPF prog-id=13 op=LOAD
Jul 2 08:01:49.207000 audit: BPF prog-id=14 op=LOAD
Jul 2 08:01:49.207000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 08:01:49.207000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 08:01:49.212000 audit: BPF prog-id=15 op=LOAD
Jul 2 08:01:49.212000 audit: BPF prog-id=12 op=UNLOAD
Jul 2 08:01:49.218000 audit: BPF prog-id=16 op=LOAD
Jul 2 08:01:49.223000 audit: BPF prog-id=17 op=LOAD
Jul 2 08:01:49.223000 audit: BPF prog-id=13 op=UNLOAD
Jul 2 08:01:49.223000 audit: BPF prog-id=14 op=UNLOAD
Jul 2 08:01:49.228000 audit: BPF prog-id=18 op=LOAD
Jul 2 08:01:49.228000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 08:01:49.249000 audit: BPF prog-id=19 op=LOAD
Jul 2 08:01:49.249000 audit: BPF prog-id=20 op=LOAD
Jul 2 08:01:49.249000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 08:01:49.249000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 08:01:49.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.269000 audit: BPF prog-id=18 op=UNLOAD
Jul 2 08:01:49.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.612000 audit: BPF prog-id=21 op=LOAD
Jul 2 08:01:49.613000 audit: BPF prog-id=22 op=LOAD
Jul 2 08:01:49.613000 audit: BPF prog-id=23 op=LOAD
Jul 2 08:01:49.613000 audit: BPF prog-id=19 op=UNLOAD
Jul 2 08:01:49.613000 audit: BPF prog-id=20 op=UNLOAD
Jul 2 08:01:49.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.719000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 08:01:49.719000 audit[1166]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdffb6d1f0 a2=4000 a3=7ffdffb6d28c items=0 ppid=1 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:01:49.719000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 08:01:38.891727 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:01:49.195347 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 08:01:38.892237 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 08:01:49.250487 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 08:01:38.892259 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 08:01:38.892296 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Jul 2 08:01:38.892307 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="skipped missing lower profile" missing profile=oem
Jul 2 08:01:38.892351 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Jul 2 08:01:38.892366 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Jul 2 08:01:38.892613 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Jul 2 08:01:38.892671 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 08:01:38.892687 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 08:01:38.908662 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Jul 2 08:01:38.908699 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Jul 2 08:01:38.908716 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5
Jul 2 08:01:38.908730 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Jul 2 08:01:38.908746 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5
Jul 2 08:01:38.908759 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Jul 2 08:01:47.978316 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:01:47.978789 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:01:47.978916 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:01:47.979083 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:01:47.979130 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Jul 2 08:01:47.979184 /usr/lib/systemd/system-generators/torcx-generator[1056]: time="2024-07-02T08:01:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Jul 2 08:01:49.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.736580 systemd[1]: Started systemd-journald.service.
Jul 2 08:01:49.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.739545 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:01:49.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.742133 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:01:49.742277 systemd[1]: Finished modprobe@configfs.service.
Jul 2 08:01:49.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.744915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:01:49.745053 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 08:01:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.747564 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:01:49.747704 systemd[1]: Finished modprobe@drm.service.
Jul 2 08:01:49.760753 kernel: fuse: init (API version 7.34)
Jul 2 08:01:49.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.750204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:01:49.750352 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 08:01:49.753337 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:01:49.753484 systemd[1]: Finished modprobe@loop.service.
Jul 2 08:01:49.755894 systemd[1]: Finished systemd-network-generator.service.
Jul 2 08:01:49.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.761349 systemd[1]: Finished systemd-remount-fs.service.
Jul 2 08:01:49.764777 systemd[1]: Reached target network-pre.target.
Jul 2 08:01:49.768899 systemd[1]: Mounting sys-kernel-config.mount...
Jul 2 08:01:49.771352 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 08:01:49.774050 systemd[1]: Starting systemd-hwdb-update.service...
Jul 2 08:01:49.777696 systemd[1]: Starting systemd-journal-flush.service...
Jul 2 08:01:49.779975 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:01:49.781563 systemd[1]: Starting systemd-random-seed.service...
Jul 2 08:01:49.783562 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 08:01:49.785382 systemd[1]: Starting systemd-sysusers.service...
Jul 2 08:01:49.791306 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 08:01:49.791646 systemd[1]: Finished modprobe@fuse.service.
Jul 2 08:01:49.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.793862 systemd[1]: Mounted sys-kernel-config.mount.
Jul 2 08:01:49.798997 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 2 08:01:49.805398 systemd[1]: Finished systemd-modules-load.service.
Jul 2 08:01:49.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.812868 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 2 08:01:49.816505 systemd[1]: Starting systemd-sysctl.service...
Jul 2 08:01:49.820279 systemd-journald[1166]: Time spent on flushing to /var/log/journal/183b781b98ea47a18d75128bdba14987 is 24.854ms for 1152 entries.
Jul 2 08:01:49.820279 systemd-journald[1166]: System Journal (/var/log/journal/183b781b98ea47a18d75128bdba14987) is 8.0M, max 2.6G, 2.6G free.
Jul 2 08:01:49.918453 systemd-journald[1166]: Received client request to flush runtime journal.
Jul 2 08:01:49.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.831702 systemd[1]: Finished systemd-random-seed.service.
Jul 2 08:01:49.834422 systemd[1]: Reached target first-boot-complete.target.
Jul 2 08:01:49.860470 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 08:01:49.919684 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 08:01:49.864117 systemd[1]: Starting systemd-udev-settle.service...
Jul 2 08:01:49.917968 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:01:49.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:49.921043 systemd[1]: Finished systemd-journal-flush.service.
Jul 2 08:01:50.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:50.482199 systemd[1]: Finished systemd-sysusers.service.
Jul 2 08:01:50.486377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 08:01:50.935901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 08:01:50.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:51.027104 systemd[1]: Finished systemd-hwdb-update.service.
Jul 2 08:01:51.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:51.029000 audit: BPF prog-id=24 op=LOAD
Jul 2 08:01:51.029000 audit: BPF prog-id=25 op=LOAD
Jul 2 08:01:51.029000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 08:01:51.029000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 08:01:51.030914 systemd[1]: Starting systemd-udevd.service...
Jul 2 08:01:51.049223 systemd-udevd[1185]: Using default interface naming scheme 'v252'.
Jul 2 08:01:51.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:51.363000 audit: BPF prog-id=26 op=LOAD
Jul 2 08:01:51.359497 systemd[1]: Started systemd-udevd.service.
Jul 2 08:01:51.364801 systemd[1]: Starting systemd-networkd.service...
Jul 2 08:01:51.404559 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Jul 2 08:01:51.464000 audit[1208]: AVC avc: denied { confidentiality } for pid=1208 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 08:01:51.471544 kernel: hv_vmbus: registering driver hv_balloon
Jul 2 08:01:51.493345 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 08:01:51.493685 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 2 08:01:51.464000 audit[1208]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fac0ffb1e0 a1=f884 a2=7f866d5babc5 a3=5 items=12 ppid=1185 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:01:51.464000 audit: CWD cwd="/"
Jul 2 08:01:51.464000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=1 name=(null) inode=15072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=2 name=(null) inode=15072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=3 name=(null) inode=15073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=4 name=(null) inode=15072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=5 name=(null) inode=15074 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=6 name=(null) inode=15072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=7 name=(null) inode=15075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=8 name=(null) inode=15072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=9 name=(null) inode=15076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=10 name=(null) inode=15072 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PATH item=11 name=(null) inode=15077 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:01:51.464000 audit: PROCTITLE proctitle="(udev-worker)"
Jul 2 08:01:51.514244 kernel: hv_utils: Registering HyperV Utility Driver
Jul 2 08:01:51.514342 kernel: hv_vmbus: registering driver hv_utils
Jul 2 08:01:51.510000 audit: BPF prog-id=27 op=LOAD
Jul 2 08:01:51.513000 audit: BPF prog-id=28 op=LOAD
Jul 2 08:01:51.513000 audit: BPF prog-id=29 op=LOAD
Jul 2 08:01:51.523624 kernel: hv_vmbus: registering driver hyperv_fb
Jul 2 08:01:51.528839 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 2 08:01:51.535441 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 2 08:01:51.535513 kernel: hv_utils: Heartbeat IC version 3.0
Jul 2 08:01:51.540929 kernel: hv_utils: Shutdown IC version 3.2
Jul 2 08:01:51.540999 kernel: hv_utils: TimeSync IC version 4.0
Jul 2 08:01:51.540893 systemd[1]: Starting systemd-userdbd.service...
Jul 2 08:01:51.979043 kernel: Console: switching to colour dummy device 80x25
Jul 2 08:01:51.986580 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 08:01:52.039904 systemd[1]: Started systemd-userdbd.service.
Jul 2 08:01:52.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:52.124286 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1194)
Jul 2 08:01:52.226914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 08:01:52.318298 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Jul 2 08:01:52.361749 systemd-networkd[1196]: lo: Link UP
Jul 2 08:01:52.361760 systemd-networkd[1196]: lo: Gained carrier
Jul 2 08:01:52.362368 systemd-networkd[1196]: Enumeration completed
Jul 2 08:01:52.362501 systemd[1]: Started systemd-networkd.service.
Jul 2 08:01:52.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:01:52.366211 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 08:01:52.393437 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:01:52.447298 kernel: mlx5_core 7703:00:02.0 enP30467s1: Link up Jul 2 08:01:52.468926 systemd-networkd[1196]: enP30467s1: Link UP Jul 2 08:01:52.469332 kernel: hv_netvsc 0022489d-201b-0022-489d-201b0022489d eth0: Data path switched to VF: enP30467s1 Jul 2 08:01:52.469070 systemd-networkd[1196]: eth0: Link UP Jul 2 08:01:52.469075 systemd-networkd[1196]: eth0: Gained carrier Jul 2 08:01:52.474536 systemd-networkd[1196]: enP30467s1: Gained carrier Jul 2 08:01:52.500397 systemd-networkd[1196]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 08:01:52.573679 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:01:52.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:52.577452 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:01:52.982392 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:01:53.012381 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:01:53.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.015363 systemd[1]: Reached target cryptsetup.target. Jul 2 08:01:53.018971 systemd[1]: Starting lvm2-activation.service... Jul 2 08:01:53.023608 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:01:53.042314 systemd[1]: Finished lvm2-activation.service. Jul 2 08:01:53.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:53.044687 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:01:53.046668 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:01:53.046703 systemd[1]: Reached target local-fs.target. Jul 2 08:01:53.048623 systemd[1]: Reached target machines.target. Jul 2 08:01:53.051922 systemd[1]: Starting ldconfig.service... Jul 2 08:01:53.054081 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.054183 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:01:53.055569 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:01:53.059256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:01:53.063368 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:01:53.066875 systemd[1]: Starting systemd-sysext.service... Jul 2 08:01:53.137607 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl) Jul 2 08:01:53.139363 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:01:53.346914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 08:01:53.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.366729 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:01:53.488803 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:01:53.489072 systemd[1]: Unmounted usr-share-oem.mount. 
Jul 2 08:01:53.585291 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 08:01:53.608587 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:01:53.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.609275 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:01:53.619300 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:01:53.639300 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 08:01:53.643714 (sd-sysext)[1280]: Using extensions 'kubernetes'. Jul 2 08:01:53.644140 (sd-sysext)[1280]: Merged extensions into '/usr'. Jul 2 08:01:53.660061 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:01:53.661614 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:01:53.663387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.667180 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:01:53.669461 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:01:53.673482 systemd[1]: Starting modprobe@loop.service... Jul 2 08:01:53.675234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.675413 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:01:53.675554 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:01:53.676925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:01:53.677337 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 08:01:53.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.678744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:01:53.678853 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:01:53.679765 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:01:53.679876 systemd[1]: Finished modprobe@loop.service. Jul 2 08:01:53.680782 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 08:01:53.680932 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.686209 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:01:53.689807 systemd[1]: Finished systemd-sysext.service. Jul 2 08:01:53.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.693419 systemd[1]: Starting ensure-sysext.service... Jul 2 08:01:53.695622 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:01:53.707464 systemd[1]: Reloading. Jul 2 08:01:53.714614 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:01:53.716484 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:01:53.748414 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:01:53.764986 /usr/lib/systemd/system-generators/torcx-generator[1306]: time="2024-07-02T08:01:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:01:53.765029 /usr/lib/systemd/system-generators/torcx-generator[1306]: time="2024-07-02T08:01:53Z" level=info msg="torcx already run" Jul 2 08:01:53.865695 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:01:53.865716 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 08:01:53.882285 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:01:53.949000 audit: BPF prog-id=30 op=LOAD Jul 2 08:01:53.949000 audit: BPF prog-id=21 op=UNLOAD Jul 2 08:01:53.949000 audit: BPF prog-id=31 op=LOAD Jul 2 08:01:53.949000 audit: BPF prog-id=32 op=LOAD Jul 2 08:01:53.949000 audit: BPF prog-id=22 op=UNLOAD Jul 2 08:01:53.949000 audit: BPF prog-id=23 op=UNLOAD Jul 2 08:01:53.950000 audit: BPF prog-id=33 op=LOAD Jul 2 08:01:53.950000 audit: BPF prog-id=27 op=UNLOAD Jul 2 08:01:53.950000 audit: BPF prog-id=34 op=LOAD Jul 2 08:01:53.950000 audit: BPF prog-id=35 op=LOAD Jul 2 08:01:53.950000 audit: BPF prog-id=28 op=UNLOAD Jul 2 08:01:53.950000 audit: BPF prog-id=29 op=UNLOAD Jul 2 08:01:53.952000 audit: BPF prog-id=36 op=LOAD Jul 2 08:01:53.952000 audit: BPF prog-id=26 op=UNLOAD Jul 2 08:01:53.952000 audit: BPF prog-id=37 op=LOAD Jul 2 08:01:53.952000 audit: BPF prog-id=38 op=LOAD Jul 2 08:01:53.952000 audit: BPF prog-id=24 op=UNLOAD Jul 2 08:01:53.952000 audit: BPF prog-id=25 op=UNLOAD Jul 2 08:01:53.965696 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:01:53.965964 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.967419 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:01:53.970827 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:01:53.973605 systemd[1]: Starting modprobe@loop.service... Jul 2 08:01:53.974777 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.974988 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 08:01:53.975197 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:01:53.976757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:01:53.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.977007 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:01:53.978608 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:01:53.978715 systemd[1]: Finished modprobe@loop.service. Jul 2 08:01:53.984667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:01:53.984814 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:01:53.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:53.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.987794 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:01:53.988086 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.989336 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:01:53.992778 systemd[1]: Starting modprobe@drm.service... Jul 2 08:01:53.995981 systemd[1]: Starting modprobe@loop.service... Jul 2 08:01:53.998187 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:01:53.998427 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:01:53.998580 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:01:53.998739 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:01:54.000018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:01:54.000177 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:01:54.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:54.003102 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:01:54.003248 systemd[1]: Finished modprobe@drm.service. Jul 2 08:01:54.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.005733 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:01:54.005872 systemd[1]: Finished modprobe@loop.service. Jul 2 08:01:54.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.009196 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:01:54.010622 systemd[1]: Finished ensure-sysext.service. Jul 2 08:01:54.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.127095 systemd-fsck[1276]: fsck.fat 4.2 (2021-01-31) Jul 2 08:01:54.127095 systemd-fsck[1276]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 08:01:54.129306 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 2 08:01:54.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.135067 systemd[1]: Mounting boot.mount... Jul 2 08:01:54.148754 systemd[1]: Mounted boot.mount. Jul 2 08:01:54.161754 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:01:54.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.318532 systemd-networkd[1196]: eth0: Gained IPv6LL Jul 2 08:01:54.324173 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 08:01:54.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.974617 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 08:01:54.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.978945 systemd[1]: Starting audit-rules.service... Jul 2 08:01:54.984011 kernel: kauditd_printk_skb: 125 callbacks suppressed Jul 2 08:01:54.984071 kernel: audit: type=1130 audit(1719907314.976:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.995350 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:01:54.998850 systemd[1]: Starting systemd-journal-catalog-update.service... 
Jul 2 08:01:55.003568 systemd[1]: Starting systemd-resolved.service... Jul 2 08:01:55.010379 kernel: audit: type=1334 audit(1719907315.001:209): prog-id=39 op=LOAD Jul 2 08:01:55.001000 audit: BPF prog-id=39 op=LOAD Jul 2 08:01:55.011189 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:01:55.008000 audit: BPF prog-id=40 op=LOAD Jul 2 08:01:55.017109 kernel: audit: type=1334 audit(1719907315.008:210): prog-id=40 op=LOAD Jul 2 08:01:55.018644 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:01:55.045000 audit[1386]: SYSTEM_BOOT pid=1386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.064463 kernel: audit: type=1127 audit(1719907315.045:211): pid=1386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.062888 systemd[1]: Finished systemd-update-utmp.service. Jul 2 08:01:55.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.080295 kernel: audit: type=1130 audit(1719907315.064:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.091083 systemd[1]: Finished clean-ca-certificates.service. Jul 2 08:01:55.093538 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 2 08:01:55.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.106477 kernel: audit: type=1130 audit(1719907315.092:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.166631 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 08:01:55.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.170130 systemd[1]: Started systemd-timesyncd.service. Jul 2 08:01:55.183616 kernel: audit: type=1130 audit(1719907315.169:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.185724 systemd[1]: Reached target time-set.target. Jul 2 08:01:55.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.201375 kernel: audit: type=1130 audit(1719907315.184:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:55.248750 systemd-resolved[1384]: Positive Trust Anchors: Jul 2 08:01:55.248768 systemd-resolved[1384]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:01:55.248808 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:01:55.316223 augenrules[1401]: No rules Jul 2 08:01:55.314000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:01:55.317203 systemd[1]: Finished audit-rules.service. Jul 2 08:01:55.314000 audit[1401]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc545f69b0 a2=420 a3=0 items=0 ppid=1380 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:01:55.344282 kernel: audit: type=1305 audit(1719907315.314:216): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:01:55.344369 kernel: audit: type=1300 audit(1719907315.314:216): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc545f69b0 a2=420 a3=0 items=0 ppid=1380 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:01:55.314000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 08:01:55.345734 systemd-resolved[1384]: Using system hostname 'ci-3510.3.5-a-a726d90360'. 
Jul 2 08:01:55.346576 systemd-timesyncd[1385]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Jul 2 08:01:55.346952 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2024-07-02 08:01:55.347729 UTC. Jul 2 08:01:55.347616 systemd[1]: Started systemd-resolved.service. Jul 2 08:01:55.350416 systemd[1]: Reached target network.target. Jul 2 08:01:55.352510 systemd[1]: Reached target network-online.target. Jul 2 08:01:55.355105 systemd[1]: Reached target nss-lookup.target. Jul 2 08:02:01.671514 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:02:01.684382 systemd[1]: Finished ldconfig.service. Jul 2 08:02:01.688559 systemd[1]: Starting systemd-update-done.service... Jul 2 08:02:01.695398 systemd[1]: Finished systemd-update-done.service. Jul 2 08:02:01.697714 systemd[1]: Reached target sysinit.target. Jul 2 08:02:01.699691 systemd[1]: Started motdgen.path. Jul 2 08:02:01.701550 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 08:02:01.704247 systemd[1]: Started logrotate.timer. Jul 2 08:02:01.706377 systemd[1]: Started mdadm.timer. Jul 2 08:02:01.708104 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 08:02:01.710233 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:02:01.710292 systemd[1]: Reached target paths.target. Jul 2 08:02:01.712200 systemd[1]: Reached target timers.target. Jul 2 08:02:01.714318 systemd[1]: Listening on dbus.socket. Jul 2 08:02:01.717359 systemd[1]: Starting docker.socket... Jul 2 08:02:01.721528 systemd[1]: Listening on sshd.socket. Jul 2 08:02:01.723578 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:01.724045 systemd[1]: Listening on docker.socket. 
Jul 2 08:02:01.726001 systemd[1]: Reached target sockets.target. Jul 2 08:02:01.727874 systemd[1]: Reached target basic.target. Jul 2 08:02:01.729798 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:02:01.729832 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:02:01.730918 systemd[1]: Starting containerd.service... Jul 2 08:02:01.734195 systemd[1]: Starting dbus.service... Jul 2 08:02:01.737341 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 08:02:01.740905 systemd[1]: Starting extend-filesystems.service... Jul 2 08:02:01.744025 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 08:02:01.745542 systemd[1]: Starting kubelet.service... Jul 2 08:02:01.748635 systemd[1]: Starting motdgen.service... Jul 2 08:02:01.751759 systemd[1]: Started nvidia.service. Jul 2 08:02:01.755358 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 08:02:01.759129 systemd[1]: Starting sshd-keygen.service... Jul 2 08:02:01.764451 systemd[1]: Starting systemd-logind.service... Jul 2 08:02:01.769849 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:01.769951 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:02:01.770477 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:02:01.771433 systemd[1]: Starting update-engine.service... Jul 2 08:02:01.775416 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 08:02:01.783093 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 2 08:02:01.783375 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 08:02:01.876294 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 08:02:01.876507 systemd[1]: Finished motdgen.service.
Jul 2 08:02:01.881436 jq[1426]: true
Jul 2 08:02:01.883825 jq[1411]: false
Jul 2 08:02:01.884642 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 08:02:01.884847 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 08:02:01.907521 jq[1437]: true
Jul 2 08:02:01.912075 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 08:02:01.918522 systemd-logind[1421]: New seat seat0.
Jul 2 08:02:01.924678 extend-filesystems[1412]: Found loop1
Jul 2 08:02:01.940594 extend-filesystems[1412]: Found sda
Jul 2 08:02:01.942476 extend-filesystems[1412]: Found sda1
Jul 2 08:02:01.942476 extend-filesystems[1412]: Found sda2
Jul 2 08:02:01.946666 extend-filesystems[1412]: Found sda3
Jul 2 08:02:01.946666 extend-filesystems[1412]: Found usr
Jul 2 08:02:01.946666 extend-filesystems[1412]: Found sda4
Jul 2 08:02:01.946666 extend-filesystems[1412]: Found sda6
Jul 2 08:02:01.946666 extend-filesystems[1412]: Found sda7
Jul 2 08:02:01.946666 extend-filesystems[1412]: Found sda9
Jul 2 08:02:01.946666 extend-filesystems[1412]: Checking size of /dev/sda9
Jul 2 08:02:01.991719 env[1433]: time="2024-07-02T08:02:01.991581851Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 08:02:02.042471 extend-filesystems[1412]: Old size kept for /dev/sda9
Jul 2 08:02:02.044971 extend-filesystems[1412]: Found sr0
Jul 2 08:02:02.047423 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 08:02:02.047641 systemd[1]: Finished extend-filesystems.service.
Jul 2 08:02:02.068411 env[1433]: time="2024-07-02T08:02:02.068207461Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 08:02:02.068411 env[1433]: time="2024-07-02T08:02:02.068411772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:02:02.072998 env[1433]: time="2024-07-02T08:02:02.072819506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:02:02.072998 env[1433]: time="2024-07-02T08:02:02.072865909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073181 env[1433]: time="2024-07-02T08:02:02.073149724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073181 env[1433]: time="2024-07-02T08:02:02.073177125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073290 env[1433]: time="2024-07-02T08:02:02.073195526Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 08:02:02.073290 env[1433]: time="2024-07-02T08:02:02.073212127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073369 env[1433]: time="2024-07-02T08:02:02.073331433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073915 env[1433]: time="2024-07-02T08:02:02.073595647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073915 env[1433]: time="2024-07-02T08:02:02.073780657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 08:02:02.073915 env[1433]: time="2024-07-02T08:02:02.073804559Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 08:02:02.073915 env[1433]: time="2024-07-02T08:02:02.073864962Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 08:02:02.073915 env[1433]: time="2024-07-02T08:02:02.073879663Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090012021Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090061723Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090080624Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090120026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090140127Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090162729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090182430Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090202031Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090220632Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090240033Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090257834Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090289235Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090423143Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 08:02:02.090739 env[1433]: time="2024-07-02T08:02:02.090513347Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.090847565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.090886567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.090907468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.090970872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.090990073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091007874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091026075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091043976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091061676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091077577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091093878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091323 env[1433]: time="2024-07-02T08:02:02.091113879Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091365793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091402495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091421596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091442497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091464698Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091482199Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091506100Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 08:02:02.091737 env[1433]: time="2024-07-02T08:02:02.091545302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 08:02:02.092010 env[1433]: time="2024-07-02T08:02:02.091802016Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 08:02:02.092010 env[1433]: time="2024-07-02T08:02:02.091876020Z" level=info msg="Connect containerd service"
Jul 2 08:02:02.092010 env[1433]: time="2024-07-02T08:02:02.091923622Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108278092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108378298Z" level=info msg="Start subscribing containerd event"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108435801Z" level=info msg="Start recovering state"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108513605Z" level=info msg="Start event monitor"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108531306Z" level=info msg="Start snapshots syncer"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108548107Z" level=info msg="Start cni network conf syncer for default"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.108562407Z" level=info msg="Start streaming server"
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.109111137Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 08:02:02.130498 env[1433]: time="2024-07-02T08:02:02.110641818Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 08:02:02.094211 systemd[1]: Started dbus.service.
Jul 2 08:02:02.092619 dbus-daemon[1410]: [system] SELinux support is enabled
Jul 2 08:02:02.099561 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 08:02:02.120369 dbus-daemon[1410]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 08:02:02.099594 systemd[1]: Reached target system-config.target.
Jul 2 08:02:02.102354 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 08:02:02.102380 systemd[1]: Reached target user-config.target.
Jul 2 08:02:02.110794 systemd[1]: Started containerd.service.
Jul 2 08:02:02.115701 systemd[1]: Started systemd-logind.service.
Jul 2 08:02:02.141633 env[1433]: time="2024-07-02T08:02:02.141405454Z" level=info msg="containerd successfully booted in 0.151318s"
Jul 2 08:02:02.151928 bash[1460]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 08:02:02.152647 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 08:02:02.294824 systemd[1]: nvidia.service: Deactivated successfully.
Jul 2 08:02:02.717360 sshd_keygen[1424]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 08:02:02.743712 systemd[1]: Finished sshd-keygen.service.
Jul 2 08:02:02.748368 systemd[1]: Starting issuegen.service...
Jul 2 08:02:02.752505 systemd[1]: Started waagent.service.
Jul 2 08:02:02.762656 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 08:02:02.762856 systemd[1]: Finished issuegen.service.
Jul 2 08:02:02.767343 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 08:02:02.774914 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 08:02:02.780066 systemd[1]: Started getty@tty1.service.
Jul 2 08:02:02.783955 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 08:02:02.786644 systemd[1]: Reached target getty.target.
Jul 2 08:02:02.815408 update_engine[1425]: I0702 08:02:02.814416 1425 main.cc:92] Flatcar Update Engine starting
Jul 2 08:02:02.856664 systemd[1]: Started kubelet.service.
Jul 2 08:02:02.909710 systemd[1]: Started update-engine.service.
Jul 2 08:02:02.912499 update_engine[1425]: I0702 08:02:02.910635 1425 update_check_scheduler.cc:74] Next update check in 2m11s
Jul 2 08:02:02.914916 systemd[1]: Started locksmithd.service.
Jul 2 08:02:02.917671 systemd[1]: Reached target multi-user.target.
Jul 2 08:02:02.922756 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 08:02:02.941982 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 08:02:02.942189 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 08:02:02.944994 systemd[1]: Startup finished in 973ms (firmware) + 35.301s (loader) + 922ms (kernel) + 15.125s (initrd) + 26.672s (userspace) = 1min 18.995s.
Jul 2 08:02:03.381298 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 08:02:03.381681 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 08:02:03.404774 kubelet[1526]: E0702 08:02:03.404716 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:02:03.406488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:02:03.406642 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:02:03.426231 systemd[1]: Created slice user-500.slice.
Jul 2 08:02:03.427909 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 08:02:03.431368 systemd-logind[1421]: New session 1 of user core.
Jul 2 08:02:03.436132 systemd-logind[1421]: New session 2 of user core.
Jul 2 08:02:03.440807 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 08:02:03.442711 systemd[1]: Starting user@500.service...
Jul 2 08:02:03.461053 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:02:03.579552 systemd[1539]: Queued start job for default target default.target.
Jul 2 08:02:03.580118 systemd[1539]: Reached target paths.target.
Jul 2 08:02:03.580146 systemd[1539]: Reached target sockets.target.
Jul 2 08:02:03.580163 systemd[1539]: Reached target timers.target.
Jul 2 08:02:03.580177 systemd[1539]: Reached target basic.target.
Jul 2 08:02:03.580315 systemd[1]: Started user@500.service.
Jul 2 08:02:03.581537 systemd[1]: Started session-1.scope.
Jul 2 08:02:03.582324 systemd[1]: Started session-2.scope.
Jul 2 08:02:03.582328 systemd[1539]: Reached target default.target.
Jul 2 08:02:03.582379 systemd[1539]: Startup finished in 114ms.
Jul 2 08:02:04.599944 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 08:02:09.796517 waagent[1517]: 2024-07-02T08:02:09.796419Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Jul 2 08:02:09.799183 waagent[1517]: 2024-07-02T08:02:09.799112Z INFO Daemon Daemon OS: flatcar 3510.3.5
Jul 2 08:02:09.800164 waagent[1517]: 2024-07-02T08:02:09.800114Z INFO Daemon Daemon Python: 3.9.16
Jul 2 08:02:09.801527 waagent[1517]: 2024-07-02T08:02:09.801471Z INFO Daemon Daemon Run daemon
Jul 2 08:02:09.802790 waagent[1517]: 2024-07-02T08:02:09.802741Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.5'
Jul 2 08:02:09.817490 waagent[1517]: 2024-07-02T08:02:09.817368Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul 2 08:02:09.824399 waagent[1517]: 2024-07-02T08:02:09.824297Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.825635Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.826376Z INFO Daemon Daemon Using waagent for provisioning
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.827717Z INFO Daemon Daemon Activate resource disk
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.828454Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.836026Z INFO Daemon Daemon Found device: None
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.836918Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.837731Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.839507Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 08:02:09.853615 waagent[1517]: 2024-07-02T08:02:09.840443Z INFO Daemon Daemon Running default provisioning handler
Jul 2 08:02:09.860470 waagent[1517]: 2024-07-02T08:02:09.860352Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul 2 08:02:09.867560 waagent[1517]: 2024-07-02T08:02:09.867460Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 08:02:09.876097 waagent[1517]: 2024-07-02T08:02:09.868666Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 08:02:09.876097 waagent[1517]: 2024-07-02T08:02:09.869409Z INFO Daemon Daemon Copying ovf-env.xml
Jul 2 08:02:09.948448 waagent[1517]: 2024-07-02T08:02:09.948255Z INFO Daemon Daemon Successfully mounted dvd
Jul 2 08:02:10.232795 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 2 08:02:10.261821 waagent[1517]: 2024-07-02T08:02:10.261661Z INFO Daemon Daemon Detect protocol endpoint
Jul 2 08:02:10.275610 waagent[1517]: 2024-07-02T08:02:10.263085Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 08:02:10.275610 waagent[1517]: 2024-07-02T08:02:10.264034Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 2 08:02:10.275610 waagent[1517]: 2024-07-02T08:02:10.264761Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 2 08:02:10.275610 waagent[1517]: 2024-07-02T08:02:10.265764Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 2 08:02:10.275610 waagent[1517]: 2024-07-02T08:02:10.266366Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 2 08:02:10.397194 waagent[1517]: 2024-07-02T08:02:10.397107Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 2 08:02:10.405609 waagent[1517]: 2024-07-02T08:02:10.398990Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 2 08:02:10.405609 waagent[1517]: 2024-07-02T08:02:10.399992Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 2 08:02:11.226809 waagent[1517]: 2024-07-02T08:02:11.226654Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 2 08:02:11.237159 waagent[1517]: 2024-07-02T08:02:11.237081Z INFO Daemon Daemon Forcing an update of the goal state..
Jul 2 08:02:11.242227 waagent[1517]: 2024-07-02T08:02:11.238272Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Jul 2 08:02:11.488638 waagent[1517]: 2024-07-02T08:02:11.488450Z INFO Daemon Daemon Found private key matching thumbprint 5D25798E4FEC37DB33F1C4246338557E306962FA
Jul 2 08:02:11.492968 waagent[1517]: 2024-07-02T08:02:11.492890Z INFO Daemon Daemon Certificate with thumbprint 759C80CB20FBCB3AA79D11170BB22CE67D54AA6D has no matching private key.
Jul 2 08:02:11.497550 waagent[1517]: 2024-07-02T08:02:11.497476Z INFO Daemon Daemon Fetch goal state completed
Jul 2 08:02:11.545100 waagent[1517]: 2024-07-02T08:02:11.545008Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: b708a2f3-b4eb-4d94-8851-eef63a60758f New eTag: 4081088594715474764]
Jul 2 08:02:11.550736 waagent[1517]: 2024-07-02T08:02:11.550666Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Jul 2 08:02:11.564533 waagent[1517]: 2024-07-02T08:02:11.564452Z INFO Daemon Daemon Starting provisioning
Jul 2 08:02:11.567365 waagent[1517]: 2024-07-02T08:02:11.567304Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 2 08:02:11.569752 waagent[1517]: 2024-07-02T08:02:11.569692Z INFO Daemon Daemon Set hostname [ci-3510.3.5-a-a726d90360]
Jul 2 08:02:11.592565 waagent[1517]: 2024-07-02T08:02:11.592423Z INFO Daemon Daemon Publish hostname [ci-3510.3.5-a-a726d90360]
Jul 2 08:02:11.596101 waagent[1517]: 2024-07-02T08:02:11.596016Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 2 08:02:11.600181 waagent[1517]: 2024-07-02T08:02:11.600101Z INFO Daemon Daemon Primary interface is [eth0]
Jul 2 08:02:11.613172 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Jul 2 08:02:11.613442 systemd[1]: Stopped systemd-networkd-wait-online.service.
Jul 2 08:02:11.613518 systemd[1]: Stopping systemd-networkd-wait-online.service...
Jul 2 08:02:11.613876 systemd[1]: Stopping systemd-networkd.service...
Jul 2 08:02:11.618309 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Jul 2 08:02:11.619604 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:02:11.619763 systemd[1]: Stopped systemd-networkd.service.
Jul 2 08:02:11.622112 systemd[1]: Starting systemd-networkd.service...
Jul 2 08:02:11.653989 systemd-networkd[1580]: enP30467s1: Link UP
Jul 2 08:02:11.654000 systemd-networkd[1580]: enP30467s1: Gained carrier
Jul 2 08:02:11.655343 systemd-networkd[1580]: eth0: Link UP
Jul 2 08:02:11.655352 systemd-networkd[1580]: eth0: Gained carrier
Jul 2 08:02:11.655771 systemd-networkd[1580]: lo: Link UP
Jul 2 08:02:11.655780 systemd-networkd[1580]: lo: Gained carrier
Jul 2 08:02:11.656087 systemd-networkd[1580]: eth0: Gained IPv6LL
Jul 2 08:02:11.656372 systemd-networkd[1580]: Enumeration completed
Jul 2 08:02:11.656478 systemd[1]: Started systemd-networkd.service.
Jul 2 08:02:11.658494 waagent[1517]: 2024-07-02T08:02:11.658090Z INFO Daemon Daemon Create user account if not exists
Jul 2 08:02:11.661495 waagent[1517]: 2024-07-02T08:02:11.660168Z INFO Daemon Daemon User core already exists, skip useradd
Jul 2 08:02:11.661495 waagent[1517]: 2024-07-02T08:02:11.660856Z INFO Daemon Daemon Configure sudoer
Jul 2 08:02:11.662193 waagent[1517]: 2024-07-02T08:02:11.662135Z INFO Daemon Daemon Configure sshd
Jul 2 08:02:11.662595 waagent[1517]: 2024-07-02T08:02:11.662542Z INFO Daemon Daemon Deploy ssh public key.
Jul 2 08:02:11.674719 systemd-networkd[1580]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:02:11.675514 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 08:02:11.704362 systemd-networkd[1580]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 2 08:02:11.707078 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 08:02:12.989076 waagent[1517]: 2024-07-02T08:02:12.988971Z INFO Daemon Daemon Provisioning complete
Jul 2 08:02:13.005865 waagent[1517]: 2024-07-02T08:02:13.005787Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 2 08:02:13.009425 waagent[1517]: 2024-07-02T08:02:13.009347Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 2 08:02:13.014975 waagent[1517]: 2024-07-02T08:02:13.014897Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Jul 2 08:02:13.280963 waagent[1589]: 2024-07-02T08:02:13.280856Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Jul 2 08:02:13.281751 waagent[1589]: 2024-07-02T08:02:13.281682Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 08:02:13.281893 waagent[1589]: 2024-07-02T08:02:13.281838Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 08:02:13.292856 waagent[1589]: 2024-07-02T08:02:13.292782Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Jul 2 08:02:13.293007 waagent[1589]: 2024-07-02T08:02:13.292956Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Jul 2 08:02:13.353791 waagent[1589]: 2024-07-02T08:02:13.353668Z INFO ExtHandler ExtHandler Found private key matching thumbprint 5D25798E4FEC37DB33F1C4246338557E306962FA
Jul 2 08:02:13.354007 waagent[1589]: 2024-07-02T08:02:13.353945Z INFO ExtHandler ExtHandler Certificate with thumbprint 759C80CB20FBCB3AA79D11170BB22CE67D54AA6D has no matching private key.
Jul 2 08:02:13.354243 waagent[1589]: 2024-07-02T08:02:13.354193Z INFO ExtHandler ExtHandler Fetch goal state completed
Jul 2 08:02:13.368379 waagent[1589]: 2024-07-02T08:02:13.368323Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 48577289-a54e-46cf-8bb6-eaa2d325b9cf New eTag: 4081088594715474764]
Jul 2 08:02:13.368892 waagent[1589]: 2024-07-02T08:02:13.368833Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Jul 2 08:02:13.504257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:02:13.508613 waagent[1589]: 2024-07-02T08:02:13.504152Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul 2 08:02:13.504590 systemd[1]: Stopped kubelet.service.
Jul 2 08:02:13.506843 systemd[1]: Starting kubelet.service...
Jul 2 08:02:13.519167 waagent[1589]: 2024-07-02T08:02:13.519089Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1589
Jul 2 08:02:13.524337 waagent[1589]: 2024-07-02T08:02:13.524252Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk']
Jul 2 08:02:13.525974 waagent[1589]: 2024-07-02T08:02:13.525909Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 2 08:02:13.648813 systemd[1]: Started kubelet.service.
Jul 2 08:02:14.203523 kubelet[1606]: E0702 08:02:14.203474 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:02:14.206544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:02:14.206703 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:02:14.216940 waagent[1589]: 2024-07-02T08:02:14.216882Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 2 08:02:14.217374 waagent[1589]: 2024-07-02T08:02:14.217311Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 2 08:02:14.225651 waagent[1589]: 2024-07-02T08:02:14.225596Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 2 08:02:14.226113 waagent[1589]: 2024-07-02T08:02:14.226054Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Jul 2 08:02:14.227180 waagent[1589]: 2024-07-02T08:02:14.227115Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Jul 2 08:02:14.228451 waagent[1589]: 2024-07-02T08:02:14.228393Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 2 08:02:14.228869 waagent[1589]: 2024-07-02T08:02:14.228803Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 08:02:14.229018 waagent[1589]: 2024-07-02T08:02:14.228971Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 08:02:14.229590 waagent[1589]: 2024-07-02T08:02:14.229534Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 08:02:14.229872 waagent[1589]: 2024-07-02T08:02:14.229815Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 2 08:02:14.229872 waagent[1589]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 2 08:02:14.229872 waagent[1589]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jul 2 08:02:14.229872 waagent[1589]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 2 08:02:14.229872 waagent[1589]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 2 08:02:14.229872 waagent[1589]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 08:02:14.229872 waagent[1589]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 08:02:14.232779 waagent[1589]: 2024-07-02T08:02:14.232577Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 2 08:02:14.233661 waagent[1589]: 2024-07-02T08:02:14.233603Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 08:02:14.233845 waagent[1589]: 2024-07-02T08:02:14.233794Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 08:02:14.234509 waagent[1589]: 2024-07-02T08:02:14.234451Z INFO EnvHandler ExtHandler Configure routes
Jul 2 08:02:14.234656 waagent[1589]: 2024-07-02T08:02:14.234608Z INFO EnvHandler ExtHandler Gateway:None
Jul 2 08:02:14.234784 waagent[1589]: 2024-07-02T08:02:14.234741Z INFO EnvHandler ExtHandler Routes:None
Jul 2 08:02:14.235713 waagent[1589]: 2024-07-02T08:02:14.235654Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 2 08:02:14.235874 waagent[1589]: 2024-07-02T08:02:14.235824Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 2 08:02:14.236616 waagent[1589]: 2024-07-02T08:02:14.236551Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 2 08:02:14.236809 waagent[1589]: 2024-07-02T08:02:14.236756Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 2 08:02:14.236940 waagent[1589]: 2024-07-02T08:02:14.236890Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 2 08:02:14.248531 waagent[1589]: 2024-07-02T08:02:14.248473Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Jul 2 08:02:14.249715 waagent[1589]: 2024-07-02T08:02:14.249673Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Jul 2 08:02:14.250567 waagent[1589]: 2024-07-02T08:02:14.250520Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel.
Error: 'NoneType' object has no attribute 'getheaders'
Jul 2 08:02:14.278295 waagent[1589]: 2024-07-02T08:02:14.278173Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1580'
Jul 2 08:02:14.293125 waagent[1589]: 2024-07-02T08:02:14.293043Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Jul 2 08:02:14.402588 waagent[1589]: 2024-07-02T08:02:14.402470Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 2 08:02:14.402588 waagent[1589]: Executing ['ip', '-a', '-o', 'link']:
Jul 2 08:02:14.402588 waagent[1589]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 2 08:02:14.402588 waagent[1589]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:20:1b brd ff:ff:ff:ff:ff:ff
Jul 2 08:02:14.402588 waagent[1589]: 3: enP30467s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:20:1b brd ff:ff:ff:ff:ff:ff\ altname enP30467p0s2
Jul 2 08:02:14.402588 waagent[1589]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 2 08:02:14.402588 waagent[1589]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 2 08:02:14.402588 waagent[1589]: 2: eth0 inet 10.200.8.11/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 2 08:02:14.402588 waagent[1589]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 2 08:02:14.402588 waagent[1589]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Jul 2 08:02:14.402588 waagent[1589]: 2: eth0 inet6 fe80::222:48ff:fe9d:201b/64 scope link \ valid_lft forever preferred_lft forever
Jul 2 08:02:14.563973 waagent[1589]: 2024-07-02T08:02:14.563908Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.11.1.4 -- exiting
Jul 2 08:02:15.019754 waagent[1517]: 2024-07-02T08:02:15.019581Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Jul 2 08:02:15.025049 waagent[1517]: 2024-07-02T08:02:15.024975Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.11.1.4 to be the latest agent
Jul 2 08:02:16.040441 waagent[1637]: 2024-07-02T08:02:16.040325Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.11.1.4)
Jul 2 08:02:16.041161 waagent[1637]: 2024-07-02T08:02:16.041089Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.5
Jul 2 08:02:16.041325 waagent[1637]: 2024-07-02T08:02:16.041256Z INFO ExtHandler ExtHandler Python: 3.9.16
Jul 2 08:02:16.041478 waagent[1637]: 2024-07-02T08:02:16.041430Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jul 2 08:02:16.051132 waagent[1637]: 2024-07-02T08:02:16.051029Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul 2 08:02:16.051542 waagent[1637]: 2024-07-02T08:02:16.051484Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 08:02:16.051705 waagent[1637]: 2024-07-02T08:02:16.051657Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 08:02:16.064071 waagent[1637]: 2024-07-02T08:02:16.063988Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 2 08:02:16.076604 waagent[1637]: 2024-07-02T08:02:16.076535Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151
Jul 2 08:02:16.077610 waagent[1637]: 2024-07-02T08:02:16.077546Z INFO ExtHandler
Jul 2 08:02:16.077769 waagent[1637]: 2024-07-02T08:02:16.077715Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 065e74dc-8479-4442-bb1b-6f1c43cba465 eTag: 4081088594715474764 source: Fabric]
Jul 2 08:02:16.078514 waagent[1637]: 2024-07-02T08:02:16.078456Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 2 08:02:16.079607 waagent[1637]: 2024-07-02T08:02:16.079546Z INFO ExtHandler
Jul 2 08:02:16.079742 waagent[1637]: 2024-07-02T08:02:16.079690Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 2 08:02:16.086859 waagent[1637]: 2024-07-02T08:02:16.086803Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 2 08:02:16.087323 waagent[1637]: 2024-07-02T08:02:16.087253Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Jul 2 08:02:16.107349 waagent[1637]: 2024-07-02T08:02:16.107285Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Jul 2 08:02:16.172257 waagent[1637]: 2024-07-02T08:02:16.172127Z INFO ExtHandler Downloaded certificate {'thumbprint': '759C80CB20FBCB3AA79D11170BB22CE67D54AA6D', 'hasPrivateKey': False}
Jul 2 08:02:16.173294 waagent[1637]: 2024-07-02T08:02:16.173215Z INFO ExtHandler Downloaded certificate {'thumbprint': '5D25798E4FEC37DB33F1C4246338557E306962FA', 'hasPrivateKey': True}
Jul 2 08:02:16.174257 waagent[1637]: 2024-07-02T08:02:16.174197Z INFO ExtHandler Fetch goal state completed
Jul 2 08:02:16.193045 waagent[1637]: 2024-07-02T08:02:16.192946Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022)
Jul 2 08:02:16.204499 waagent[1637]: 2024-07-02T08:02:16.204413Z INFO ExtHandler ExtHandler WALinuxAgent-2.11.1.4 running as process 1637
Jul 2 08:02:16.207786 waagent[1637]: 2024-07-02T08:02:16.207722Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk']
Jul 2 08:02:16.209158 waagent[1637]: 2024-07-02T08:02:16.209101Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 2 08:02:16.213881 waagent[1637]: 2024-07-02T08:02:16.213825Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 2 08:02:16.214242 waagent[1637]: 2024-07-02T08:02:16.214187Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 2 08:02:16.222372 waagent[1637]: 2024-07-02T08:02:16.222320Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 2 08:02:16.222830 waagent[1637]: 2024-07-02T08:02:16.222775Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Jul 2 08:02:16.228955 waagent[1637]: 2024-07-02T08:02:16.228865Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 2 08:02:16.229934 waagent[1637]: 2024-07-02T08:02:16.229864Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul 2 08:02:16.231453 waagent[1637]: 2024-07-02T08:02:16.231393Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 2 08:02:16.232015 waagent[1637]: 2024-07-02T08:02:16.231960Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 08:02:16.232176 waagent[1637]: 2024-07-02T08:02:16.232127Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 08:02:16.232760 waagent[1637]: 2024-07-02T08:02:16.232704Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 08:02:16.233226 waagent[1637]: 2024-07-02T08:02:16.233169Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 2 08:02:16.233834 waagent[1637]: 2024-07-02T08:02:16.233779Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 2 08:02:16.233834 waagent[1637]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 2 08:02:16.233834 waagent[1637]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jul 2 08:02:16.233834 waagent[1637]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 2 08:02:16.233834 waagent[1637]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 2 08:02:16.233834 waagent[1637]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 08:02:16.233834 waagent[1637]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 08:02:16.234125 waagent[1637]: 2024-07-02T08:02:16.233863Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 08:02:16.234125 waagent[1637]: 2024-07-02T08:02:16.234034Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 08:02:16.234477 waagent[1637]: 2024-07-02T08:02:16.234417Z INFO EnvHandler ExtHandler Configure routes
Jul 2 08:02:16.235223 waagent[1637]: 2024-07-02T08:02:16.235165Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 2 08:02:16.237652 waagent[1637]: 2024-07-02T08:02:16.237541Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 2 08:02:16.238426 waagent[1637]: 2024-07-02T08:02:16.238355Z INFO EnvHandler ExtHandler Gateway:None
Jul 2 08:02:16.238713 waagent[1637]: 2024-07-02T08:02:16.238650Z INFO EnvHandler ExtHandler Routes:None
Jul 2 08:02:16.239147 waagent[1637]: 2024-07-02T08:02:16.239088Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 2 08:02:16.239367 waagent[1637]: 2024-07-02T08:02:16.239312Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 2 08:02:16.240759 waagent[1637]: 2024-07-02T08:02:16.240716Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 2 08:02:16.266139 waagent[1637]: 2024-07-02T08:02:16.266066Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 2 08:02:16.266139 waagent[1637]: Executing ['ip', '-a', '-o', 'link']:
Jul 2 08:02:16.266139 waagent[1637]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 2 08:02:16.266139 waagent[1637]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:20:1b brd ff:ff:ff:ff:ff:ff
Jul 2 08:02:16.266139 waagent[1637]: 3: enP30467s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:20:1b brd ff:ff:ff:ff:ff:ff\ altname enP30467p0s2
Jul 2 08:02:16.266139 waagent[1637]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 2 08:02:16.266139 waagent[1637]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 2 08:02:16.266139 waagent[1637]: 2: eth0 inet 10.200.8.11/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 2 08:02:16.266139 waagent[1637]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 2 08:02:16.266139 waagent[1637]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Jul 2 08:02:16.266139 waagent[1637]: 2: eth0 inet6 fe80::222:48ff:fe9d:201b/64 scope link \ valid_lft forever preferred_lft forever
Jul 2 08:02:16.268867 waagent[1637]: 2024-07-02T08:02:16.268811Z INFO ExtHandler ExtHandler Downloading agent manifest
Jul 2 08:02:16.313817 waagent[1637]: 2024-07-02T08:02:16.313664Z INFO ExtHandler ExtHandler
Jul 2 08:02:16.313963 waagent[1637]: 2024-07-02T08:02:16.313907Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c68a0957-ffb1-497d-b3b5-a7bfa0b4c9f3 correlation 62fb08ad-3926-4bca-ba8b-e50cfeb28a32 created: 2024-07-02T08:00:31.172813Z]
Jul 2 08:02:16.315033 waagent[1637]: 2024-07-02T08:02:16.314966Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 2 08:02:16.316861 waagent[1637]: 2024-07-02T08:02:16.316801Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Jul 2 08:02:16.338043 waagent[1637]: 2024-07-02T08:02:16.337980Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Jul 2 08:02:16.362841 waagent[1637]: 2024-07-02T08:02:16.362768Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.11.1.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 52295069-C5AC-44F4-AB29-1737E410E758;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Jul 2 08:02:16.415300 waagent[1637]: 2024-07-02T08:02:16.415156Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 2 08:02:16.415300 waagent[1637]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 08:02:16.415300 waagent[1637]: pkts bytes target prot opt in out source destination
Jul 2 08:02:16.415300 waagent[1637]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 2 08:02:16.415300 waagent[1637]: pkts bytes target prot opt in out source destination
Jul 2 08:02:16.415300 waagent[1637]: Chain OUTPUT (policy ACCEPT 10 packets, 1100 bytes)
Jul 2 08:02:16.415300 waagent[1637]: pkts bytes target prot opt in out source destination
Jul 2 08:02:16.415300 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 08:02:16.415300 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 2 08:02:16.415300 waagent[1637]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 08:02:16.422614 waagent[1637]: 2024-07-02T08:02:16.422507Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 2 08:02:16.422614 waagent[1637]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 08:02:16.422614 waagent[1637]: pkts bytes target prot opt in out source destination
Jul 2 08:02:16.422614 waagent[1637]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 2 08:02:16.422614 waagent[1637]: pkts bytes target prot opt in out source destination
Jul 2 08:02:16.422614 waagent[1637]: Chain OUTPUT (policy ACCEPT 10 packets, 1100 bytes)
Jul 2 08:02:16.422614 waagent[1637]: pkts bytes target prot opt in out source destination
Jul 2 08:02:16.422614 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 08:02:16.422614 waagent[1637]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 2 08:02:16.422614 waagent[1637]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 08:02:16.423161 waagent[1637]: 2024-07-02T08:02:16.423107Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 2 08:02:24.254078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 08:02:24.254419 systemd[1]: Stopped kubelet.service.
Jul 2 08:02:24.256499 systemd[1]: Starting kubelet.service...
Jul 2 08:02:24.336402 systemd[1]: Started kubelet.service.
Jul 2 08:02:24.911976 kubelet[1689]: E0702 08:02:24.911926 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:02:24.913812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:02:24.913975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:02:35.004078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 08:02:35.004432 systemd[1]: Stopped kubelet.service.
Jul 2 08:02:35.006504 systemd[1]: Starting kubelet.service...
Jul 2 08:02:35.085848 systemd[1]: Started kubelet.service.
Jul 2 08:02:35.640037 kubelet[1700]: E0702 08:02:35.639984 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:02:35.641675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:02:35.641837 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:02:38.510705 systemd[1]: Created slice system-sshd.slice.
Jul 2 08:02:38.512768 systemd[1]: Started sshd@0-10.200.8.11:22-10.200.16.10:56632.service.
Jul 2 08:02:40.056276 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 2 08:02:40.322709 sshd[1708]: Accepted publickey for core from 10.200.16.10 port 56632 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 08:02:40.324398 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:02:40.328316 systemd-logind[1421]: New session 3 of user core.
Jul 2 08:02:40.329406 systemd[1]: Started session-3.scope.
Jul 2 08:02:40.878884 systemd[1]: Started sshd@1-10.200.8.11:22-10.200.16.10:56638.service.
Jul 2 08:02:41.526415 sshd[1713]: Accepted publickey for core from 10.200.16.10 port 56638 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 08:02:41.528088 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:02:41.532897 systemd[1]: Started session-4.scope.
Jul 2 08:02:41.533370 systemd-logind[1421]: New session 4 of user core.
Jul 2 08:02:41.987650 sshd[1713]: pam_unix(sshd:session): session closed for user core
Jul 2 08:02:41.990723 systemd[1]: sshd@1-10.200.8.11:22-10.200.16.10:56638.service: Deactivated successfully.
Jul 2 08:02:41.991556 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 08:02:41.992182 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit.
Jul 2 08:02:41.992942 systemd-logind[1421]: Removed session 4.
Jul 2 08:02:42.111637 systemd[1]: Started sshd@2-10.200.8.11:22-10.200.16.10:56648.service.
Jul 2 08:02:42.764254 sshd[1719]: Accepted publickey for core from 10.200.16.10 port 56648 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 08:02:42.765926 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:02:42.770759 systemd[1]: Started session-5.scope.
Jul 2 08:02:42.771468 systemd-logind[1421]: New session 5 of user core.
Jul 2 08:02:43.221722 sshd[1719]: pam_unix(sshd:session): session closed for user core
Jul 2 08:02:43.225334 systemd[1]: sshd@2-10.200.8.11:22-10.200.16.10:56648.service: Deactivated successfully.
Jul 2 08:02:43.226252 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 08:02:43.226994 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit.
Jul 2 08:02:43.227905 systemd-logind[1421]: Removed session 5.
Jul 2 08:02:43.330487 systemd[1]: Started sshd@3-10.200.8.11:22-10.200.16.10:56652.service.
Jul 2 08:02:43.974606 sshd[1725]: Accepted publickey for core from 10.200.16.10 port 56652 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 08:02:43.976157 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:02:43.981229 systemd[1]: Started session-6.scope.
Jul 2 08:02:43.981741 systemd-logind[1421]: New session 6 of user core.
Jul 2 08:02:44.435404 sshd[1725]: pam_unix(sshd:session): session closed for user core
Jul 2 08:02:44.438443 systemd[1]: sshd@3-10.200.8.11:22-10.200.16.10:56652.service: Deactivated successfully.
Jul 2 08:02:44.439218 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 08:02:44.439858 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit.
Jul 2 08:02:44.440627 systemd-logind[1421]: Removed session 6.
Jul 2 08:02:44.543927 systemd[1]: Started sshd@4-10.200.8.11:22-10.200.16.10:56664.service.
Jul 2 08:02:45.192209 sshd[1731]: Accepted publickey for core from 10.200.16.10 port 56664 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 08:02:45.193942 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:02:45.199371 systemd[1]: Started session-7.scope.
Jul 2 08:02:45.200066 systemd-logind[1421]: New session 7 of user core.
Jul 2 08:02:45.753820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 08:02:45.754050 systemd[1]: Stopped kubelet.service.
Jul 2 08:02:45.755769 systemd[1]: Starting kubelet.service...
Jul 2 08:02:46.442474 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 08:02:46.442854 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 08:02:46.473927 systemd[1]: Started kubelet.service.
Jul 2 08:02:46.476609 systemd[1]: Starting coreos-metadata.service...
Jul 2 08:02:46.527754 kubelet[1742]: E0702 08:02:46.527717 1742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:02:46.529662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:02:46.529827 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:02:46.555783 coreos-metadata[1743]: Jul 02 08:02:46.555 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 08:02:46.558586 coreos-metadata[1743]: Jul 02 08:02:46.558 INFO Fetch successful
Jul 2 08:02:46.558839 coreos-metadata[1743]: Jul 02 08:02:46.558 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 2 08:02:46.560501 coreos-metadata[1743]: Jul 02 08:02:46.560 INFO Fetch successful
Jul 2 08:02:46.560907 coreos-metadata[1743]: Jul 02 08:02:46.560 INFO Fetching http://168.63.129.16/machine/abf9f662-ee4a-4a8b-a0f4-33f110d868c4/8860cb8c%2D2212%2D47e6%2D8fe4%2D79e1b4409195.%5Fci%2D3510.3.5%2Da%2Da726d90360?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 2 08:02:46.562616 coreos-metadata[1743]: Jul 02 08:02:46.562 INFO Fetch successful
Jul 2 08:02:46.595694 coreos-metadata[1743]: Jul 02 08:02:46.595 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 2 08:02:46.608459 coreos-metadata[1743]: Jul 02 08:02:46.608 INFO Fetch successful
Jul 2 08:02:46.617127 systemd[1]: Finished coreos-metadata.service.
Jul 2 08:02:48.226399 update_engine[1425]: I0702 08:02:48.226318 1425 update_attempter.cc:509] Updating boot flags...
Jul 2 08:02:51.148985 systemd[1]: Stopped kubelet.service.
Jul 2 08:02:51.151795 systemd[1]: Starting kubelet.service...
Jul 2 08:02:51.182546 systemd[1]: Reloading.
Jul 2 08:02:51.294792 /usr/lib/systemd/system-generators/torcx-generator[1847]: time="2024-07-02T08:02:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:02:51.294831 /usr/lib/systemd/system-generators/torcx-generator[1847]: time="2024-07-02T08:02:51Z" level=info msg="torcx already run"
Jul 2 08:02:51.397725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:02:51.397745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:02:51.414044 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:02:51.511900 systemd[1]: Started kubelet.service.
Jul 2 08:02:51.514001 systemd[1]: Stopping kubelet.service...
Jul 2 08:02:51.514840 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 08:02:51.515061 systemd[1]: Stopped kubelet.service.
Jul 2 08:02:51.517461 systemd[1]: Starting kubelet.service...
Jul 2 08:02:51.787024 systemd[1]: Started kubelet.service.
Jul 2 08:02:51.827145 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:02:51.827528 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:02:51.827574 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:02:51.827719 kubelet[1916]: I0702 08:02:51.827688 1916 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:02:52.345224 kubelet[1916]: I0702 08:02:52.345182 1916 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 08:02:52.345224 kubelet[1916]: I0702 08:02:52.345211 1916 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:02:52.345505 kubelet[1916]: I0702 08:02:52.345484 1916 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 08:02:52.509409 kubelet[1916]: I0702 08:02:52.509198 1916 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:02:52.525750 kubelet[1916]: I0702 08:02:52.525717 1916 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:02:52.527640 kubelet[1916]: I0702 08:02:52.527599 1916 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:02:52.527850 kubelet[1916]: I0702 08:02:52.527639 1916 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:02:52.528298 kubelet[1916]: I0702 08:02:52.528279 1916 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:02:52.528359 kubelet[1916]: I0702 08:02:52.528305 1916 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:02:52.528455 kubelet[1916]: I0702 08:02:52.528439 1916 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:02:52.529446 kubelet[1916]: I0702 08:02:52.529425 1916 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 08:02:52.529446 kubelet[1916]: I0702 08:02:52.529446 1916 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:02:52.529583 kubelet[1916]: I0702 08:02:52.529471 1916 kubelet.go:312] "Adding apiserver pod source"
Jul 2 08:02:52.529583 kubelet[1916]: I0702 08:02:52.529489 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:02:52.529935 kubelet[1916]: E0702 08:02:52.529903 1916 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:02:52.530028 kubelet[1916]: E0702 08:02:52.529956 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:02:52.538840 kubelet[1916]: I0702 08:02:52.538823 1916 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 08:02:52.540632 kubelet[1916]: W0702 08:02:52.540381 1916 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.8.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jul 2 08:02:52.540632 kubelet[1916]: E0702 08:02:52.540414 1916 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jul 2 08:02:52.540632 kubelet[1916]: W0702 08:02:52.540511 1916 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jul 2 08:02:52.540632 kubelet[1916]: E0702 08:02:52.540526 1916 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jul 2 08:02:52.540984 kubelet[1916]: I0702 08:02:52.540968 1916 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 08:02:52.541110 kubelet[1916]: W0702 08:02:52.541087 1916 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:02:52.541760 kubelet[1916]: I0702 08:02:52.541742 1916 server.go:1264] "Started kubelet"
Jul 2 08:02:52.547797 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 08:02:52.548474 kubelet[1916]: I0702 08:02:52.547953 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:02:52.553353 kubelet[1916]: I0702 08:02:52.553324 1916 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:02:52.554998 kubelet[1916]: I0702 08:02:52.554973 1916 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 08:02:52.555788 kubelet[1916]: I0702 08:02:52.555717 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 08:02:52.555938 kubelet[1916]: I0702 08:02:52.555918 1916 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:02:52.557645 kubelet[1916]: I0702 08:02:52.557319 1916 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:02:52.560398 kubelet[1916]: I0702 08:02:52.560376 1916 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 08:02:52.560483 kubelet[1916]: I0702 08:02:52.560437 1916 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 08:02:52.563196 kubelet[1916]: I0702 08:02:52.563166 1916 factory.go:221] Registration of the systemd container factory successfully
Jul 2 08:02:52.563323 kubelet[1916]: I0702 08:02:52.563299 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 08:02:52.565292 kubelet[1916]: E0702 08:02:52.564338 1916 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.11\" not found" node="10.200.8.11"
Jul 2 08:02:52.568212 kubelet[1916]: I0702 08:02:52.568196 1916 factory.go:221] Registration of the containerd container factory successfully
Jul 2 08:02:52.572693 kubelet[1916]: E0702 08:02:52.572667 1916 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:02:52.579365 kubelet[1916]: I0702 08:02:52.579341 1916 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:02:52.579365 kubelet[1916]: I0702 08:02:52.579356 1916 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:02:52.579484 kubelet[1916]: I0702 08:02:52.579375 1916 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:02:52.587845 kubelet[1916]: I0702 08:02:52.587823 1916 policy_none.go:49] "None policy: Start"
Jul 2 08:02:52.588442 kubelet[1916]: I0702 08:02:52.588418 1916 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 08:02:52.588541 kubelet[1916]: I0702 08:02:52.588534 1916 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:02:52.597097 systemd[1]: Created slice kubepods.slice.
Jul 2 08:02:52.601782 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 08:02:52.604753 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 08:02:52.610821 kubelet[1916]: I0702 08:02:52.610805 1916 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:02:52.611024 kubelet[1916]: I0702 08:02:52.610997 1916 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:02:52.611150 kubelet[1916]: I0702 08:02:52.611143 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:02:52.614190 kubelet[1916]: E0702 08:02:52.614176 1916 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.11\" not found" Jul 2 08:02:52.658249 kubelet[1916]: I0702 08:02:52.658216 1916 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.11" Jul 2 08:02:52.661637 kubelet[1916]: I0702 08:02:52.661612 1916 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.11" Jul 2 08:02:52.668889 kubelet[1916]: I0702 08:02:52.668861 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:02:52.670436 kubelet[1916]: I0702 08:02:52.670411 1916 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:02:52.670436 kubelet[1916]: I0702 08:02:52.670431 1916 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:02:52.670576 kubelet[1916]: I0702 08:02:52.670449 1916 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:02:52.670576 kubelet[1916]: E0702 08:02:52.670492 1916 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 08:02:52.676969 kubelet[1916]: E0702 08:02:52.676948 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:52.777313 kubelet[1916]: E0702 08:02:52.777245 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:52.825345 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 2 08:02:52.878055 kubelet[1916]: E0702 08:02:52.877918 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:52.946742 sshd[1731]: pam_unix(sshd:session): session closed for user core Jul 2 08:02:52.950129 systemd[1]: sshd@4-10.200.8.11:22-10.200.16.10:56664.service: Deactivated successfully. Jul 2 08:02:52.950993 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:02:52.951689 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:02:52.952553 systemd-logind[1421]: Removed session 7. 
Jul 2 08:02:52.978747 kubelet[1916]: E0702 08:02:52.978706 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.078893 kubelet[1916]: E0702 08:02:53.078842 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.179737 kubelet[1916]: E0702 08:02:53.179605 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.280505 kubelet[1916]: E0702 08:02:53.280447 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.347779 kubelet[1916]: I0702 08:02:53.347725 1916 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 08:02:53.348016 kubelet[1916]: W0702 08:02:53.347970 1916 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 2 08:02:53.348174 kubelet[1916]: W0702 08:02:53.348070 1916 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 2 08:02:53.381438 kubelet[1916]: E0702 08:02:53.381396 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.482198 kubelet[1916]: E0702 08:02:53.482059 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.530655 kubelet[1916]: E0702 08:02:53.530593 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 08:02:53.582605 kubelet[1916]: E0702 08:02:53.582552 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.682824 kubelet[1916]: E0702 08:02:53.682785 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.783566 kubelet[1916]: E0702 08:02:53.783509 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.884351 kubelet[1916]: E0702 08:02:53.884296 1916 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Jul 2 08:02:53.985532 kubelet[1916]: I0702 08:02:53.985484 1916 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 08:02:53.986013 env[1433]: time="2024-07-02T08:02:53.985968069Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 08:02:53.986527 kubelet[1916]: I0702 08:02:53.986203 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 08:02:54.530753 kubelet[1916]: I0702 08:02:54.530657 1916 apiserver.go:52] "Watching apiserver" Jul 2 08:02:54.530753 kubelet[1916]: E0702 08:02:54.530691 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:02:54.539743 kubelet[1916]: I0702 08:02:54.539690 1916 topology_manager.go:215] "Topology Admit Handler" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" podNamespace="kube-system" podName="cilium-brm8r" Jul 2 08:02:54.539934 kubelet[1916]: I0702 08:02:54.539913 1916 topology_manager.go:215] "Topology Admit Handler" podUID="7c0cac63-1dd5-4f44-85a2-51f82cffb1f5" podNamespace="kube-system" podName="kube-proxy-5p42p" Jul 2 08:02:54.547182 systemd[1]: Created slice kubepods-besteffort-pod7c0cac63_1dd5_4f44_85a2_51f82cffb1f5.slice. Jul 2 08:02:54.556827 systemd[1]: Created slice kubepods-burstable-podffe85cec_733a_401d_9467_ffe86bf0044b.slice. 
Jul 2 08:02:54.561739 kubelet[1916]: I0702 08:02:54.561715 1916 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:02:54.573038 kubelet[1916]: I0702 08:02:54.573005 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-bpf-maps\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573038 kubelet[1916]: I0702 08:02:54.573049 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-lib-modules\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573227 kubelet[1916]: I0702 08:02:54.573075 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-xtables-lock\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573227 kubelet[1916]: I0702 08:02:54.573096 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-hubble-tls\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573227 kubelet[1916]: I0702 08:02:54.573115 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-config-path\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " 
pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573227 kubelet[1916]: I0702 08:02:54.573137 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mpr\" (UniqueName: \"kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-kube-api-access-l8mpr\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573227 kubelet[1916]: I0702 08:02:54.573158 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c0cac63-1dd5-4f44-85a2-51f82cffb1f5-kube-proxy\") pod \"kube-proxy-5p42p\" (UID: \"7c0cac63-1dd5-4f44-85a2-51f82cffb1f5\") " pod="kube-system/kube-proxy-5p42p" Jul 2 08:02:54.573227 kubelet[1916]: I0702 08:02:54.573178 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0cac63-1dd5-4f44-85a2-51f82cffb1f5-lib-modules\") pod \"kube-proxy-5p42p\" (UID: \"7c0cac63-1dd5-4f44-85a2-51f82cffb1f5\") " pod="kube-system/kube-proxy-5p42p" Jul 2 08:02:54.573492 kubelet[1916]: I0702 08:02:54.573199 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-hostproc\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573492 kubelet[1916]: I0702 08:02:54.573222 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-cgroup\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573492 kubelet[1916]: I0702 08:02:54.573243 1916 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-etc-cni-netd\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573492 kubelet[1916]: I0702 08:02:54.573290 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffe85cec-733a-401d-9467-ffe86bf0044b-clustermesh-secrets\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573492 kubelet[1916]: I0702 08:02:54.573312 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-run\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573492 kubelet[1916]: I0702 08:02:54.573336 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cni-path\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573700 kubelet[1916]: I0702 08:02:54.573357 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-net\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573700 kubelet[1916]: I0702 08:02:54.573381 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-kernel\") pod \"cilium-brm8r\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") " pod="kube-system/cilium-brm8r" Jul 2 08:02:54.573700 kubelet[1916]: I0702 08:02:54.573414 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0cac63-1dd5-4f44-85a2-51f82cffb1f5-xtables-lock\") pod \"kube-proxy-5p42p\" (UID: \"7c0cac63-1dd5-4f44-85a2-51f82cffb1f5\") " pod="kube-system/kube-proxy-5p42p" Jul 2 08:02:54.573700 kubelet[1916]: I0702 08:02:54.573444 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6c67\" (UniqueName: \"kubernetes.io/projected/7c0cac63-1dd5-4f44-85a2-51f82cffb1f5-kube-api-access-h6c67\") pod \"kube-proxy-5p42p\" (UID: \"7c0cac63-1dd5-4f44-85a2-51f82cffb1f5\") " pod="kube-system/kube-proxy-5p42p" Jul 2 08:02:54.856755 env[1433]: time="2024-07-02T08:02:54.855448785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5p42p,Uid:7c0cac63-1dd5-4f44-85a2-51f82cffb1f5,Namespace:kube-system,Attempt:0,}" Jul 2 08:02:54.864038 env[1433]: time="2024-07-02T08:02:54.864000901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brm8r,Uid:ffe85cec-733a-401d-9467-ffe86bf0044b,Namespace:kube-system,Attempt:0,}" Jul 2 08:02:55.531660 kubelet[1916]: E0702 08:02:55.531611 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:02:55.935396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006631796.mount: Deactivated successfully. 
Jul 2 08:02:55.970237 env[1433]: time="2024-07-02T08:02:55.970174840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:55.974502 env[1433]: time="2024-07-02T08:02:55.974455048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:55.994713 env[1433]: time="2024-07-02T08:02:55.994668883Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:55.997283 env[1433]: time="2024-07-02T08:02:55.997239187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:56.001092 env[1433]: time="2024-07-02T08:02:56.001055894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:56.004974 env[1433]: time="2024-07-02T08:02:56.004938000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:56.008417 env[1433]: time="2024-07-02T08:02:56.008383806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:56.011682 env[1433]: time="2024-07-02T08:02:56.011647211Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:02:56.102041 env[1433]: time="2024-07-02T08:02:56.101965259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:02:56.102213 env[1433]: time="2024-07-02T08:02:56.102020759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:02:56.102213 env[1433]: time="2024-07-02T08:02:56.102036359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:02:56.102213 env[1433]: time="2024-07-02T08:02:56.102171859Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b pid=1963 runtime=io.containerd.runc.v2 Jul 2 08:02:56.115173 env[1433]: time="2024-07-02T08:02:56.115087280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:02:56.115405 env[1433]: time="2024-07-02T08:02:56.115361981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:02:56.115521 env[1433]: time="2024-07-02T08:02:56.115498981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:02:56.115746 env[1433]: time="2024-07-02T08:02:56.115707981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95b94e84f0000cb2d5b82f7c920bf921cf720052c3035aaa1ea8378c3fdc5fad pid=1982 runtime=io.containerd.runc.v2 Jul 2 08:02:56.126617 systemd[1]: Started cri-containerd-b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b.scope. Jul 2 08:02:56.145451 systemd[1]: Started cri-containerd-95b94e84f0000cb2d5b82f7c920bf921cf720052c3035aaa1ea8378c3fdc5fad.scope. Jul 2 08:02:56.167897 env[1433]: time="2024-07-02T08:02:56.167370965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brm8r,Uid:ffe85cec-733a-401d-9467-ffe86bf0044b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\"" Jul 2 08:02:56.171465 env[1433]: time="2024-07-02T08:02:56.171423872Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:02:56.176632 env[1433]: time="2024-07-02T08:02:56.176595780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5p42p,Uid:7c0cac63-1dd5-4f44-85a2-51f82cffb1f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"95b94e84f0000cb2d5b82f7c920bf921cf720052c3035aaa1ea8378c3fdc5fad\"" Jul 2 08:02:56.531750 kubelet[1916]: E0702 08:02:56.531706 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:02:57.532498 kubelet[1916]: E0702 08:02:57.532446 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:02:58.533439 kubelet[1916]: E0702 08:02:58.533401 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:02:59.533703 
kubelet[1916]: E0702 08:02:59.533640 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:00.534458 kubelet[1916]: E0702 08:03:00.534411 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:01.535147 kubelet[1916]: E0702 08:03:01.535108 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:02.132736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501724813.mount: Deactivated successfully. Jul 2 08:03:02.536012 kubelet[1916]: E0702 08:03:02.535949 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:03.536657 kubelet[1916]: E0702 08:03:03.536574 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:04.536879 kubelet[1916]: E0702 08:03:04.536842 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:04.838279 env[1433]: time="2024-07-02T08:03:04.838128013Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:04.843818 env[1433]: time="2024-07-02T08:03:04.843776740Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:04.848593 env[1433]: time="2024-07-02T08:03:04.848557148Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:04.849329 env[1433]: time="2024-07-02T08:03:04.849288664Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:03:04.851416 env[1433]: time="2024-07-02T08:03:04.851391112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 08:03:04.852688 env[1433]: time="2024-07-02T08:03:04.852654240Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:03:04.882666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500174812.mount: Deactivated successfully. Jul 2 08:03:04.888858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300583989.mount: Deactivated successfully. Jul 2 08:03:04.909376 env[1433]: time="2024-07-02T08:03:04.909326717Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\"" Jul 2 08:03:04.910151 env[1433]: time="2024-07-02T08:03:04.910098135Z" level=info msg="StartContainer for \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\"" Jul 2 08:03:04.929123 systemd[1]: Started cri-containerd-b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8.scope. 
Jul 2 08:03:04.964023 env[1433]: time="2024-07-02T08:03:04.962550517Z" level=info msg="StartContainer for \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\" returns successfully" Jul 2 08:03:04.968645 systemd[1]: cri-containerd-b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8.scope: Deactivated successfully. Jul 2 08:03:05.943972 kubelet[1916]: E0702 08:03:05.536947 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:05.880766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8-rootfs.mount: Deactivated successfully. Jul 2 08:03:06.538045 kubelet[1916]: E0702 08:03:06.537978 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:07.539103 kubelet[1916]: E0702 08:03:07.539039 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:08.540226 kubelet[1916]: E0702 08:03:08.540160 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:08.793193 env[1433]: time="2024-07-02T08:03:08.792736964Z" level=info msg="shim disconnected" id=b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8 Jul 2 08:03:08.793193 env[1433]: time="2024-07-02T08:03:08.792795166Z" level=warning msg="cleaning up after shim disconnected" id=b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8 namespace=k8s.io Jul 2 08:03:08.793193 env[1433]: time="2024-07-02T08:03:08.792810566Z" level=info msg="cleaning up dead shim" Jul 2 08:03:08.801920 env[1433]: time="2024-07-02T08:03:08.801875049Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:03:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2089 runtime=io.containerd.runc.v2\n" Jul 2 
08:03:09.540696 kubelet[1916]: E0702 08:03:09.540659 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:09.707243 env[1433]: time="2024-07-02T08:03:09.707190342Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:03:09.757508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860024872.mount: Deactivated successfully. Jul 2 08:03:09.770685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769309719.mount: Deactivated successfully. Jul 2 08:03:09.791400 env[1433]: time="2024-07-02T08:03:09.790882286Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\"" Jul 2 08:03:09.791769 env[1433]: time="2024-07-02T08:03:09.791736203Z" level=info msg="StartContainer for \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\"" Jul 2 08:03:09.820624 systemd[1]: Started cri-containerd-6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287.scope. Jul 2 08:03:09.882725 env[1433]: time="2024-07-02T08:03:09.882677189Z" level=info msg="StartContainer for \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\" returns successfully" Jul 2 08:03:09.888294 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:03:09.888587 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:03:09.888812 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:03:09.890708 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:03:09.893986 systemd[1]: cri-containerd-6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287.scope: Deactivated successfully. 
Jul 2 08:03:09.905565 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:03:10.212861 env[1433]: time="2024-07-02T08:03:10.212154752Z" level=info msg="shim disconnected" id=6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287 Jul 2 08:03:10.212861 env[1433]: time="2024-07-02T08:03:10.212207953Z" level=warning msg="cleaning up after shim disconnected" id=6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287 namespace=k8s.io Jul 2 08:03:10.212861 env[1433]: time="2024-07-02T08:03:10.212218954Z" level=info msg="cleaning up dead shim" Jul 2 08:03:10.222276 env[1433]: time="2024-07-02T08:03:10.222216245Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:03:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2153 runtime=io.containerd.runc.v2\n" Jul 2 08:03:10.409656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714420090.mount: Deactivated successfully. Jul 2 08:03:10.541576 kubelet[1916]: E0702 08:03:10.541489 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:03:10.710201 env[1433]: time="2024-07-02T08:03:10.710150072Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:03:10.754392 env[1433]: time="2024-07-02T08:03:10.754340717Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\"" Jul 2 08:03:10.755132 env[1433]: time="2024-07-02T08:03:10.755097731Z" level=info msg="StartContainer for \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\"" Jul 2 08:03:10.782903 systemd[1]: Started cri-containerd-80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66.scope. 
Jul 2 08:03:10.822980 systemd[1]: cri-containerd-80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66.scope: Deactivated successfully.
Jul 2 08:03:10.825223 env[1433]: time="2024-07-02T08:03:10.825175371Z" level=info msg="StartContainer for \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\" returns successfully"
Jul 2 08:03:10.833590 env[1433]: time="2024-07-02T08:03:10.833542731Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:10.842394 env[1433]: time="2024-07-02T08:03:10.842349199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:11.285400 env[1433]: time="2024-07-02T08:03:11.141663953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:11.359889 env[1433]: time="2024-07-02T08:03:11.359826012Z" level=info msg="shim disconnected" id=80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66
Jul 2 08:03:11.360083 env[1433]: time="2024-07-02T08:03:11.359906214Z" level=warning msg="cleaning up after shim disconnected" id=80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66 namespace=k8s.io
Jul 2 08:03:11.360083 env[1433]: time="2024-07-02T08:03:11.359923714Z" level=info msg="cleaning up dead shim"
Jul 2 08:03:11.361767 env[1433]: time="2024-07-02T08:03:11.361721048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:11.362850 env[1433]: time="2024-07-02T08:03:11.362250258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul 2 08:03:11.366344 env[1433]: time="2024-07-02T08:03:11.366310333Z" level=info msg="CreateContainer within sandbox \"95b94e84f0000cb2d5b82f7c920bf921cf720052c3035aaa1ea8378c3fdc5fad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 08:03:11.371310 env[1433]: time="2024-07-02T08:03:11.371282326Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:03:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2210 runtime=io.containerd.runc.v2\n"
Jul 2 08:03:11.409105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66-rootfs.mount: Deactivated successfully.
Jul 2 08:03:11.413308 env[1433]: time="2024-07-02T08:03:11.413227706Z" level=info msg="CreateContainer within sandbox \"95b94e84f0000cb2d5b82f7c920bf921cf720052c3035aaa1ea8378c3fdc5fad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"098b1f0c645bae3c86b3a1b752c83063601c28dc155bdc072a4019a94a6e73e1\""
Jul 2 08:03:11.413901 env[1433]: time="2024-07-02T08:03:11.413779816Z" level=info msg="StartContainer for \"098b1f0c645bae3c86b3a1b752c83063601c28dc155bdc072a4019a94a6e73e1\""
Jul 2 08:03:11.438767 systemd[1]: run-containerd-runc-k8s.io-098b1f0c645bae3c86b3a1b752c83063601c28dc155bdc072a4019a94a6e73e1-runc.sy5q8i.mount: Deactivated successfully.
Jul 2 08:03:11.444834 systemd[1]: Started cri-containerd-098b1f0c645bae3c86b3a1b752c83063601c28dc155bdc072a4019a94a6e73e1.scope.
Jul 2 08:03:11.479987 env[1433]: time="2024-07-02T08:03:11.479940547Z" level=info msg="StartContainer for \"098b1f0c645bae3c86b3a1b752c83063601c28dc155bdc072a4019a94a6e73e1\" returns successfully"
Jul 2 08:03:11.542471 kubelet[1916]: E0702 08:03:11.542338 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:11.718011 env[1433]: time="2024-07-02T08:03:11.717964776Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:03:11.720743 kubelet[1916]: I0702 08:03:11.720685 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5p42p" podStartSLOduration=4.534042211 podStartE2EDuration="19.720668626s" podCreationTimestamp="2024-07-02 08:02:52 +0000 UTC" firstStartedPulling="2024-07-02 08:02:56.177744582 +0000 UTC m=+4.385955946" lastFinishedPulling="2024-07-02 08:03:11.364370997 +0000 UTC m=+19.572582361" observedRunningTime="2024-07-02 08:03:11.720540424 +0000 UTC m=+19.928751888" watchObservedRunningTime="2024-07-02 08:03:11.720668626 +0000 UTC m=+19.928879990"
Jul 2 08:03:11.755681 env[1433]: time="2024-07-02T08:03:11.755215269Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\""
Jul 2 08:03:11.756115 env[1433]: time="2024-07-02T08:03:11.756083285Z" level=info msg="StartContainer for \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\""
Jul 2 08:03:11.776941 systemd[1]: Started cri-containerd-2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d.scope.
Jul 2 08:03:11.807787 systemd[1]: cri-containerd-2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d.scope: Deactivated successfully.
Jul 2 08:03:11.812426 env[1433]: time="2024-07-02T08:03:11.812341732Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffe85cec_733a_401d_9467_ffe86bf0044b.slice/cri-containerd-2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d.scope/memory.events\": no such file or directory"
Jul 2 08:03:11.815388 env[1433]: time="2024-07-02T08:03:11.815226186Z" level=info msg="StartContainer for \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\" returns successfully"
Jul 2 08:03:11.850859 env[1433]: time="2024-07-02T08:03:11.850805048Z" level=info msg="shim disconnected" id=2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d
Jul 2 08:03:11.850859 env[1433]: time="2024-07-02T08:03:11.850851448Z" level=warning msg="cleaning up after shim disconnected" id=2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d namespace=k8s.io
Jul 2 08:03:11.850859 env[1433]: time="2024-07-02T08:03:11.850864549Z" level=info msg="cleaning up dead shim"
Jul 2 08:03:11.858389 env[1433]: time="2024-07-02T08:03:11.858347688Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:03:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2379 runtime=io.containerd.runc.v2\n"
Jul 2 08:03:12.530522 kubelet[1916]: E0702 08:03:12.530464 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:12.542883 kubelet[1916]: E0702 08:03:12.542831 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:12.724312 env[1433]: time="2024-07-02T08:03:12.724183340Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:03:12.755904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743427978.mount: Deactivated successfully.
Jul 2 08:03:12.763561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116867313.mount: Deactivated successfully.
Jul 2 08:03:12.777516 env[1433]: time="2024-07-02T08:03:12.777459705Z" level=info msg="CreateContainer within sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\""
Jul 2 08:03:12.778043 env[1433]: time="2024-07-02T08:03:12.778013115Z" level=info msg="StartContainer for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\""
Jul 2 08:03:12.795317 systemd[1]: Started cri-containerd-ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b.scope.
Jul 2 08:03:12.833357 env[1433]: time="2024-07-02T08:03:12.833314317Z" level=info msg="StartContainer for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" returns successfully"
Jul 2 08:03:13.010098 kubelet[1916]: I0702 08:03:13.010068 1916 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 08:03:13.543287 kubelet[1916]: E0702 08:03:13.543229 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:13.552288 kernel: Initializing XFRM netlink socket
Jul 2 08:03:13.771982 kubelet[1916]: I0702 08:03:13.771907 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-brm8r" podStartSLOduration=13.091846025 podStartE2EDuration="21.771865047s" podCreationTimestamp="2024-07-02 08:02:52 +0000 UTC" firstStartedPulling="2024-07-02 08:02:56.170551471 +0000 UTC m=+4.378762935" lastFinishedPulling="2024-07-02 08:03:04.850570593 +0000 UTC m=+13.058781957" observedRunningTime="2024-07-02 08:03:13.770849429 +0000 UTC m=+21.979060793" watchObservedRunningTime="2024-07-02 08:03:13.771865047 +0000 UTC m=+21.980076511"
Jul 2 08:03:14.543842 kubelet[1916]: E0702 08:03:14.543776 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:15.197320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 08:03:15.198233 systemd-networkd[1580]: cilium_host: Link UP
Jul 2 08:03:15.199245 systemd-networkd[1580]: cilium_net: Link UP
Jul 2 08:03:15.199254 systemd-networkd[1580]: cilium_net: Gained carrier
Jul 2 08:03:15.199501 systemd-networkd[1580]: cilium_host: Gained carrier
Jul 2 08:03:15.400155 systemd-networkd[1580]: cilium_vxlan: Link UP
Jul 2 08:03:15.400165 systemd-networkd[1580]: cilium_vxlan: Gained carrier
Jul 2 08:03:15.470442 systemd-networkd[1580]: cilium_host: Gained IPv6LL
Jul 2 08:03:15.544324 kubelet[1916]: E0702 08:03:15.544231 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:15.651315 kernel: NET: Registered PF_ALG protocol family
Jul 2 08:03:15.982447 systemd-networkd[1580]: cilium_net: Gained IPv6LL
Jul 2 08:03:16.533478 systemd-networkd[1580]: lxc_health: Link UP
Jul 2 08:03:16.544845 kubelet[1916]: E0702 08:03:16.544810 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:16.550040 systemd-networkd[1580]: lxc_health: Gained carrier
Jul 2 08:03:16.550282 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 08:03:16.814454 systemd-networkd[1580]: cilium_vxlan: Gained IPv6LL
Jul 2 08:03:17.545993 kubelet[1916]: E0702 08:03:17.545938 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:18.542503 systemd-networkd[1580]: lxc_health: Gained IPv6LL
Jul 2 08:03:18.546434 kubelet[1916]: E0702 08:03:18.546393 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:19.547335 kubelet[1916]: E0702 08:03:19.547283 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:20.516595 kubelet[1916]: I0702 08:03:20.516378 1916 topology_manager.go:215] "Topology Admit Handler" podUID="8b64fcc9-3167-421a-be45-ee8c936f97b4" podNamespace="default" podName="nginx-deployment-85f456d6dd-sfnc4"
Jul 2 08:03:20.524478 systemd[1]: Created slice kubepods-besteffort-pod8b64fcc9_3167_421a_be45_ee8c936f97b4.slice.
Jul 2 08:03:20.543647 kubelet[1916]: I0702 08:03:20.543612 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqt9\" (UniqueName: \"kubernetes.io/projected/8b64fcc9-3167-421a-be45-ee8c936f97b4-kube-api-access-jvqt9\") pod \"nginx-deployment-85f456d6dd-sfnc4\" (UID: \"8b64fcc9-3167-421a-be45-ee8c936f97b4\") " pod="default/nginx-deployment-85f456d6dd-sfnc4"
Jul 2 08:03:20.548068 kubelet[1916]: E0702 08:03:20.548042 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:20.830616 env[1433]: time="2024-07-02T08:03:20.830480724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sfnc4,Uid:8b64fcc9-3167-421a-be45-ee8c936f97b4,Namespace:default,Attempt:0,}"
Jul 2 08:03:20.895441 systemd-networkd[1580]: lxcc9ac9c92ab1e: Link UP
Jul 2 08:03:20.903368 kernel: eth0: renamed from tmp96f1e
Jul 2 08:03:20.913661 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 08:03:20.913753 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc9ac9c92ab1e: link becomes ready
Jul 2 08:03:20.917348 systemd-networkd[1580]: lxcc9ac9c92ab1e: Gained carrier
Jul 2 08:03:21.143708 env[1433]: time="2024-07-02T08:03:21.143343263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:03:21.144016 env[1433]: time="2024-07-02T08:03:21.143381864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:03:21.144016 env[1433]: time="2024-07-02T08:03:21.143395164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:03:21.144016 env[1433]: time="2024-07-02T08:03:21.143543366Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96f1ebb4def286852f75f5726c597bfcc494d0a44cca27ac085dad24ad0e5a1b pid=2953 runtime=io.containerd.runc.v2
Jul 2 08:03:21.164510 systemd[1]: Started cri-containerd-96f1ebb4def286852f75f5726c597bfcc494d0a44cca27ac085dad24ad0e5a1b.scope.
Jul 2 08:03:21.204507 env[1433]: time="2024-07-02T08:03:21.204464238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sfnc4,Uid:8b64fcc9-3167-421a-be45-ee8c936f97b4,Namespace:default,Attempt:0,} returns sandbox id \"96f1ebb4def286852f75f5726c597bfcc494d0a44cca27ac085dad24ad0e5a1b\""
Jul 2 08:03:21.206720 env[1433]: time="2024-07-02T08:03:21.206690669Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jul 2 08:03:21.549576 kubelet[1916]: E0702 08:03:21.549511 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:21.656593 systemd[1]: run-containerd-runc-k8s.io-96f1ebb4def286852f75f5726c597bfcc494d0a44cca27ac085dad24ad0e5a1b-runc.Qahfpf.mount: Deactivated successfully.
Jul 2 08:03:22.254601 systemd-networkd[1580]: lxcc9ac9c92ab1e: Gained IPv6LL
Jul 2 08:03:22.550248 kubelet[1916]: E0702 08:03:22.550151 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:23.550399 kubelet[1916]: E0702 08:03:23.550338 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:24.550876 kubelet[1916]: E0702 08:03:24.550832 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:24.613161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132212288.mount: Deactivated successfully.
Jul 2 08:03:25.551120 kubelet[1916]: E0702 08:03:25.551074 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:26.136352 env[1433]: time="2024-07-02T08:03:26.136297784Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:26.147176 env[1433]: time="2024-07-02T08:03:26.147122121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:26.153415 env[1433]: time="2024-07-02T08:03:26.153380900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:26.158521 env[1433]: time="2024-07-02T08:03:26.158481764Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:26.159163 env[1433]: time="2024-07-02T08:03:26.159127472Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\""
Jul 2 08:03:26.161799 env[1433]: time="2024-07-02T08:03:26.161769906Z" level=info msg="CreateContainer within sandbox \"96f1ebb4def286852f75f5726c597bfcc494d0a44cca27ac085dad24ad0e5a1b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jul 2 08:03:26.198737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740723291.mount: Deactivated successfully.
Jul 2 08:03:26.216005 env[1433]: time="2024-07-02T08:03:26.215950789Z" level=info msg="CreateContainer within sandbox \"96f1ebb4def286852f75f5726c597bfcc494d0a44cca27ac085dad24ad0e5a1b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d3805ef58b1dd8c8c6fa636875ce2817e32f1ad197dffd7e5f746a6f1eb1abd1\""
Jul 2 08:03:26.216717 env[1433]: time="2024-07-02T08:03:26.216686299Z" level=info msg="StartContainer for \"d3805ef58b1dd8c8c6fa636875ce2817e32f1ad197dffd7e5f746a6f1eb1abd1\""
Jul 2 08:03:26.242974 systemd[1]: Started cri-containerd-d3805ef58b1dd8c8c6fa636875ce2817e32f1ad197dffd7e5f746a6f1eb1abd1.scope.
Jul 2 08:03:26.286177 env[1433]: time="2024-07-02T08:03:26.286119375Z" level=info msg="StartContainer for \"d3805ef58b1dd8c8c6fa636875ce2817e32f1ad197dffd7e5f746a6f1eb1abd1\" returns successfully"
Jul 2 08:03:26.551680 kubelet[1916]: E0702 08:03:26.551629 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:27.193380 systemd[1]: run-containerd-runc-k8s.io-d3805ef58b1dd8c8c6fa636875ce2817e32f1ad197dffd7e5f746a6f1eb1abd1-runc.Lcb1Pg.mount: Deactivated successfully.
Jul 2 08:03:27.551921 kubelet[1916]: E0702 08:03:27.551859 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:28.552829 kubelet[1916]: E0702 08:03:28.552765 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:29.553533 kubelet[1916]: E0702 08:03:29.553474 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:30.553740 kubelet[1916]: E0702 08:03:30.553679 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:31.554846 kubelet[1916]: E0702 08:03:31.554785 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:32.141454 kubelet[1916]: I0702 08:03:32.141390 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-sfnc4" podStartSLOduration=7.187021679 podStartE2EDuration="12.141364807s" podCreationTimestamp="2024-07-02 08:03:20 +0000 UTC" firstStartedPulling="2024-07-02 08:03:21.206173462 +0000 UTC m=+29.414384926" lastFinishedPulling="2024-07-02 08:03:26.16051669 +0000 UTC m=+34.368728054" observedRunningTime="2024-07-02 08:03:26.780364411 +0000 UTC m=+34.988575875" watchObservedRunningTime="2024-07-02 08:03:32.141364807 +0000 UTC m=+40.349576171"
Jul 2 08:03:32.141779 kubelet[1916]: I0702 08:03:32.141753 1916 topology_manager.go:215] "Topology Admit Handler" podUID="13aeddd4-2c08-415f-a9aa-8bc8c00870fc" podNamespace="default" podName="nfs-server-provisioner-0"
Jul 2 08:03:32.147049 systemd[1]: Created slice kubepods-besteffort-pod13aeddd4_2c08_415f_a9aa_8bc8c00870fc.slice.
Jul 2 08:03:32.311384 kubelet[1916]: I0702 08:03:32.311324 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/13aeddd4-2c08-415f-a9aa-8bc8c00870fc-data\") pod \"nfs-server-provisioner-0\" (UID: \"13aeddd4-2c08-415f-a9aa-8bc8c00870fc\") " pod="default/nfs-server-provisioner-0"
Jul 2 08:03:32.311384 kubelet[1916]: I0702 08:03:32.311379 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdr96\" (UniqueName: \"kubernetes.io/projected/13aeddd4-2c08-415f-a9aa-8bc8c00870fc-kube-api-access-pdr96\") pod \"nfs-server-provisioner-0\" (UID: \"13aeddd4-2c08-415f-a9aa-8bc8c00870fc\") " pod="default/nfs-server-provisioner-0"
Jul 2 08:03:32.451012 env[1433]: time="2024-07-02T08:03:32.450521982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:13aeddd4-2c08-415f-a9aa-8bc8c00870fc,Namespace:default,Attempt:0,}"
Jul 2 08:03:32.521475 systemd-networkd[1580]: lxce1e2744efb44: Link UP
Jul 2 08:03:32.529339 kernel: eth0: renamed from tmp5328b
Jul 2 08:03:32.530491 kubelet[1916]: E0702 08:03:32.530440 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:32.543630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 08:03:32.543695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce1e2744efb44: link becomes ready
Jul 2 08:03:32.543974 systemd-networkd[1580]: lxce1e2744efb44: Gained carrier
Jul 2 08:03:32.555824 kubelet[1916]: E0702 08:03:32.555786 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:32.781454 env[1433]: time="2024-07-02T08:03:32.781364693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:03:32.781675 env[1433]: time="2024-07-02T08:03:32.781414893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:03:32.781675 env[1433]: time="2024-07-02T08:03:32.781428994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:03:32.781826 env[1433]: time="2024-07-02T08:03:32.781746997Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5328b02ad99424067d70149df0f7969f6d773a7c7805955fc418715b53653632 pid=3076 runtime=io.containerd.runc.v2
Jul 2 08:03:32.802700 systemd[1]: Started cri-containerd-5328b02ad99424067d70149df0f7969f6d773a7c7805955fc418715b53653632.scope.
Jul 2 08:03:32.844901 env[1433]: time="2024-07-02T08:03:32.844854986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:13aeddd4-2c08-415f-a9aa-8bc8c00870fc,Namespace:default,Attempt:0,} returns sandbox id \"5328b02ad99424067d70149df0f7969f6d773a7c7805955fc418715b53653632\""
Jul 2 08:03:32.846664 env[1433]: time="2024-07-02T08:03:32.846626105Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jul 2 08:03:33.423304 systemd[1]: run-containerd-runc-k8s.io-5328b02ad99424067d70149df0f7969f6d773a7c7805955fc418715b53653632-runc.nbCtQy.mount: Deactivated successfully.
Jul 2 08:03:33.556195 kubelet[1916]: E0702 08:03:33.556149 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:34.350577 systemd-networkd[1580]: lxce1e2744efb44: Gained IPv6LL
Jul 2 08:03:34.557075 kubelet[1916]: E0702 08:03:34.557028 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:35.558093 kubelet[1916]: E0702 08:03:35.558025 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:35.628064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914382863.mount: Deactivated successfully.
Jul 2 08:03:36.558811 kubelet[1916]: E0702 08:03:36.558746 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:37.559741 kubelet[1916]: E0702 08:03:37.559689 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:37.723381 env[1433]: time="2024-07-02T08:03:37.723322719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:37.731828 env[1433]: time="2024-07-02T08:03:37.731778401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:37.736918 env[1433]: time="2024-07-02T08:03:37.736878951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:37.741590 env[1433]: time="2024-07-02T08:03:37.741553196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:37.742164 env[1433]: time="2024-07-02T08:03:37.742127402Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jul 2 08:03:37.744896 env[1433]: time="2024-07-02T08:03:37.744857628Z" level=info msg="CreateContainer within sandbox \"5328b02ad99424067d70149df0f7969f6d773a7c7805955fc418715b53653632\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jul 2 08:03:37.773325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340461739.mount: Deactivated successfully.
Jul 2 08:03:37.790861 env[1433]: time="2024-07-02T08:03:37.790808275Z" level=info msg="CreateContainer within sandbox \"5328b02ad99424067d70149df0f7969f6d773a7c7805955fc418715b53653632\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d09011ec779fd50b9235bc12680983f4dd993ba9d6bfffbfc3da96d84620ba5a\""
Jul 2 08:03:37.791506 env[1433]: time="2024-07-02T08:03:37.791377881Z" level=info msg="StartContainer for \"d09011ec779fd50b9235bc12680983f4dd993ba9d6bfffbfc3da96d84620ba5a\""
Jul 2 08:03:37.817850 systemd[1]: Started cri-containerd-d09011ec779fd50b9235bc12680983f4dd993ba9d6bfffbfc3da96d84620ba5a.scope.
Jul 2 08:03:37.849123 env[1433]: time="2024-07-02T08:03:37.849074742Z" level=info msg="StartContainer for \"d09011ec779fd50b9235bc12680983f4dd993ba9d6bfffbfc3da96d84620ba5a\" returns successfully"
Jul 2 08:03:38.560317 kubelet[1916]: E0702 08:03:38.560247 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:38.822746 kubelet[1916]: I0702 08:03:38.822573 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.9254808190000001 podStartE2EDuration="6.822557031s" podCreationTimestamp="2024-07-02 08:03:32 +0000 UTC" firstStartedPulling="2024-07-02 08:03:32.8461427 +0000 UTC m=+41.054354064" lastFinishedPulling="2024-07-02 08:03:37.743218812 +0000 UTC m=+45.951430276" observedRunningTime="2024-07-02 08:03:38.822128327 +0000 UTC m=+47.030339791" watchObservedRunningTime="2024-07-02 08:03:38.822557031 +0000 UTC m=+47.030768495"
Jul 2 08:03:39.560761 kubelet[1916]: E0702 08:03:39.560695 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:40.561105 kubelet[1916]: E0702 08:03:40.561041 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:41.562018 kubelet[1916]: E0702 08:03:41.561968 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:42.562199 kubelet[1916]: E0702 08:03:42.562135 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:43.562808 kubelet[1916]: E0702 08:03:43.562749 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:44.563708 kubelet[1916]: E0702 08:03:44.563650 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:45.563879 kubelet[1916]: E0702 08:03:45.563828 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:46.564616 kubelet[1916]: E0702 08:03:46.564555 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:47.470572 kubelet[1916]: I0702 08:03:47.470526 1916 topology_manager.go:215] "Topology Admit Handler" podUID="39a77e24-765f-445e-ab62-3161d0c91f0d" podNamespace="default" podName="test-pod-1"
Jul 2 08:03:47.476556 systemd[1]: Created slice kubepods-besteffort-pod39a77e24_765f_445e_ab62_3161d0c91f0d.slice.
Jul 2 08:03:47.565099 kubelet[1916]: E0702 08:03:47.565044 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:47.593416 kubelet[1916]: I0702 08:03:47.593363 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a01dbe6-f5c3-4b74-a6a0-e62f43359804\" (UniqueName: \"kubernetes.io/nfs/39a77e24-765f-445e-ab62-3161d0c91f0d-pvc-9a01dbe6-f5c3-4b74-a6a0-e62f43359804\") pod \"test-pod-1\" (UID: \"39a77e24-765f-445e-ab62-3161d0c91f0d\") " pod="default/test-pod-1"
Jul 2 08:03:47.593744 kubelet[1916]: I0702 08:03:47.593703 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqx6n\" (UniqueName: \"kubernetes.io/projected/39a77e24-765f-445e-ab62-3161d0c91f0d-kube-api-access-mqx6n\") pod \"test-pod-1\" (UID: \"39a77e24-765f-445e-ab62-3161d0c91f0d\") " pod="default/test-pod-1"
Jul 2 08:03:48.015298 kernel: FS-Cache: Loaded
Jul 2 08:03:48.168104 kernel: RPC: Registered named UNIX socket transport module.
Jul 2 08:03:48.168246 kernel: RPC: Registered udp transport module.
Jul 2 08:03:48.168290 kernel: RPC: Registered tcp transport module.
Jul 2 08:03:48.173967 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jul 2 08:03:48.503409 kernel: FS-Cache: Netfs 'nfs' registered for caching
Jul 2 08:03:48.565874 kubelet[1916]: E0702 08:03:48.565830 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:48.840394 kernel: NFS: Registering the id_resolver key type
Jul 2 08:03:48.840542 kernel: Key type id_resolver registered
Jul 2 08:03:48.840569 kernel: Key type id_legacy registered
Jul 2 08:03:49.239133 nfsidmap[3194]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.5-a-a726d90360'
Jul 2 08:03:49.261542 nfsidmap[3195]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.5-a-a726d90360'
Jul 2 08:03:49.280221 env[1433]: time="2024-07-02T08:03:49.280178704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39a77e24-765f-445e-ab62-3161d0c91f0d,Namespace:default,Attempt:0,}"
Jul 2 08:03:49.350195 systemd-networkd[1580]: lxc31839827b4be: Link UP
Jul 2 08:03:49.358372 kernel: eth0: renamed from tmp6abd7
Jul 2 08:03:49.370795 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 08:03:49.370907 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc31839827b4be: link becomes ready
Jul 2 08:03:49.371763 systemd-networkd[1580]: lxc31839827b4be: Gained carrier
Jul 2 08:03:49.567455 kubelet[1916]: E0702 08:03:49.567395 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:49.596500 env[1433]: time="2024-07-02T08:03:49.596407686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:03:49.596500 env[1433]: time="2024-07-02T08:03:49.596464986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:03:49.596500 env[1433]: time="2024-07-02T08:03:49.596479786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:03:49.597011 env[1433]: time="2024-07-02T08:03:49.596955690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6abd7e7221688acd2c1223144e55de2d2d9eb5b26c6c13795835501615ff7ca8 pid=3225 runtime=io.containerd.runc.v2
Jul 2 08:03:49.614836 systemd[1]: Started cri-containerd-6abd7e7221688acd2c1223144e55de2d2d9eb5b26c6c13795835501615ff7ca8.scope.
Jul 2 08:03:49.658399 env[1433]: time="2024-07-02T08:03:49.658354952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39a77e24-765f-445e-ab62-3161d0c91f0d,Namespace:default,Attempt:0,} returns sandbox id \"6abd7e7221688acd2c1223144e55de2d2d9eb5b26c6c13795835501615ff7ca8\""
Jul 2 08:03:49.660498 env[1433]: time="2024-07-02T08:03:49.660467668Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jul 2 08:03:50.097965 env[1433]: time="2024-07-02T08:03:50.097918149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:50.107565 env[1433]: time="2024-07-02T08:03:50.107394019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:50.113156 env[1433]: time="2024-07-02T08:03:50.113116161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:50.119835 env[1433]: time="2024-07-02T08:03:50.119795010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:03:50.120413 env[1433]: time="2024-07-02T08:03:50.120374115Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\""
Jul 2 08:03:50.123295 env[1433]: time="2024-07-02T08:03:50.123250036Z" level=info msg="CreateContainer within sandbox \"6abd7e7221688acd2c1223144e55de2d2d9eb5b26c6c13795835501615ff7ca8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jul 2 08:03:50.167073 env[1433]: time="2024-07-02T08:03:50.167019659Z" level=info msg="CreateContainer within sandbox \"6abd7e7221688acd2c1223144e55de2d2d9eb5b26c6c13795835501615ff7ca8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f3d16e9439092e721ce7b1e4af35509b53f78a80d79908a3b526998fd1bc0e02\""
Jul 2 08:03:50.167930 env[1433]: time="2024-07-02T08:03:50.167877865Z" level=info msg="StartContainer for \"f3d16e9439092e721ce7b1e4af35509b53f78a80d79908a3b526998fd1bc0e02\""
Jul 2 08:03:50.185805 systemd[1]: Started cri-containerd-f3d16e9439092e721ce7b1e4af35509b53f78a80d79908a3b526998fd1bc0e02.scope.
Jul 2 08:03:50.221307 env[1433]: time="2024-07-02T08:03:50.221236359Z" level=info msg="StartContainer for \"f3d16e9439092e721ce7b1e4af35509b53f78a80d79908a3b526998fd1bc0e02\" returns successfully"
Jul 2 08:03:50.568505 kubelet[1916]: E0702 08:03:50.568447 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:50.734607 systemd-networkd[1580]: lxc31839827b4be: Gained IPv6LL
Jul 2 08:03:50.847173 kubelet[1916]: I0702 08:03:50.847048 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.385157819 podStartE2EDuration="17.84703358s" podCreationTimestamp="2024-07-02 08:03:33 +0000 UTC" firstStartedPulling="2024-07-02 08:03:49.659949364 +0000 UTC m=+57.868160728" lastFinishedPulling="2024-07-02 08:03:50.121825125 +0000 UTC m=+58.330036489" observedRunningTime="2024-07-02 08:03:50.846898979 +0000 UTC m=+59.055110443" watchObservedRunningTime="2024-07-02 08:03:50.84703358 +0000 UTC m=+59.055244944"
Jul 2 08:03:51.569676 kubelet[1916]: E0702 08:03:51.569611 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:52.529957 kubelet[1916]: E0702 08:03:52.529899 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:52.570735 kubelet[1916]: E0702 08:03:52.570704 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:53.571669 kubelet[1916]: E0702 08:03:53.571609 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:54.572403 kubelet[1916]: E0702 08:03:54.572350 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:55.572853 kubelet[1916]: E0702 08:03:55.572788 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:56.107640 env[1433]: time="2024-07-02T08:03:56.107575578Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:03:56.112809 env[1433]: time="2024-07-02T08:03:56.112774312Z" level=info msg="StopContainer for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" with timeout 2 (s)"
Jul 2 08:03:56.113134 env[1433]: time="2024-07-02T08:03:56.113102514Z" level=info msg="Stop container \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" with signal terminated"
Jul 2 08:03:56.120843 systemd-networkd[1580]: lxc_health: Link DOWN
Jul 2 08:03:56.120852 systemd-networkd[1580]: lxc_health: Lost carrier
Jul 2 08:03:56.148720 systemd[1]: cri-containerd-ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b.scope: Deactivated successfully.
Jul 2 08:03:56.149028 systemd[1]: cri-containerd-ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b.scope: Consumed 6.242s CPU time.
Jul 2 08:03:56.168467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b-rootfs.mount: Deactivated successfully.
Jul 2 08:03:56.573045 kubelet[1916]: E0702 08:03:56.572985 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:57.573223 kubelet[1916]: E0702 08:03:57.573169 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:57.624096 kubelet[1916]: E0702 08:03:57.624035 1916 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:03:58.119015 env[1433]: time="2024-07-02T08:03:58.118941184Z" level=info msg="Kill container \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\""
Jul 2 08:03:58.574286 kubelet[1916]: E0702 08:03:58.574205 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:59.575436 kubelet[1916]: E0702 08:03:59.575369 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:03:59.703534 env[1433]: time="2024-07-02T08:03:59.703464177Z" level=info msg="shim disconnected" id=ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b
Jul 2 08:03:59.703534 env[1433]: time="2024-07-02T08:03:59.703533777Z" level=warning msg="cleaning up after shim disconnected" id=ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b namespace=k8s.io
Jul 2 08:03:59.704132 env[1433]: time="2024-07-02T08:03:59.703548477Z" level=info msg="cleaning up dead shim"
Jul 2 08:03:59.712343 env[1433]: time="2024-07-02T08:03:59.712298632Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:03:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n"
Jul 2 08:03:59.720170 env[1433]: time="2024-07-02T08:03:59.720131781Z" level=info msg="StopContainer for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" returns successfully"
Jul 2 08:03:59.720904 env[1433]: time="2024-07-02T08:03:59.720870186Z" level=info msg="StopPodSandbox for \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\""
Jul 2 08:03:59.721023 env[1433]: time="2024-07-02T08:03:59.720940886Z" level=info msg="Container to stop \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:03:59.721023 env[1433]: time="2024-07-02T08:03:59.720960386Z" level=info msg="Container to stop \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:03:59.721023 env[1433]: time="2024-07-02T08:03:59.720974686Z" level=info msg="Container to stop \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:03:59.721023 env[1433]: time="2024-07-02T08:03:59.720988986Z" level=info msg="Container to stop \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:03:59.721023 env[1433]: time="2024-07-02T08:03:59.721002786Z" level=info msg="Container to stop \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:03:59.723488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b-shm.mount: Deactivated successfully.
Jul 2 08:03:59.730234 systemd[1]: cri-containerd-b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b.scope: Deactivated successfully.
Jul 2 08:03:59.752325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b-rootfs.mount: Deactivated successfully.
Jul 2 08:03:59.768422 env[1433]: time="2024-07-02T08:03:59.768369182Z" level=info msg="shim disconnected" id=b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b
Jul 2 08:03:59.768691 env[1433]: time="2024-07-02T08:03:59.768667184Z" level=warning msg="cleaning up after shim disconnected" id=b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b namespace=k8s.io
Jul 2 08:03:59.768782 env[1433]: time="2024-07-02T08:03:59.768768085Z" level=info msg="cleaning up dead shim"
Jul 2 08:03:59.776352 env[1433]: time="2024-07-02T08:03:59.776313932Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:03:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3391 runtime=io.containerd.runc.v2\n"
Jul 2 08:03:59.776653 env[1433]: time="2024-07-02T08:03:59.776627034Z" level=info msg="TearDown network for sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" successfully"
Jul 2 08:03:59.776735 env[1433]: time="2024-07-02T08:03:59.776652034Z" level=info msg="StopPodSandbox for \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" returns successfully"
Jul 2 08:03:59.867672 kubelet[1916]: I0702 08:03:59.866742 1916 scope.go:117] "RemoveContainer" containerID="ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b"
Jul 2 08:03:59.868301 env[1433]: time="2024-07-02T08:03:59.868248106Z" level=info msg="RemoveContainer for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\""
Jul 2 08:03:59.872835 kubelet[1916]: I0702 08:03:59.871599 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-config-path\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.872835 kubelet[1916]: I0702 08:03:59.871633 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-hostproc\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.872835 kubelet[1916]: I0702 08:03:59.871660 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-cgroup\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.872835 kubelet[1916]: I0702 08:03:59.871694 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-etc-cni-netd\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.872835 kubelet[1916]: I0702 08:03:59.871712 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-run\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.872835 kubelet[1916]: I0702 08:03:59.871734 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-lib-modules\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873175 kubelet[1916]: I0702 08:03:59.871767 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-xtables-lock\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873175 kubelet[1916]: I0702 08:03:59.871791 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8mpr\" (UniqueName: \"kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-kube-api-access-l8mpr\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873175 kubelet[1916]: I0702 08:03:59.871819 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffe85cec-733a-401d-9467-ffe86bf0044b-clustermesh-secrets\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873175 kubelet[1916]: I0702 08:03:59.871856 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cni-path\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873175 kubelet[1916]: I0702 08:03:59.871877 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-net\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873175 kubelet[1916]: I0702 08:03:59.871898 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-kernel\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873450 kubelet[1916]: I0702 08:03:59.871933 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-bpf-maps\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.873450 kubelet[1916]: I0702 08:03:59.871956 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-hubble-tls\") pod \"ffe85cec-733a-401d-9467-ffe86bf0044b\" (UID: \"ffe85cec-733a-401d-9467-ffe86bf0044b\") "
Jul 2 08:03:59.874360 kubelet[1916]: I0702 08:03:59.874248 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:03:59.874453 kubelet[1916]: I0702 08:03:59.874393 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874453 kubelet[1916]: I0702 08:03:59.874430 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-hostproc" (OuterVolumeSpecName: "hostproc") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874555 kubelet[1916]: I0702 08:03:59.874449 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874555 kubelet[1916]: I0702 08:03:59.874473 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874555 kubelet[1916]: I0702 08:03:59.874494 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874555 kubelet[1916]: I0702 08:03:59.874512 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874555 kubelet[1916]: I0702 08:03:59.874532 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cni-path" (OuterVolumeSpecName: "cni-path") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874943 kubelet[1916]: I0702 08:03:59.874841 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874943 kubelet[1916]: I0702 08:03:59.874874 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.874943 kubelet[1916]: I0702 08:03:59.874895 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:03:59.882065 systemd[1]: var-lib-kubelet-pods-ffe85cec\x2d733a\x2d401d\x2d9467\x2dffe86bf0044b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 08:03:59.884692 kubelet[1916]: I0702 08:03:59.884666 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:03:59.885777 systemd[1]: var-lib-kubelet-pods-ffe85cec\x2d733a\x2d401d\x2d9467\x2dffe86bf0044b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl8mpr.mount: Deactivated successfully.
Jul 2 08:03:59.885889 systemd[1]: var-lib-kubelet-pods-ffe85cec\x2d733a\x2d401d\x2d9467\x2dffe86bf0044b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 08:03:59.888161 env[1433]: time="2024-07-02T08:03:59.888002229Z" level=info msg="RemoveContainer for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" returns successfully"
Jul 2 08:03:59.888441 kubelet[1916]: I0702 08:03:59.888422 1916 scope.go:117] "RemoveContainer" containerID="2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d"
Jul 2 08:03:59.889465 kubelet[1916]: I0702 08:03:59.889151 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-kube-api-access-l8mpr" (OuterVolumeSpecName: "kube-api-access-l8mpr") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "kube-api-access-l8mpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:03:59.889465 kubelet[1916]: I0702 08:03:59.889454 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe85cec-733a-401d-9467-ffe86bf0044b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ffe85cec-733a-401d-9467-ffe86bf0044b" (UID: "ffe85cec-733a-401d-9467-ffe86bf0044b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:03:59.890106 env[1433]: time="2024-07-02T08:03:59.890080142Z" level=info msg="RemoveContainer for \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\""
Jul 2 08:03:59.905026 kubelet[1916]: I0702 08:03:59.904990 1916 topology_manager.go:215] "Topology Admit Handler" podUID="d9a27c97-7891-4e03-acaf-7d7526037072" podNamespace="kube-system" podName="cilium-operator-599987898-q77vl"
Jul 2 08:03:59.905160 kubelet[1916]: E0702 08:03:59.905047 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" containerName="mount-bpf-fs"
Jul 2 08:03:59.905160 kubelet[1916]: E0702 08:03:59.905059 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" containerName="cilium-agent"
Jul 2 08:03:59.905160 kubelet[1916]: E0702 08:03:59.905068 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" containerName="clean-cilium-state"
Jul 2 08:03:59.905160 kubelet[1916]: E0702 08:03:59.905075 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" containerName="mount-cgroup"
Jul 2 08:03:59.905160 kubelet[1916]: E0702 08:03:59.905084 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" containerName="apply-sysctl-overwrites"
Jul 2 08:03:59.905160 kubelet[1916]: I0702 08:03:59.905108 1916 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" containerName="cilium-agent"
Jul 2 08:03:59.908620 env[1433]: time="2024-07-02T08:03:59.908581058Z" level=info msg="RemoveContainer for \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\" returns successfully"
Jul 2 08:03:59.908841 kubelet[1916]: I0702 08:03:59.908821 1916 scope.go:117] "RemoveContainer" containerID="80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66"
Jul 2 08:03:59.911504 env[1433]: time="2024-07-02T08:03:59.911252974Z" level=info msg="RemoveContainer for \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\""
Jul 2 08:03:59.911332 systemd[1]: Created slice kubepods-besteffort-podd9a27c97_7891_4e03_acaf_7d7526037072.slice.
Jul 2 08:03:59.921151 env[1433]: time="2024-07-02T08:03:59.921104036Z" level=info msg="RemoveContainer for \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\" returns successfully"
Jul 2 08:03:59.921424 kubelet[1916]: I0702 08:03:59.921401 1916 scope.go:117] "RemoveContainer" containerID="6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287"
Jul 2 08:03:59.922638 env[1433]: time="2024-07-02T08:03:59.922606345Z" level=info msg="RemoveContainer for \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\""
Jul 2 08:03:59.932433 env[1433]: time="2024-07-02T08:03:59.932389906Z" level=info msg="RemoveContainer for \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\" returns successfully"
Jul 2 08:03:59.932691 kubelet[1916]: I0702 08:03:59.932663 1916 scope.go:117] "RemoveContainer" containerID="b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8"
Jul 2 08:03:59.933816 env[1433]: time="2024-07-02T08:03:59.933784115Z" level=info msg="RemoveContainer for \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\""
Jul 2 08:03:59.946575 env[1433]: time="2024-07-02T08:03:59.946525495Z" level=info msg="RemoveContainer for \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\" returns successfully"
Jul 2 08:03:59.946906 kubelet[1916]: I0702 08:03:59.946882 1916 scope.go:117] "RemoveContainer" containerID="ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b"
Jul 2 08:03:59.947339 env[1433]: time="2024-07-02T08:03:59.947238699Z" level=error msg="ContainerStatus for \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\": not found"
Jul 2 08:03:59.947664 kubelet[1916]: E0702 08:03:59.947617 1916 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\": not found" containerID="ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b"
Jul 2 08:03:59.947777 kubelet[1916]: I0702 08:03:59.947673 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b"} err="failed to get container status \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ceb1af17c7d4ae883a9a0db4f7431077c08557eade0f9c29124cbb965361410b\": not found"
Jul 2 08:03:59.947832 kubelet[1916]: I0702 08:03:59.947785 1916 scope.go:117] "RemoveContainer" containerID="2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d"
Jul 2 08:03:59.948045 env[1433]: time="2024-07-02T08:03:59.947982804Z" level=error msg="ContainerStatus for \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\": not found"
Jul 2 08:03:59.948207 kubelet[1916]: E0702 08:03:59.948173 1916 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\": not found" containerID="2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d"
Jul 2 08:03:59.948301 kubelet[1916]: I0702 08:03:59.948215 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d"} err="failed to get container status \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a72d3e97203a06ec7cc1c5cbb057fc3e09c2f61ee09600f78fc9377da09761d\": not found"
Jul 2 08:03:59.948301 kubelet[1916]: I0702 08:03:59.948239 1916 scope.go:117] "RemoveContainer" containerID="80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66"
Jul 2 08:03:59.948611 env[1433]: time="2024-07-02T08:03:59.948554907Z" level=error msg="ContainerStatus for \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\": not found"
Jul 2 08:03:59.948747 kubelet[1916]: E0702 08:03:59.948710 1916 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\": not found" containerID="80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66"
Jul 2 08:03:59.948822 kubelet[1916]: I0702 08:03:59.948754 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66"} err="failed to get container status \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\": rpc error: code = NotFound desc = an error occurred when try to find container \"80befa92db220f6f8db634cbda95dc663576cb84871093659069ede861bc1f66\": not found"
Jul 2 08:03:59.948822 kubelet[1916]: I0702 08:03:59.948774 1916 scope.go:117] "RemoveContainer" containerID="6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287"
Jul 2 08:03:59.949000 env[1433]: time="2024-07-02T08:03:59.948954810Z" level=error msg="ContainerStatus for \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\": not found"
Jul 2 08:03:59.949172 kubelet[1916]: E0702 08:03:59.949144 1916 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\": not found" containerID="6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287"
Jul 2 08:03:59.949252 kubelet[1916]: I0702 08:03:59.949177 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287"} err="failed to get container status \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f18c70ae045a17ad3a492ea8863c10579bddf489d0440fc1c46d8f4f6608287\": not found"
Jul 2 08:03:59.949252 kubelet[1916]: I0702 08:03:59.949196 1916 scope.go:117] "RemoveContainer" containerID="b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8"
Jul 2 08:03:59.949443 env[1433]: time="2024-07-02T08:03:59.949397313Z" level=error msg="ContainerStatus for \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\": not found"
Jul 2 08:03:59.949583 kubelet[1916]: E0702 08:03:59.949556 1916 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\": not found" containerID="b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8"
Jul 2 08:03:59.949666 kubelet[1916]: I0702 08:03:59.949591 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8"} err="failed to get container status \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9c15c91ac51c22e1a95ba38302fb4990db7e26a1a1e3d9a90033b97fc7706d8\": not found"
Jul 2 08:03:59.961459 kubelet[1916]: I0702 08:03:59.961432 1916 topology_manager.go:215] "Topology Admit Handler" podUID="8a982a27-91ca-422a-be67-62f65349035b" podNamespace="kube-system" podName="cilium-bk4k2"
Jul 2 08:03:59.966184 systemd[1]: Created slice kubepods-burstable-pod8a982a27_91ca_422a_be67_62f65349035b.slice.
Jul 2 08:03:59.972224 kubelet[1916]: I0702 08:03:59.972195 1916 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-net\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972224 kubelet[1916]: I0702 08:03:59.972218 1916 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-host-proc-sys-kernel\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972231 1916 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-bpf-maps\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972246 1916 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-hubble-tls\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972258 1916 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffe85cec-733a-401d-9467-ffe86bf0044b-clustermesh-secrets\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972285 1916 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cni-path\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972296 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-config-path\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972309 1916 reconciler_common.go:289] 
"Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-hostproc\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972320 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-cgroup\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972377 kubelet[1916]: I0702 08:03:59.972330 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-cilium-run\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972591 kubelet[1916]: I0702 08:03:59.972340 1916 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-lib-modules\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972591 kubelet[1916]: I0702 08:03:59.972351 1916 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-xtables-lock\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972591 kubelet[1916]: I0702 08:03:59.972364 1916 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l8mpr\" (UniqueName: \"kubernetes.io/projected/ffe85cec-733a-401d-9467-ffe86bf0044b-kube-api-access-l8mpr\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:03:59.972591 kubelet[1916]: I0702 08:03:59.972376 1916 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffe85cec-733a-401d-9467-ffe86bf0044b-etc-cni-netd\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:00.072828 kubelet[1916]: I0702 08:04:00.072746 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cni-path\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.072828 kubelet[1916]: I0702 08:04:00.072827 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-clustermesh-secrets\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073123 kubelet[1916]: I0702 08:04:00.072861 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a982a27-91ca-422a-be67-62f65349035b-cilium-config-path\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073123 kubelet[1916]: I0702 08:04:00.072885 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-bpf-maps\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073123 kubelet[1916]: I0702 08:04:00.072908 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-hostproc\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073123 kubelet[1916]: I0702 08:04:00.072933 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-cilium-ipsec-secrets\") pod \"cilium-bk4k2\" (UID: 
\"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073123 kubelet[1916]: I0702 08:04:00.072963 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgw8\" (UniqueName: \"kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-kube-api-access-qcgw8\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073123 kubelet[1916]: I0702 08:04:00.072989 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-etc-cni-netd\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073494 kubelet[1916]: I0702 08:04:00.073018 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-xtables-lock\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073494 kubelet[1916]: I0702 08:04:00.073044 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-net\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073494 kubelet[1916]: I0702 08:04:00.073072 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9a27c97-7891-4e03-acaf-7d7526037072-cilium-config-path\") pod \"cilium-operator-599987898-q77vl\" (UID: \"d9a27c97-7891-4e03-acaf-7d7526037072\") " 
pod="kube-system/cilium-operator-599987898-q77vl" Jul 2 08:04:00.073494 kubelet[1916]: I0702 08:04:00.073101 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-cgroup\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073494 kubelet[1916]: I0702 08:04:00.073126 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-kernel\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073673 kubelet[1916]: I0702 08:04:00.073155 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-hubble-tls\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073673 kubelet[1916]: I0702 08:04:00.073188 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklph\" (UniqueName: \"kubernetes.io/projected/d9a27c97-7891-4e03-acaf-7d7526037072-kube-api-access-bklph\") pod \"cilium-operator-599987898-q77vl\" (UID: \"d9a27c97-7891-4e03-acaf-7d7526037072\") " pod="kube-system/cilium-operator-599987898-q77vl" Jul 2 08:04:00.073673 kubelet[1916]: I0702 08:04:00.073216 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-run\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.073673 
kubelet[1916]: I0702 08:04:00.073244 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-lib-modules\") pod \"cilium-bk4k2\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " pod="kube-system/cilium-bk4k2" Jul 2 08:04:00.171419 systemd[1]: Removed slice kubepods-burstable-podffe85cec_733a_401d_9467_ffe86bf0044b.slice. Jul 2 08:04:00.171573 systemd[1]: kubepods-burstable-podffe85cec_733a_401d_9467_ffe86bf0044b.slice: Consumed 6.350s CPU time. Jul 2 08:04:00.215138 env[1433]: time="2024-07-02T08:04:00.215081450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-q77vl,Uid:d9a27c97-7891-4e03-acaf-7d7526037072,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:00.253898 env[1433]: time="2024-07-02T08:04:00.253800287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:00.253898 env[1433]: time="2024-07-02T08:04:00.253849787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:00.254152 env[1433]: time="2024-07-02T08:04:00.253876088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:00.254221 env[1433]: time="2024-07-02T08:04:00.254108089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c pid=3419 runtime=io.containerd.runc.v2 Jul 2 08:04:00.268111 systemd[1]: Started cri-containerd-8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c.scope. 
Jul 2 08:04:00.270077 env[1433]: time="2024-07-02T08:04:00.270028587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bk4k2,Uid:8a982a27-91ca-422a-be67-62f65349035b,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:00.308250 env[1433]: time="2024-07-02T08:04:00.308172821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:00.308440 env[1433]: time="2024-07-02T08:04:00.308257521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:00.308440 env[1433]: time="2024-07-02T08:04:00.308393822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:00.308591 env[1433]: time="2024-07-02T08:04:00.308555123Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079 pid=3455 runtime=io.containerd.runc.v2 Jul 2 08:04:00.320475 env[1433]: time="2024-07-02T08:04:00.320425696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-q77vl,Uid:d9a27c97-7891-4e03-acaf-7d7526037072,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c\"" Jul 2 08:04:00.322413 env[1433]: time="2024-07-02T08:04:00.322380308Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:04:00.327665 systemd[1]: Started cri-containerd-bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079.scope. 
Jul 2 08:04:00.358645 env[1433]: time="2024-07-02T08:04:00.358602830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bk4k2,Uid:8a982a27-91ca-422a-be67-62f65349035b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\"" Jul 2 08:04:00.361633 env[1433]: time="2024-07-02T08:04:00.361589849Z" level=info msg="CreateContainer within sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:04:00.411913 env[1433]: time="2024-07-02T08:04:00.411851657Z" level=info msg="CreateContainer within sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\"" Jul 2 08:04:00.412664 env[1433]: time="2024-07-02T08:04:00.412625762Z" level=info msg="StartContainer for \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\"" Jul 2 08:04:00.430626 systemd[1]: Started cri-containerd-1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4.scope. Jul 2 08:04:00.443223 systemd[1]: cri-containerd-1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4.scope: Deactivated successfully. 
Jul 2 08:04:00.494850 env[1433]: time="2024-07-02T08:04:00.494790066Z" level=info msg="shim disconnected" id=1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4 Jul 2 08:04:00.494850 env[1433]: time="2024-07-02T08:04:00.494844566Z" level=warning msg="cleaning up after shim disconnected" id=1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4 namespace=k8s.io Jul 2 08:04:00.494850 env[1433]: time="2024-07-02T08:04:00.494855066Z" level=info msg="cleaning up dead shim" Jul 2 08:04:00.503079 env[1433]: time="2024-07-02T08:04:00.503028617Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3520 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:04:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:04:00.503429 env[1433]: time="2024-07-02T08:04:00.503329418Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Jul 2 08:04:00.506362 env[1433]: time="2024-07-02T08:04:00.506307837Z" level=error msg="Failed to pipe stdout of container \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\"" error="reading from a closed fifo" Jul 2 08:04:00.506463 env[1433]: time="2024-07-02T08:04:00.506390437Z" level=error msg="Failed to pipe stderr of container \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\"" error="reading from a closed fifo" Jul 2 08:04:00.511345 env[1433]: time="2024-07-02T08:04:00.511294367Z" level=error msg="StartContainer for \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:04:00.511585 kubelet[1916]: E0702 08:04:00.511549 1916 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4" Jul 2 08:04:00.511760 kubelet[1916]: E0702 08:04:00.511739 1916 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:04:00.511760 kubelet[1916]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:04:00.511760 kubelet[1916]: rm /hostbin/cilium-mount Jul 2 08:04:00.511913 kubelet[1916]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcgw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bk4k2_kube-system(8a982a27-91ca-422a-be67-62f65349035b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:04:00.511913 kubelet[1916]: E0702 08:04:00.511783 1916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bk4k2" podUID="8a982a27-91ca-422a-be67-62f65349035b" Jul 2 08:04:00.575758 kubelet[1916]: E0702 08:04:00.575671 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:00.674769 kubelet[1916]: I0702 08:04:00.674720 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe85cec-733a-401d-9467-ffe86bf0044b" path="/var/lib/kubelet/pods/ffe85cec-733a-401d-9467-ffe86bf0044b/volumes" Jul 2 08:04:00.874294 env[1433]: time="2024-07-02T08:04:00.874222295Z" level=info msg="CreateContainer within sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 08:04:00.948758 env[1433]: time="2024-07-02T08:04:00.948703552Z" level=info msg="CreateContainer within sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\"" Jul 2 08:04:00.949551 env[1433]: time="2024-07-02T08:04:00.949298955Z" level=info msg="StartContainer for \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\"" Jul 2 08:04:00.970252 systemd[1]: Started cri-containerd-5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5.scope. Jul 2 08:04:00.984889 systemd[1]: cri-containerd-5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5.scope: Deactivated successfully. 
Jul 2 08:04:01.009385 env[1433]: time="2024-07-02T08:04:01.009310123Z" level=info msg="shim disconnected" id=5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5 Jul 2 08:04:01.009385 env[1433]: time="2024-07-02T08:04:01.009382123Z" level=warning msg="cleaning up after shim disconnected" id=5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5 namespace=k8s.io Jul 2 08:04:01.009385 env[1433]: time="2024-07-02T08:04:01.009394523Z" level=info msg="cleaning up dead shim" Jul 2 08:04:01.016968 env[1433]: time="2024-07-02T08:04:01.016918869Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3557 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:04:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:04:01.017283 env[1433]: time="2024-07-02T08:04:01.017195570Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed" Jul 2 08:04:01.019071 env[1433]: time="2024-07-02T08:04:01.019020981Z" level=error msg="Failed to pipe stdout of container \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\"" error="reading from a closed fifo" Jul 2 08:04:01.019207 env[1433]: time="2024-07-02T08:04:01.019117282Z" level=error msg="Failed to pipe stderr of container \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\"" error="reading from a closed fifo" Jul 2 08:04:01.023730 env[1433]: time="2024-07-02T08:04:01.023686210Z" level=error msg="StartContainer for \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:04:01.023969 kubelet[1916]: E0702 08:04:01.023935 1916 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5" Jul 2 08:04:01.024459 kubelet[1916]: E0702 08:04:01.024432 1916 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:04:01.024459 kubelet[1916]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:04:01.024459 kubelet[1916]: rm /hostbin/cilium-mount Jul 2 08:04:01.024459 kubelet[1916]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcgw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bk4k2_kube-system(8a982a27-91ca-422a-be67-62f65349035b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:04:01.024701 kubelet[1916]: E0702 08:04:01.024478 1916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bk4k2" podUID="8a982a27-91ca-422a-be67-62f65349035b" Jul 2 08:04:01.576450 kubelet[1916]: E0702 08:04:01.576386 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:01.726621 systemd[1]: run-containerd-runc-k8s.io-5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5-runc.Y3xN0n.mount: Deactivated successfully. Jul 2 08:04:01.726752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5-rootfs.mount: Deactivated successfully. Jul 2 08:04:01.877940 kubelet[1916]: I0702 08:04:01.877324 1916 scope.go:117] "RemoveContainer" containerID="1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4" Jul 2 08:04:01.878085 env[1433]: time="2024-07-02T08:04:01.877599761Z" level=info msg="StopPodSandbox for \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\"" Jul 2 08:04:01.878085 env[1433]: time="2024-07-02T08:04:01.877686962Z" level=info msg="Container to stop \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:04:01.878085 env[1433]: time="2024-07-02T08:04:01.877707662Z" level=info msg="Container to stop \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:04:01.881798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079-shm.mount: Deactivated successfully. 
Jul 2 08:04:01.885930 env[1433]: time="2024-07-02T08:04:01.885886511Z" level=info msg="RemoveContainer for \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\"" Jul 2 08:04:01.895758 systemd[1]: cri-containerd-bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079.scope: Deactivated successfully. Jul 2 08:04:01.918780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079-rootfs.mount: Deactivated successfully. Jul 2 08:04:01.960406 env[1433]: time="2024-07-02T08:04:01.960349660Z" level=info msg="shim disconnected" id=bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079 Jul 2 08:04:01.960406 env[1433]: time="2024-07-02T08:04:01.960406861Z" level=warning msg="cleaning up after shim disconnected" id=bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079 namespace=k8s.io Jul 2 08:04:01.960649 env[1433]: time="2024-07-02T08:04:01.960419861Z" level=info msg="cleaning up dead shim" Jul 2 08:04:01.969600 env[1433]: time="2024-07-02T08:04:01.969559016Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3588 runtime=io.containerd.runc.v2\n" Jul 2 08:04:01.969882 env[1433]: time="2024-07-02T08:04:01.969850618Z" level=info msg="TearDown network for sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" successfully" Jul 2 08:04:01.969882 env[1433]: time="2024-07-02T08:04:01.969877018Z" level=info msg="StopPodSandbox for \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" returns successfully" Jul 2 08:04:02.004636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410101432.mount: Deactivated successfully. 
Jul 2 08:04:02.025686 env[1433]: time="2024-07-02T08:04:02.025628052Z" level=info msg="RemoveContainer for \"1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4\" returns successfully" Jul 2 08:04:02.089615 kubelet[1916]: I0702 08:04:02.089566 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-hostproc\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089627 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-xtables-lock\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089672 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-cilium-ipsec-secrets\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089702 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcgw8\" (UniqueName: \"kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-kube-api-access-qcgw8\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089726 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-lib-modules\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 
08:04:02.089747 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-bpf-maps\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089768 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-net\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089792 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-run\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089815 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-kernel\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.089863 kubelet[1916]: I0702 08:04:02.089841 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cni-path\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.089868 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-clustermesh-secrets\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" 
(UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.089897 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a982a27-91ca-422a-be67-62f65349035b-cilium-config-path\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.089923 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-etc-cni-netd\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.089950 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-cgroup\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.089980 1916 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-hubble-tls\") pod \"8a982a27-91ca-422a-be67-62f65349035b\" (UID: \"8a982a27-91ca-422a-be67-62f65349035b\") " Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.090338 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.090416 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-hostproc" (OuterVolumeSpecName: "hostproc") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.090440 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.090855 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.090914 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.091788 kubelet[1916]: I0702 08:04:02.090942 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cni-path" (OuterVolumeSpecName: "cni-path") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.095081 kubelet[1916]: I0702 08:04:02.095054 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.095246 kubelet[1916]: I0702 08:04:02.095228 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.096420 kubelet[1916]: I0702 08:04:02.096394 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:04:02.096563 kubelet[1916]: I0702 08:04:02.096417 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.096650 kubelet[1916]: I0702 08:04:02.096436 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:04:02.096726 kubelet[1916]: I0702 08:04:02.096502 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:04:02.103832 kubelet[1916]: I0702 08:04:02.103797 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:04:02.104043 kubelet[1916]: I0702 08:04:02.103987 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-kube-api-access-qcgw8" (OuterVolumeSpecName: "kube-api-access-qcgw8") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "kube-api-access-qcgw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:04:02.104168 kubelet[1916]: I0702 08:04:02.104142 1916 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a982a27-91ca-422a-be67-62f65349035b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a982a27-91ca-422a-be67-62f65349035b" (UID: "8a982a27-91ca-422a-be67-62f65349035b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190899 1916 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qcgw8\" (UniqueName: \"kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-kube-api-access-qcgw8\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190941 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-cilium-ipsec-secrets\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190952 1916 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-net\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190963 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-run\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190973 1916 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-lib-modules\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190984 1916 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-bpf-maps\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.190995 1916 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a982a27-91ca-422a-be67-62f65349035b-clustermesh-secrets\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191004 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a982a27-91ca-422a-be67-62f65349035b-cilium-config-path\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191014 1916 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-etc-cni-netd\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191024 1916 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cilium-cgroup\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191033 1916 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-host-proc-sys-kernel\") on node 
\"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191076 1916 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-cni-path\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191086 1916 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a982a27-91ca-422a-be67-62f65349035b-hubble-tls\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191097 1916 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-hostproc\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.192902 kubelet[1916]: I0702 08:04:02.191108 1916 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a982a27-91ca-422a-be67-62f65349035b-xtables-lock\") on node \"10.200.8.11\" DevicePath \"\"" Jul 2 08:04:02.577075 kubelet[1916]: E0702 08:04:02.577002 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:02.624706 kubelet[1916]: E0702 08:04:02.624637 1916 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:04:02.677595 systemd[1]: Removed slice kubepods-burstable-pod8a982a27_91ca_422a_be67_62f65349035b.slice. Jul 2 08:04:02.725429 systemd[1]: var-lib-kubelet-pods-8a982a27\x2d91ca\x2d422a\x2dbe67\x2d62f65349035b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqcgw8.mount: Deactivated successfully. 
Jul 2 08:04:02.725545 systemd[1]: var-lib-kubelet-pods-8a982a27\x2d91ca\x2d422a\x2dbe67\x2d62f65349035b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 08:04:02.725625 systemd[1]: var-lib-kubelet-pods-8a982a27\x2d91ca\x2d422a\x2dbe67\x2d62f65349035b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:04:02.725700 systemd[1]: var-lib-kubelet-pods-8a982a27\x2d91ca\x2d422a\x2dbe67\x2d62f65349035b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:04:02.776558 env[1433]: time="2024-07-02T08:04:02.776503306Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:02.783490 env[1433]: time="2024-07-02T08:04:02.783441747Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:02.787625 env[1433]: time="2024-07-02T08:04:02.787585471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:02.788038 env[1433]: time="2024-07-02T08:04:02.788002974Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:04:02.790495 env[1433]: time="2024-07-02T08:04:02.790456788Z" level=info msg="CreateContainer within sandbox \"8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:04:02.820246 env[1433]: time="2024-07-02T08:04:02.820191665Z" level=info msg="CreateContainer within sandbox \"8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d\"" Jul 2 08:04:02.820842 env[1433]: time="2024-07-02T08:04:02.820758968Z" level=info msg="StartContainer for \"90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d\"" Jul 2 08:04:02.850525 systemd[1]: Started cri-containerd-90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d.scope. Jul 2 08:04:02.881621 env[1433]: time="2024-07-02T08:04:02.881563529Z" level=info msg="StartContainer for \"90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d\" returns successfully" Jul 2 08:04:02.887126 kubelet[1916]: I0702 08:04:02.884481 1916 scope.go:117] "RemoveContainer" containerID="5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5" Jul 2 08:04:02.890548 env[1433]: time="2024-07-02T08:04:02.890502182Z" level=info msg="RemoveContainer for \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\"" Jul 2 08:04:02.904677 env[1433]: time="2024-07-02T08:04:02.904626266Z" level=info msg="RemoveContainer for \"5186fd9dfcd68f6cd52184c3a09cb068c34cc7116fd559779cab9b7d743e17c5\" returns successfully" Jul 2 08:04:02.907024 kubelet[1916]: I0702 08:04:02.906961 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-q77vl" podStartSLOduration=1.439638303 podStartE2EDuration="3.906943879s" podCreationTimestamp="2024-07-02 08:03:59 +0000 UTC" firstStartedPulling="2024-07-02 08:04:00.321720504 +0000 UTC m=+68.529931868" lastFinishedPulling="2024-07-02 08:04:02.78902598 +0000 UTC m=+70.997237444" observedRunningTime="2024-07-02 08:04:02.906661278 +0000 UTC m=+71.114872642" 
watchObservedRunningTime="2024-07-02 08:04:02.906943879 +0000 UTC m=+71.115155243" Jul 2 08:04:02.958174 kubelet[1916]: I0702 08:04:02.957923 1916 topology_manager.go:215] "Topology Admit Handler" podUID="1a07734e-ae09-4170-a83b-ae9f92d7bcba" podNamespace="kube-system" podName="cilium-m7lkk" Jul 2 08:04:02.958174 kubelet[1916]: E0702 08:04:02.957996 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a982a27-91ca-422a-be67-62f65349035b" containerName="mount-cgroup" Jul 2 08:04:02.958174 kubelet[1916]: I0702 08:04:02.958029 1916 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a982a27-91ca-422a-be67-62f65349035b" containerName="mount-cgroup" Jul 2 08:04:02.958174 kubelet[1916]: I0702 08:04:02.958039 1916 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a982a27-91ca-422a-be67-62f65349035b" containerName="mount-cgroup" Jul 2 08:04:02.958174 kubelet[1916]: E0702 08:04:02.958065 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a982a27-91ca-422a-be67-62f65349035b" containerName="mount-cgroup" Jul 2 08:04:02.963378 systemd[1]: Created slice kubepods-burstable-pod1a07734e_ae09_4170_a83b_ae9f92d7bcba.slice. 
Jul 2 08:04:03.095707 kubelet[1916]: I0702 08:04:03.095648 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a07734e-ae09-4170-a83b-ae9f92d7bcba-hubble-tls\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095707 kubelet[1916]: I0702 08:04:03.095707 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwk9f\" (UniqueName: \"kubernetes.io/projected/1a07734e-ae09-4170-a83b-ae9f92d7bcba-kube-api-access-gwk9f\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095742 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-hostproc\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095766 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-etc-cni-netd\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095788 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a07734e-ae09-4170-a83b-ae9f92d7bcba-clustermesh-secrets\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095813 1916 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-host-proc-sys-kernel\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095839 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-xtables-lock\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095866 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-host-proc-sys-net\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095894 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-cilium-run\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095917 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-bpf-maps\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095948 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1a07734e-ae09-4170-a83b-ae9f92d7bcba-cilium-config-path\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.095983 kubelet[1916]: I0702 08:04:03.095979 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a07734e-ae09-4170-a83b-ae9f92d7bcba-cilium-ipsec-secrets\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.096541 kubelet[1916]: I0702 08:04:03.096005 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-cilium-cgroup\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.096541 kubelet[1916]: I0702 08:04:03.096033 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-cni-path\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.096541 kubelet[1916]: I0702 08:04:03.096064 1916 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a07734e-ae09-4170-a83b-ae9f92d7bcba-lib-modules\") pod \"cilium-m7lkk\" (UID: \"1a07734e-ae09-4170-a83b-ae9f92d7bcba\") " pod="kube-system/cilium-m7lkk" Jul 2 08:04:03.272741 env[1433]: time="2024-07-02T08:04:03.272688822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7lkk,Uid:1a07734e-ae09-4170-a83b-ae9f92d7bcba,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:03.312068 env[1433]: time="2024-07-02T08:04:03.311969051Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:03.312068 env[1433]: time="2024-07-02T08:04:03.312020952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:03.312068 env[1433]: time="2024-07-02T08:04:03.312039752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:03.312582 env[1433]: time="2024-07-02T08:04:03.312535655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0 pid=3654 runtime=io.containerd.runc.v2 Jul 2 08:04:03.325097 systemd[1]: Started cri-containerd-aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0.scope. Jul 2 08:04:03.349303 env[1433]: time="2024-07-02T08:04:03.349248069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7lkk,Uid:1a07734e-ae09-4170-a83b-ae9f92d7bcba,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\"" Jul 2 08:04:03.352204 env[1433]: time="2024-07-02T08:04:03.352162986Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:04:03.401373 env[1433]: time="2024-07-02T08:04:03.401320473Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f\"" Jul 2 08:04:03.401931 env[1433]: time="2024-07-02T08:04:03.401877976Z" level=info msg="StartContainer for \"6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f\"" Jul 2 
08:04:03.418669 systemd[1]: Started cri-containerd-6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f.scope. Jul 2 08:04:03.451735 env[1433]: time="2024-07-02T08:04:03.451688567Z" level=info msg="StartContainer for \"6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f\" returns successfully" Jul 2 08:04:03.452764 systemd[1]: cri-containerd-6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f.scope: Deactivated successfully. Jul 2 08:04:03.938123 kubelet[1916]: E0702 08:04:03.577645 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:03.938123 kubelet[1916]: W0702 08:04:03.600445 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a982a27_91ca_422a_be67_62f65349035b.slice/cri-containerd-1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4.scope WatchSource:0}: container "1e7c5d6860389a5f61c868e35f3e3a49e0d67e13ec368d886b6fdb3dcc95fcf4" in namespace "k8s.io": not found Jul 2 08:04:03.938123 kubelet[1916]: I0702 08:04:03.842045 1916 setters.go:580] "Node became not ready" node="10.200.8.11" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:04:03Z","lastTransitionTime":"2024-07-02T08:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 08:04:03.734010 systemd[1]: run-containerd-runc-k8s.io-90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d-runc.NtAnIB.mount: Deactivated successfully. 
Jul 2 08:04:03.956030 env[1433]: time="2024-07-02T08:04:03.955346305Z" level=info msg="shim disconnected" id=6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f Jul 2 08:04:03.956030 env[1433]: time="2024-07-02T08:04:03.955405705Z" level=warning msg="cleaning up after shim disconnected" id=6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f namespace=k8s.io Jul 2 08:04:03.956030 env[1433]: time="2024-07-02T08:04:03.955417805Z" level=info msg="cleaning up dead shim" Jul 2 08:04:03.961160 kubelet[1916]: E0702 08:04:03.961105 1916 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a07734e_ae09_4170_a83b_ae9f92d7bcba.slice/cri-containerd-6ecc105f98978d921dfc98b10cf615fbe896809057fab4bf646abc4da78c5d1f.scope\": RecentStats: unable to find data in memory cache]" Jul 2 08:04:03.965997 env[1433]: time="2024-07-02T08:04:03.965951667Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3741 runtime=io.containerd.runc.v2\n" Jul 2 08:04:04.578769 kubelet[1916]: E0702 08:04:04.578701 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:04.673649 kubelet[1916]: I0702 08:04:04.673603 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a982a27-91ca-422a-be67-62f65349035b" path="/var/lib/kubelet/pods/8a982a27-91ca-422a-be67-62f65349035b/volumes" Jul 2 08:04:04.903180 env[1433]: time="2024-07-02T08:04:04.902881047Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:04:04.940049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164541951.mount: Deactivated successfully. 
Jul 2 08:04:04.963987 env[1433]: time="2024-07-02T08:04:04.963942498Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367\""
Jul 2 08:04:04.964800 env[1433]: time="2024-07-02T08:04:04.964765802Z" level=info msg="StartContainer for \"76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367\""
Jul 2 08:04:04.984772 systemd[1]: Started cri-containerd-76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367.scope.
Jul 2 08:04:05.022824 env[1433]: time="2024-07-02T08:04:05.022777633Z" level=info msg="StartContainer for \"76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367\" returns successfully"
Jul 2 08:04:05.023620 systemd[1]: cri-containerd-76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367.scope: Deactivated successfully.
Jul 2 08:04:05.054669 env[1433]: time="2024-07-02T08:04:05.054610913Z" level=info msg="shim disconnected" id=76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367
Jul 2 08:04:05.054669 env[1433]: time="2024-07-02T08:04:05.054665913Z" level=warning msg="cleaning up after shim disconnected" id=76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367 namespace=k8s.io
Jul 2 08:04:05.054972 env[1433]: time="2024-07-02T08:04:05.054677513Z" level=info msg="cleaning up dead shim"
Jul 2 08:04:05.062470 env[1433]: time="2024-07-02T08:04:05.062428457Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\n"
Jul 2 08:04:05.579398 kubelet[1916]: E0702 08:04:05.579335 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:05.906690 env[1433]: time="2024-07-02T08:04:05.906574524Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:04:05.938831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76007b0a38ae215c94bdde54baf302e3c70029a11836f377f9f6a20d3c806367-rootfs.mount: Deactivated successfully.
Jul 2 08:04:05.956042 env[1433]: time="2024-07-02T08:04:05.955993903Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f\""
Jul 2 08:04:05.956625 env[1433]: time="2024-07-02T08:04:05.956596707Z" level=info msg="StartContainer for \"fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f\""
Jul 2 08:04:05.987361 systemd[1]: Started cri-containerd-fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f.scope.
Jul 2 08:04:06.020634 systemd[1]: cri-containerd-fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f.scope: Deactivated successfully.
Jul 2 08:04:06.027279 env[1433]: time="2024-07-02T08:04:06.027227503Z" level=info msg="StartContainer for \"fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f\" returns successfully"
Jul 2 08:04:06.057195 env[1433]: time="2024-07-02T08:04:06.057143470Z" level=info msg="shim disconnected" id=fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f
Jul 2 08:04:06.057195 env[1433]: time="2024-07-02T08:04:06.057191970Z" level=warning msg="cleaning up after shim disconnected" id=fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f namespace=k8s.io
Jul 2 08:04:06.057195 env[1433]: time="2024-07-02T08:04:06.057202870Z" level=info msg="cleaning up dead shim"
Jul 2 08:04:06.065966 env[1433]: time="2024-07-02T08:04:06.065917119Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3858 runtime=io.containerd.runc.v2\n"
Jul 2 08:04:06.580081 kubelet[1916]: E0702 08:04:06.580020 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:06.911886 env[1433]: time="2024-07-02T08:04:06.911759920Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 08:04:06.936616 systemd[1]: run-containerd-runc-k8s.io-fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f-runc.roKyCY.mount: Deactivated successfully.
Jul 2 08:04:06.936745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fae0a2a785d5e10737f38a86897562afa46f6effbea357e6038a9d140891037f-rootfs.mount: Deactivated successfully.
Jul 2 08:04:06.980756 env[1433]: time="2024-07-02T08:04:06.980699403Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845\""
Jul 2 08:04:06.981821 env[1433]: time="2024-07-02T08:04:06.981781109Z" level=info msg="StartContainer for \"e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845\""
Jul 2 08:04:07.009333 systemd[1]: Started cri-containerd-e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845.scope.
Jul 2 08:04:07.038417 systemd[1]: cri-containerd-e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845.scope: Deactivated successfully.
Jul 2 08:04:07.039469 env[1433]: time="2024-07-02T08:04:07.039385926Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a07734e_ae09_4170_a83b_ae9f92d7bcba.slice/cri-containerd-e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845.scope/memory.events\": no such file or directory"
Jul 2 08:04:07.045041 env[1433]: time="2024-07-02T08:04:07.044998757Z" level=info msg="StartContainer for \"e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845\" returns successfully"
Jul 2 08:04:07.075670 env[1433]: time="2024-07-02T08:04:07.075609224Z" level=info msg="shim disconnected" id=e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845
Jul 2 08:04:07.075670 env[1433]: time="2024-07-02T08:04:07.075664625Z" level=warning msg="cleaning up after shim disconnected" id=e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845 namespace=k8s.io
Jul 2 08:04:07.075670 env[1433]: time="2024-07-02T08:04:07.075676925Z" level=info msg="cleaning up dead shim"
Jul 2 08:04:07.083823 env[1433]: time="2024-07-02T08:04:07.083779069Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3916 runtime=io.containerd.runc.v2\n"
Jul 2 08:04:07.580815 kubelet[1916]: E0702 08:04:07.580755 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:07.625635 kubelet[1916]: E0702 08:04:07.625589 1916 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:04:07.917584 env[1433]: time="2024-07-02T08:04:07.917454131Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 08:04:07.936892 systemd[1]: run-containerd-runc-k8s.io-e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845-runc.V7B806.mount: Deactivated successfully.
Jul 2 08:04:07.937035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1416649ea394b9d5d7079fbffaeb5ed9ec52884d37d9df02ac1470866d23845-rootfs.mount: Deactivated successfully.
Jul 2 08:04:07.966229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129275559.mount: Deactivated successfully.
Jul 2 08:04:07.983057 env[1433]: time="2024-07-02T08:04:07.983008790Z" level=info msg="CreateContainer within sandbox \"aaa789a105010db17dc0ecbc7df9ebe1d1bb17f5982c704fd7561e68eede0da0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97bb79d1e25dd8fe13dbf83119b2d703c69c61a73a6795022d7c2c7644037e97\""
Jul 2 08:04:07.983770 env[1433]: time="2024-07-02T08:04:07.983652893Z" level=info msg="StartContainer for \"97bb79d1e25dd8fe13dbf83119b2d703c69c61a73a6795022d7c2c7644037e97\""
Jul 2 08:04:08.002534 systemd[1]: Started cri-containerd-97bb79d1e25dd8fe13dbf83119b2d703c69c61a73a6795022d7c2c7644037e97.scope.
Jul 2 08:04:08.035021 env[1433]: time="2024-07-02T08:04:08.034973271Z" level=info msg="StartContainer for \"97bb79d1e25dd8fe13dbf83119b2d703c69c61a73a6795022d7c2c7644037e97\" returns successfully"
Jul 2 08:04:08.450341 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 08:04:08.581349 kubelet[1916]: E0702 08:04:08.581287 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:08.932851 kubelet[1916]: I0702 08:04:08.932804 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m7lkk" podStartSLOduration=6.93278711 podStartE2EDuration="6.93278711s" podCreationTimestamp="2024-07-02 08:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:08.932684109 +0000 UTC m=+77.140895473" watchObservedRunningTime="2024-07-02 08:04:08.93278711 +0000 UTC m=+77.140998474"
Jul 2 08:04:09.581890 kubelet[1916]: E0702 08:04:09.581823 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:10.582626 kubelet[1916]: E0702 08:04:10.582584 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:10.984586 systemd-networkd[1580]: lxc_health: Link UP
Jul 2 08:04:11.014189 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 08:04:11.013568 systemd-networkd[1580]: lxc_health: Gained carrier
Jul 2 08:04:11.583783 kubelet[1916]: E0702 08:04:11.583730 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:12.530055 kubelet[1916]: E0702 08:04:12.530009 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:12.584603 kubelet[1916]: E0702 08:04:12.584563 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:12.622674 systemd-networkd[1580]: lxc_health: Gained IPv6LL
Jul 2 08:04:13.586009 kubelet[1916]: E0702 08:04:13.585965 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:14.230423 update_engine[1425]: I0702 08:04:14.230371 1425 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 2 08:04:14.230423 update_engine[1425]: I0702 08:04:14.230423 1425 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 2 08:04:14.230900 update_engine[1425]: I0702 08:04:14.230585 1425 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 2 08:04:14.231205 update_engine[1425]: I0702 08:04:14.231130 1425 omaha_request_params.cc:62] Current group set to lts
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231491 1425 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231505 1425 update_attempter.cc:643] Scheduling an action processor start.
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231523 1425 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231556 1425 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231621 1425 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231627 1425 omaha_request_action.cc:271] Request:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]:
Jul 2 08:04:14.231706 update_engine[1425]: I0702 08:04:14.231633 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:04:14.232354 locksmithd[1527]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 2 08:04:14.328778 update_engine[1425]: I0702 08:04:14.328426 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:04:14.328778 update_engine[1425]: I0702 08:04:14.328724 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 08:04:14.375233 update_engine[1425]: E0702 08:04:14.375191 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:04:14.375597 update_engine[1425]: I0702 08:04:14.375571 1425 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 2 08:04:14.587085 kubelet[1916]: E0702 08:04:14.586999 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:15.284619 systemd[1]: run-containerd-runc-k8s.io-97bb79d1e25dd8fe13dbf83119b2d703c69c61a73a6795022d7c2c7644037e97-runc.2FBirp.mount: Deactivated successfully.
Jul 2 08:04:15.588234 kubelet[1916]: E0702 08:04:15.588072 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:16.588643 kubelet[1916]: E0702 08:04:16.588586 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:17.588797 kubelet[1916]: E0702 08:04:17.588747 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:18.589319 kubelet[1916]: E0702 08:04:18.589240 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:19.590427 kubelet[1916]: E0702 08:04:19.590360 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:20.591172 kubelet[1916]: E0702 08:04:20.591111 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:21.591577 kubelet[1916]: E0702 08:04:21.591520 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:22.592240 kubelet[1916]: E0702 08:04:22.592180 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:23.592661 kubelet[1916]: E0702 08:04:23.592595 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:24.233477 update_engine[1425]: I0702 08:04:24.233408 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:04:24.233930 update_engine[1425]: I0702 08:04:24.233717 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:04:24.233994 update_engine[1425]: I0702 08:04:24.233958 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 08:04:24.256805 update_engine[1425]: E0702 08:04:24.256748 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:04:24.256996 update_engine[1425]: I0702 08:04:24.256899 1425 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 2 08:04:24.593320 kubelet[1916]: E0702 08:04:24.593244 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:25.594216 kubelet[1916]: E0702 08:04:25.594154 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:26.595032 kubelet[1916]: E0702 08:04:26.594965 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:27.595197 kubelet[1916]: E0702 08:04:27.595121 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:28.595655 kubelet[1916]: E0702 08:04:28.595592 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:29.595860 kubelet[1916]: E0702 08:04:29.595802 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:30.596549 kubelet[1916]: E0702 08:04:30.596489 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:31.597285 kubelet[1916]: E0702 08:04:31.597208 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:32.529988 kubelet[1916]: E0702 08:04:32.529926 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:32.597661 kubelet[1916]: E0702 08:04:32.597599 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:32.974786 systemd[1]: cri-containerd-90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d.scope: Deactivated successfully.
Jul 2 08:04:32.998448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d-rootfs.mount: Deactivated successfully.
Jul 2 08:04:33.052593 env[1433]: time="2024-07-02T08:04:33.052515190Z" level=info msg="shim disconnected" id=90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d
Jul 2 08:04:33.052593 env[1433]: time="2024-07-02T08:04:33.052585890Z" level=warning msg="cleaning up after shim disconnected" id=90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d namespace=k8s.io
Jul 2 08:04:33.052593 env[1433]: time="2024-07-02T08:04:33.052602490Z" level=info msg="cleaning up dead shim"
Jul 2 08:04:33.061768 env[1433]: time="2024-07-02T08:04:33.061725326Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4611 runtime=io.containerd.runc.v2\n"
Jul 2 08:04:33.277503 kubelet[1916]: E0702 08:04:33.277466 1916 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.15:58176->10.200.8.22:2379: read: connection timed out"
Jul 2 08:04:33.598653 kubelet[1916]: E0702 08:04:33.598520 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:33.969012 kubelet[1916]: I0702 08:04:33.968562 1916 scope.go:117] "RemoveContainer" containerID="90b9bb7ba41be54f61482cc32807dfc07c273e5fba0d76a58aee13435294288d"
Jul 2 08:04:33.971073 env[1433]: time="2024-07-02T08:04:33.971028738Z" level=info msg="CreateContainer within sandbox \"8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Jul 2 08:04:34.016138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744481839.mount: Deactivated successfully.
Jul 2 08:04:34.031722 env[1433]: time="2024-07-02T08:04:34.031679277Z" level=info msg="CreateContainer within sandbox \"8b60d90dd8041a51c6ee345fbf06ba265674146118f4bfcc4efbc05dc477441c\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"a02d1da79ccfdd6f9f66ad08a529fab6899e7886dec8b10d784bc0c3cc7caf9f\""
Jul 2 08:04:34.032061 env[1433]: time="2024-07-02T08:04:34.032030979Z" level=info msg="StartContainer for \"a02d1da79ccfdd6f9f66ad08a529fab6899e7886dec8b10d784bc0c3cc7caf9f\""
Jul 2 08:04:34.055690 systemd[1]: Started cri-containerd-a02d1da79ccfdd6f9f66ad08a529fab6899e7886dec8b10d784bc0c3cc7caf9f.scope.
Jul 2 08:04:34.093367 env[1433]: time="2024-07-02T08:04:34.093319920Z" level=info msg="StartContainer for \"a02d1da79ccfdd6f9f66ad08a529fab6899e7886dec8b10d784bc0c3cc7caf9f\" returns successfully"
Jul 2 08:04:34.221995 kubelet[1916]: E0702 08:04:34.221835 1916 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T08:04:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T08:04:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T08:04:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T08:04:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":70999878},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\\\",\\\"registry.k8s.io/kube-proxy:v1.30.2\\\"],\\\"sizeBytes\\\":29034457},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"10.200.8.11\": Patch \"https://10.200.8.15:6443/api/v1/nodes/10.200.8.11/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 08:04:34.224568 update_engine[1425]: I0702 08:04:34.224524 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:04:34.224995 update_engine[1425]: I0702 08:04:34.224818 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:04:34.225088 update_engine[1425]: I0702 08:04:34.225059 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 08:04:34.283212 update_engine[1425]: E0702 08:04:34.283159 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:04:34.283400 update_engine[1425]: I0702 08:04:34.283319 1425 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 2 08:04:34.491244 kubelet[1916]: E0702 08:04:34.491093 1916 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.200.8.11\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.15:58084->10.200.8.22:2379: read: connection timed out"
Jul 2 08:04:34.599177 kubelet[1916]: E0702 08:04:34.599135 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:35.600092 kubelet[1916]: E0702 08:04:35.600029 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:36.600849 kubelet[1916]: E0702 08:04:36.600786 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:36.795727 kubelet[1916]: E0702 08:04:36.795581 1916 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.15:57996->10.200.8.22:2379: read: connection timed out" event="&Event{ObjectMeta:{cilium-operator-599987898-q77vl.17de56bc307e3bb4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-599987898-q77vl,UID:d9a27c97-7891-4e03-acaf-7d7526037072,APIVersion:v1,ResourceVersion:1092,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:10.200.8.11,},FirstTimestamp:2024-07-02 08:04:33.969527732 +0000 UTC m=+102.177739196,LastTimestamp:2024-07-02 08:04:33.969527732 +0000 UTC m=+102.177739196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.11,}"
Jul 2 08:04:37.601870 kubelet[1916]: E0702 08:04:37.601814 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:38.602233 kubelet[1916]: E0702 08:04:38.602173 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:39.602416 kubelet[1916]: E0702 08:04:39.602362 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:40.603123 kubelet[1916]: E0702 08:04:40.603066 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:41.603514 kubelet[1916]: E0702 08:04:41.603450 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:42.604283 kubelet[1916]: E0702 08:04:42.604216 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:43.278278 kubelet[1916]: E0702 08:04:43.278200 1916 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 08:04:43.605237 kubelet[1916]: E0702 08:04:43.605102 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 08:04:43.707024 kubelet[1916]: I0702 08:04:43.706969 1916 status_manager.go:853] "Failed to get status for pod" podUID="d9a27c97-7891-4e03-acaf-7d7526037072" pod="kube-system/cilium-operator-599987898-q77vl" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.15:58094->10.200.8.22:2379: read: connection timed out"
Jul 2 08:04:44.234491 update_engine[1425]: I0702 08:04:44.234411 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:04:44.235007 update_engine[1425]: I0702 08:04:44.234761 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:04:44.235078 update_engine[1425]: I0702 08:04:44.235038 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 08:04:44.361008 update_engine[1425]: E0702 08:04:44.360930 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:04:44.362990 update_engine[1425]: I0702 08:04:44.361091 1425 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 08:04:44.362990 update_engine[1425]: I0702 08:04:44.361107 1425 omaha_request_action.cc:621] Omaha request response:
Jul 2 08:04:44.362990 update_engine[1425]: E0702 08:04:44.362954 1425 omaha_request_action.cc:640] Omaha request network transfer failed.
Jul 2 08:04:44.362990 update_engine[1425]: I0702 08:04:44.362980 1425 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 2 08:04:44.362990 update_engine[1425]: I0702 08:04:44.362987 1425 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 08:04:44.362990 update_engine[1425]: I0702 08:04:44.362993 1425 update_attempter.cc:306] Processing Done.
Jul 2 08:04:44.363347 update_engine[1425]: E0702 08:04:44.363013 1425 update_attempter.cc:619] Update failed.
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363019 1425 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363025 1425 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363032 1425 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363135 1425 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363163 1425 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363170 1425 omaha_request_action.cc:271] Request:
Jul 2 08:04:44.363347 update_engine[1425]:
Jul 2 08:04:44.363347 update_engine[1425]:
Jul 2 08:04:44.363347 update_engine[1425]:
Jul 2 08:04:44.363347 update_engine[1425]:
Jul 2 08:04:44.363347 update_engine[1425]:
Jul 2 08:04:44.363347 update_engine[1425]:
Jul 2 08:04:44.363347 update_engine[1425]: I0702 08:04:44.363178 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 08:04:44.363982 update_engine[1425]: I0702 08:04:44.363436 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 08:04:44.363982 update_engine[1425]: I0702 08:04:44.363673 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 2 08:04:44.364083 locksmithd[1527]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 2 08:04:44.379976 update_engine[1425]: E0702 08:04:44.379937 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380048 1425 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380060 1425 omaha_request_action.cc:621] Omaha request response:
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380069 1425 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380074 1425 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380079 1425 update_attempter.cc:306] Processing Done.
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380085 1425 update_attempter.cc:310] Error event sent.
Jul 2 08:04:44.380111 update_engine[1425]: I0702 08:04:44.380095 1425 update_check_scheduler.cc:74] Next update check in 41m36s Jul 2 08:04:44.380559 locksmithd[1527]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 08:04:44.491906 kubelet[1916]: E0702 08:04:44.491635 1916 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.200.8.11\": Get \"https://10.200.8.15:6443/api/v1/nodes/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:04:44.606030 kubelet[1916]: E0702 08:04:44.605976 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:45.607079 kubelet[1916]: E0702 08:04:45.607019 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:46.608241 kubelet[1916]: E0702 08:04:46.608182 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:47.608419 kubelet[1916]: E0702 08:04:47.608360 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:48.611743 kubelet[1916]: E0702 08:04:48.611703 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:49.612562 kubelet[1916]: E0702 08:04:49.612509 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:50.613159 kubelet[1916]: E0702 08:04:50.613049 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:51.613219 kubelet[1916]: E0702 08:04:51.613161 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 08:04:52.529903 kubelet[1916]: E0702 08:04:52.529846 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:52.569938 env[1433]: time="2024-07-02T08:04:52.569881728Z" level=info msg="StopPodSandbox for \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\"" Jul 2 08:04:52.570393 env[1433]: time="2024-07-02T08:04:52.570016628Z" level=info msg="TearDown network for sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" successfully" Jul 2 08:04:52.570393 env[1433]: time="2024-07-02T08:04:52.570077029Z" level=info msg="StopPodSandbox for \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" returns successfully" Jul 2 08:04:52.570836 env[1433]: time="2024-07-02T08:04:52.570803131Z" level=info msg="RemovePodSandbox for \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\"" Jul 2 08:04:52.570964 env[1433]: time="2024-07-02T08:04:52.570836031Z" level=info msg="Forcibly stopping sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\"" Jul 2 08:04:52.570964 env[1433]: time="2024-07-02T08:04:52.570922432Z" level=info msg="TearDown network for sandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" successfully" Jul 2 08:04:52.582680 env[1433]: time="2024-07-02T08:04:52.582639872Z" level=info msg="RemovePodSandbox \"b6a61d0aeb567631f901fe642c3372704268cb0adae4e3ac43fe1058760df86b\" returns successfully" Jul 2 08:04:52.583141 env[1433]: time="2024-07-02T08:04:52.583101774Z" level=info msg="StopPodSandbox for \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\"" Jul 2 08:04:52.583242 env[1433]: time="2024-07-02T08:04:52.583182674Z" level=info msg="TearDown network for sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" successfully" Jul 2 08:04:52.583242 env[1433]: time="2024-07-02T08:04:52.583224674Z" level=info 
msg="StopPodSandbox for \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" returns successfully" Jul 2 08:04:52.583575 env[1433]: time="2024-07-02T08:04:52.583540675Z" level=info msg="RemovePodSandbox for \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\"" Jul 2 08:04:52.583647 env[1433]: time="2024-07-02T08:04:52.583566575Z" level=info msg="Forcibly stopping sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\"" Jul 2 08:04:52.583696 env[1433]: time="2024-07-02T08:04:52.583644275Z" level=info msg="TearDown network for sandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" successfully" Jul 2 08:04:52.592444 env[1433]: time="2024-07-02T08:04:52.592392705Z" level=info msg="RemovePodSandbox \"bfe9f9a125e538c914c8586c06bd750fa4a2b63588e6b7732e89bdc668aff079\" returns successfully" Jul 2 08:04:52.613998 kubelet[1916]: E0702 08:04:52.613952 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:53.279528 kubelet[1916]: E0702 08:04:53.279460 1916 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:04:53.614841 kubelet[1916]: E0702 08:04:53.614700 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:54.492246 kubelet[1916]: E0702 08:04:54.492200 1916 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.200.8.11\": Get \"https://10.200.8.15:6443/api/v1/nodes/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:04:54.615512 kubelet[1916]: E0702 08:04:54.615449 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:55.615871 kubelet[1916]: E0702 08:04:55.615802 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:56.616643 kubelet[1916]: E0702 08:04:56.616586 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:57.617381 kubelet[1916]: E0702 08:04:57.617320 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:58.618495 kubelet[1916]: E0702 08:04:58.618431 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:04:59.618637 kubelet[1916]: E0702 08:04:59.618577 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:00.619147 kubelet[1916]: E0702 08:05:00.619086 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:01.619657 kubelet[1916]: E0702 08:05:01.619595 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:02.620626 kubelet[1916]: E0702 08:05:02.620568 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:03.279728 kubelet[1916]: E0702 08:05:03.279679 1916 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.11?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 2 08:05:03.621743 kubelet[1916]: E0702 08:05:03.621448 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:04.492722 
kubelet[1916]: E0702 08:05:04.492663 1916 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.200.8.11\": Get \"https://10.200.8.15:6443/api/v1/nodes/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:05:04.492722 kubelet[1916]: E0702 08:05:04.492707 1916 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Jul 2 08:05:04.621744 kubelet[1916]: E0702 08:05:04.621683 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:05.622115 kubelet[1916]: E0702 08:05:05.622052 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:06.623294 kubelet[1916]: E0702 08:05:06.623193 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:07.624070 kubelet[1916]: E0702 08:05:07.624009 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:08.624944 kubelet[1916]: E0702 08:05:08.624885 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:09.625248 kubelet[1916]: E0702 08:05:09.625168 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:10.625630 kubelet[1916]: E0702 08:05:10.625572 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:11.626398 kubelet[1916]: E0702 08:05:11.626345 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:12.530337 kubelet[1916]: E0702 08:05:12.530283 1916 file.go:104] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:12.626831 kubelet[1916]: E0702 08:05:12.626781 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:13.280186 kubelet[1916]: E0702 08:05:13.280123 1916 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:05:13.280186 kubelet[1916]: I0702 08:05:13.280178 1916 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 2 08:05:13.627968 kubelet[1916]: E0702 08:05:13.627729 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:14.628597 kubelet[1916]: E0702 08:05:14.628540 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:15.628792 kubelet[1916]: E0702 08:05:15.628731 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:16.629683 kubelet[1916]: E0702 08:05:16.629633 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:17.630613 kubelet[1916]: E0702 08:05:17.630554 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:18.631780 kubelet[1916]: E0702 08:05:18.631725 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:19.632721 kubelet[1916]: E0702 08:05:19.632664 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 08:05:20.633737 kubelet[1916]: E0702 08:05:20.633682 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:21.634064 kubelet[1916]: E0702 08:05:21.634008 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:22.634197 kubelet[1916]: E0702 08:05:22.634156 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:23.280964 kubelet[1916]: E0702 08:05:23.280900 1916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.11?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Jul 2 08:05:23.635103 kubelet[1916]: E0702 08:05:23.634959 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:24.618085 kubelet[1916]: E0702 08:05:24.618034 1916 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.200.8.11\": Get \"https://10.200.8.15:6443/api/v1/nodes/10.200.8.11?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 08:05:24.635234 kubelet[1916]: E0702 08:05:24.635179 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:25.635867 kubelet[1916]: E0702 08:05:25.635808 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:25.975294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:25.975737 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:25.985903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:25.986190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:25.996977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:25.997302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.008340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.008591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.027499 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.027772 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.039680 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.039935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.051276 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.051512 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.063147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.063405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
08:05:26.088255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.088540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.088680 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.099638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.099865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.111459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.111700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.119271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.136677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.136936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.147678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.147896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.159217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.159468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.170740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 08:05:26.170960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.190401 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.190676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.201217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.201452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.212073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.212284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.223103 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.223330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.242146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.242420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.253213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.253450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.264311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.264533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.275161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.275366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.286258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.286478 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.296960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.297162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.307801 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.308000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.318671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.318854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.335566 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.335786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.335924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.346292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.346496 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.357274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.357484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.368101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.374320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.374522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.385062 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.385271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.395833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.396027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.406900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.407096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.423709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.423906 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.424037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
08:05:26.434876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.435068 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.445716 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.445902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.456409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.462496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.462692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.473146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.473345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.483894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.484086 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.494588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.494772 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.511389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.511599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 08:05:26.511735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.521981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.522184 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.532969 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.533185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.543921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.549984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.550189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.560808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.560998 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.571746 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.571949 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.582707 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.582903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.599230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.599432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.599564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.610318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.610522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.621226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.621431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.632072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.636252 kubelet[1916]: E0702 08:05:26.636191 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:05:26.638295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.638492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.649374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.649569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.660431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.660631 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
08:05:26.673320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.673524 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.688413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.688625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.688771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.699039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.699237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.709884 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.710094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.720529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.726642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.726852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.737321 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.737518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 08:05:26.747995 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001
Jul 2 08:05:26.748201 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.758966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.759163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.775323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.775534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.775673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.786194 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 08:05:26.786420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[... identical hv_storvsc WRITE (cmd 0x2a) errors (scsi 0x2 srb 0x4 hv 0xc0000001), cycling through tag#279–tag#286, repeated continuously from Jul 2 08:05:26.797076 to Jul 2 08:05:27.626877 ...]
Jul 2 08:05:27.636864 kubelet[1916]: E0702 08:05:27.636801 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
[... identical hv_storvsc tag#279–tag#286 errors continue from Jul 2 08:05:27.637823 to Jul 2 08:05:28.119204 ...]
Jul 2 08:05:28.119354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001