Feb 9 19:04:25.061887 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:04:25.061913 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:25.061923 kernel: BIOS-provided physical RAM map:
Feb 9 19:04:25.061929 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:04:25.061936 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:04:25.061943 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:04:25.061952 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:04:25.061960 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:04:25.061966 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:04:25.061972 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:04:25.061980 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:04:25.061986 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:04:25.061993 kernel: NX (Execute Disable) protection: active
Feb 9 19:04:25.062000 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:04:25.062009 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:04:25.062019 kernel: random: crng init done
Feb 9 19:04:25.062025 kernel: SMBIOS 3.1.0 present.
Feb 9 19:04:25.062033 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:04:25.062040 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:04:25.062046 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:04:25.062054 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:04:25.062061 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:04:25.062070 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:04:25.062079 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:04:25.062085 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:04:25.062091 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:04:25.062101 kernel: tsc: Detected 2593.905 MHz processor
Feb 9 19:04:25.062107 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:04:25.062117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:04:25.062123 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:04:25.062129 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:04:25.062138 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:04:25.062147 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:04:25.062156 kernel: Using GB pages for direct mapping
Feb 9 19:04:25.062162 kernel: Secure boot disabled
Feb 9 19:04:25.062169 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:04:25.062178 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:04:25.062184 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062194 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062200 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:04:25.062213 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:04:25.062221 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062234 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062240 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062247 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062257 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062266 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062275 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:04:25.062282 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:04:25.062289 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:04:25.062299 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:04:25.062306 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:04:25.062315 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:04:25.062322 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:04:25.062332 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:04:25.062340 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:04:25.062348 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:04:25.062357 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:04:25.062363 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:04:25.062372 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:04:25.062380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:04:25.062388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:04:25.062396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:04:25.062405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:04:25.062415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:04:25.062422 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:04:25.062431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:04:25.062438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:04:25.062445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:04:25.062455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:04:25.062462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:04:25.062471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:04:25.062480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:04:25.062489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:04:25.062497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:04:25.062504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:04:25.062513 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:04:25.062520 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:04:25.062528 kernel: Zone ranges:
Feb 9 19:04:25.062536 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:04:25.062544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:04:25.062555 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:04:25.062561 kernel: Movable zone start for each node
Feb 9 19:04:25.062570 kernel: Early memory node ranges
Feb 9 19:04:25.062578 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:04:25.062586 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:04:25.062594 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:04:25.062601 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:04:25.062609 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:04:25.062625 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:04:25.062635 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:04:25.062642 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:04:25.062652 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:04:25.062659 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:04:25.062668 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:04:25.062676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:04:25.062682 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:04:25.062692 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:04:25.062699 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:04:25.062710 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:04:25.062717 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:04:25.062724 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:04:25.062734 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:04:25.062741 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:04:25.062751 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:04:25.062757 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:04:25.062764 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:04:25.062774 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:04:25.062784 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:04:25.062792 kernel: Policy zone: Normal
Feb 9 19:04:25.062800 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:25.062811 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:04:25.062818 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:04:25.062827 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:04:25.062834 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:04:25.062841 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 9 19:04:25.062853 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:04:25.062860 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:04:25.062877 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:04:25.062889 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:04:25.062897 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:04:25.062907 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:04:25.062915 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:04:25.062923 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:04:25.062932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:04:25.062941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:04:25.062950 kernel: Using NULL legacy PIC
Feb 9 19:04:25.062959 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:04:25.062970 kernel: Console: colour dummy device 80x25
Feb 9 19:04:25.062977 kernel: printk: console [tty1] enabled
Feb 9 19:04:25.062987 kernel: printk: console [ttyS0] enabled
Feb 9 19:04:25.062994 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:04:25.063006 kernel: ACPI: Core revision 20210730
Feb 9 19:04:25.063013 kernel: Failed to register legacy timer interrupt
Feb 9 19:04:25.063023 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:04:25.063031 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:04:25.063039 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 9 19:04:25.063048 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:04:25.063057 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:04:25.063066 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:04:25.063073 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:04:25.063082 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:04:25.063092 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:04:25.063102 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:04:25.063109 kernel: RETBleed: Vulnerable
Feb 9 19:04:25.063116 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:04:25.063126 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:04:25.063134 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:04:25.063143 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:04:25.063150 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:04:25.063159 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:04:25.063167 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:04:25.063179 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:04:25.063187 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:04:25.063194 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:04:25.063204 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:04:25.063211 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:04:25.063221 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:04:25.063228 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:04:25.063236 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:04:25.063245 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:04:25.063253 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:04:25.063262 kernel: LSM: Security Framework initializing
Feb 9 19:04:25.063269 kernel: SELinux: Initializing.
Feb 9 19:04:25.063281 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:04:25.063289 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:04:25.063299 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:04:25.063307 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:04:25.063315 kernel: signal: max sigframe size: 3632
Feb 9 19:04:25.063324 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:04:25.063333 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:04:25.063342 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:04:25.063349 kernel: x86: Booting SMP configuration:
Feb 9 19:04:25.063358 kernel: .... node #0, CPUs: #1
Feb 9 19:04:25.063368 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:04:25.063379 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:04:25.063386 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:04:25.063394 kernel: smpboot: Max logical packages: 1
Feb 9 19:04:25.063403 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:04:25.063412 kernel: devtmpfs: initialized
Feb 9 19:04:25.063420 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:04:25.063427 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:04:25.063440 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:04:25.063447 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:04:25.063457 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:04:25.063465 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:04:25.063473 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:04:25.063482 kernel: audit: type=2000 audit(1707505464.023:1): state=initialized audit_enabled=0 res=1
Feb 9 19:04:25.063489 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:04:25.063496 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:04:25.063503 kernel: cpuidle: using governor menu
Feb 9 19:04:25.063512 kernel: ACPI: bus type PCI registered
Feb 9 19:04:25.063520 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:04:25.063527 kernel: dca service started, version 1.12.1
Feb 9 19:04:25.063534 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:04:25.063541 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:04:25.063548 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:04:25.063555 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:04:25.063563 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:04:25.063570 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:04:25.063579 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:04:25.063586 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:04:25.063593 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:04:25.063600 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:04:25.063607 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:04:25.063619 kernel: ACPI: Interpreter enabled
Feb 9 19:04:25.063627 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:04:25.063634 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:04:25.063641 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:04:25.063650 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:04:25.063657 kernel: iommu: Default domain type: Translated
Feb 9 19:04:25.063664 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:04:25.063672 kernel: vgaarb: loaded
Feb 9 19:04:25.063679 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:04:25.063686 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:04:25.063694 kernel: PTP clock support registered
Feb 9 19:04:25.063701 kernel: Registered efivars operations
Feb 9 19:04:25.063708 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:04:25.063715 kernel: PCI: System does not support PCI
Feb 9 19:04:25.063724 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:04:25.063731 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:04:25.063738 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:04:25.063745 kernel: pnp: PnP ACPI init
Feb 9 19:04:25.063752 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:04:25.063759 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:04:25.063767 kernel: NET: Registered PF_INET protocol family
Feb 9 19:04:25.063774 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:04:25.063783 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:04:25.063790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:04:25.063798 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:04:25.063805 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:04:25.063812 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:04:25.063819 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:04:25.063826 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:04:25.063833 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:04:25.063840 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:04:25.063849 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:04:25.063856 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:04:25.063864 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:04:25.063871 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:04:25.063878 kernel: Initialise system trusted keyrings
Feb 9 19:04:25.063885 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:04:25.063892 kernel: Key type asymmetric registered
Feb 9 19:04:25.063899 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:04:25.063906 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:04:25.063915 kernel: io scheduler mq-deadline registered
Feb 9 19:04:25.063922 kernel: io scheduler kyber registered
Feb 9 19:04:25.063929 kernel: io scheduler bfq registered
Feb 9 19:04:25.063936 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:04:25.063943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:04:25.063950 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:04:25.063957 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:04:25.063965 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:04:25.064090 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:04:25.064176 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:04:24 UTC (1707505464)
Feb 9 19:04:25.064246 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:04:25.064256 kernel: fail to initialize ptp_kvm
Feb 9 19:04:25.064263 kernel: intel_pstate: CPU model not supported
Feb 9 19:04:25.064270 kernel: efifb: probing for efifb
Feb 9 19:04:25.064277 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:04:25.064285 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:04:25.064292 kernel: efifb: scrolling: redraw
Feb 9 19:04:25.064301 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:04:25.064308 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:04:25.064315 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:04:25.064325 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:04:25.064333 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:04:25.064341 kernel: Segment Routing with IPv6
Feb 9 19:04:25.064348 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:04:25.064355 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:04:25.064362 kernel: Key type dns_resolver registered
Feb 9 19:04:25.064371 kernel: IPI shorthand broadcast: enabled
Feb 9 19:04:25.064382 kernel: sched_clock: Marking stable (778527200, 22708000)->(985640700, -184405500)
Feb 9 19:04:25.064389 kernel: registered taskstats version 1
Feb 9 19:04:25.064396 kernel: Loading compiled-in X.509 certificates
Feb 9 19:04:25.064403 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:04:25.064410 kernel: Key type .fscrypt registered
Feb 9 19:04:25.064417 kernel: Key type fscrypt-provisioning registered
Feb 9 19:04:25.064426 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:04:25.064437 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:04:25.064444 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:04:25.064451 kernel: ima: No architecture policies found
Feb 9 19:04:25.064458 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:04:25.064465 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:04:25.064473 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:04:25.064480 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:04:25.064488 kernel: Run /init as init process
Feb 9 19:04:25.064497 kernel: with arguments:
Feb 9 19:04:25.064504 kernel: /init
Feb 9 19:04:25.064514 kernel: with environment:
Feb 9 19:04:25.064524 kernel: HOME=/
Feb 9 19:04:25.064531 kernel: TERM=linux
Feb 9 19:04:25.064537 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:04:25.064546 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:04:25.064556 systemd[1]: Detected virtualization microsoft.
Feb 9 19:04:25.064567 systemd[1]: Detected architecture x86-64.
Feb 9 19:04:25.064577 systemd[1]: Running in initrd.
Feb 9 19:04:25.064584 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:04:25.064591 systemd[1]: Hostname set to .
Feb 9 19:04:25.064602 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:04:25.064610 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:04:25.064624 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:04:25.064633 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:04:25.064642 systemd[1]: Reached target paths.target.
Feb 9 19:04:25.064651 systemd[1]: Reached target slices.target.
Feb 9 19:04:25.064662 systemd[1]: Reached target swap.target.
Feb 9 19:04:25.064670 systemd[1]: Reached target timers.target.
Feb 9 19:04:25.064677 systemd[1]: Listening on iscsid.socket.
Feb 9 19:04:25.064687 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:04:25.064696 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:04:25.064707 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:04:25.064714 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:04:25.064724 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:04:25.064734 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:04:25.064742 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:04:25.064750 systemd[1]: Reached target sockets.target.
Feb 9 19:04:25.064757 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:04:25.064765 systemd[1]: Finished network-cleanup.service.
Feb 9 19:04:25.064776 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:04:25.064783 systemd[1]: Starting systemd-journald.service...
Feb 9 19:04:25.064791 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:04:25.064803 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:04:25.064811 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:04:25.064820 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:04:25.064829 kernel: audit: type=1130 audit(1707505465.059:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.064841 systemd-journald[183]: Journal started
Feb 9 19:04:25.064885 systemd-journald[183]: Runtime Journal (/run/log/journal/4eeb0245775d4a338d8405683b5ccbaa) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:04:25.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.056009 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:04:25.082657 systemd[1]: Started systemd-journald.service.
Feb 9 19:04:25.081842 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:04:25.086153 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:04:25.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.092848 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:04:25.112487 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:04:25.112511 kernel: audit: type=1130 audit(1707505465.081:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.121751 kernel: Bridge firewalling registered
Feb 9 19:04:25.119608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:04:25.131703 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:04:25.135366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:04:25.141995 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:04:25.145438 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:04:25.162045 dracut-cmdline[201]: dracut-dracut-053
Feb 9 19:04:25.180159 kernel: audit: type=1130 audit(1707505465.085:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.176531 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:04:25.244388 kernel: SCSI subsystem initialized
Feb 9 19:04:25.244418 kernel: audit: type=1130 audit(1707505465.090:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.244433 kernel: audit: type=1130 audit(1707505465.134:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.244445 kernel: audit: type=1130 audit(1707505465.144:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.244457 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:04:25.244469 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:04:25.244490 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:04:25.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.244584 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:25.176546 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:04:25.176594 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:04:25.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.180215 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:04:25.293112 kernel: audit: type=1130 audit(1707505465.264:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:25.185585 systemd[1]: Started systemd-resolved.service.
Feb 9 19:04:25.243689 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:04:25.293167 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:04:25.301093 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:04:25.319067 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:04:25.319104 kernel: audit: type=1130 audit(1707505465.300:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.319099 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:04:25.331215 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:04:25.351872 kernel: iscsi: registered transport (tcp) Feb 9 19:04:25.351920 kernel: audit: type=1130 audit(1707505465.335:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.377557 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:04:25.377610 kernel: QLogic iSCSI HBA Driver Feb 9 19:04:25.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.406722 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:04:25.412592 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:04:25.462639 kernel: raid6: avx512x4 gen() 18524 MB/s Feb 9 19:04:25.482630 kernel: raid6: avx512x4 xor() 7391 MB/s Feb 9 19:04:25.502626 kernel: raid6: avx512x2 gen() 18464 MB/s Feb 9 19:04:25.522632 kernel: raid6: avx512x2 xor() 29876 MB/s Feb 9 19:04:25.542625 kernel: raid6: avx512x1 gen() 18180 MB/s Feb 9 19:04:25.562635 kernel: raid6: avx512x1 xor() 26933 MB/s Feb 9 19:04:25.582630 kernel: raid6: avx2x4 gen() 18379 MB/s Feb 9 19:04:25.602626 kernel: raid6: avx2x4 xor() 6844 MB/s Feb 9 19:04:25.622626 kernel: raid6: avx2x2 gen() 18410 MB/s Feb 9 19:04:25.642632 kernel: raid6: avx2x2 xor() 22267 MB/s Feb 9 19:04:25.662625 kernel: raid6: avx2x1 gen() 13888 MB/s Feb 9 19:04:25.682626 kernel: raid6: avx2x1 xor() 19445 MB/s Feb 9 19:04:25.702629 kernel: raid6: sse2x4 gen() 11699 MB/s Feb 9 19:04:25.722627 kernel: raid6: sse2x4 xor() 6148 MB/s Feb 9 19:04:25.742627 kernel: raid6: sse2x2 gen() 12921 MB/s Feb 9 19:04:25.762627 kernel: raid6: sse2x2 xor() 7525 MB/s Feb 9 19:04:25.781625 kernel: raid6: sse2x1 gen() 11648 MB/s Feb 9 19:04:25.805014 kernel: raid6: sse2x1 xor() 5917 MB/s Feb 9 19:04:25.805036 kernel: raid6: using algorithm avx512x4 gen() 18524 MB/s Feb 9 19:04:25.805047 kernel: raid6: .... xor() 7391 MB/s, rmw enabled Feb 9 19:04:25.808316 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:04:25.827638 kernel: xor: automatically using best checksumming function avx Feb 9 19:04:25.924648 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:04:25.933140 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:04:25.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.936000 audit: BPF prog-id=7 op=LOAD Feb 9 19:04:25.936000 audit: BPF prog-id=8 op=LOAD Feb 9 19:04:25.937951 systemd[1]: Starting systemd-udevd.service... 
Feb 9 19:04:25.953604 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 9 19:04:25.960537 systemd[1]: Started systemd-udevd.service. Feb 9 19:04:25.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:25.965507 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:04:25.982225 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Feb 9 19:04:26.012886 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:04:26.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:26.016412 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:04:26.053420 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:04:26.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:26.098640 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:04:26.105640 kernel: hv_vmbus: Vmbus version:5.2 Feb 9 19:04:26.127637 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 19:04:26.127710 kernel: AES CTR mode by8 optimization enabled Feb 9 19:04:26.143633 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 19:04:26.154641 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 19:04:26.167664 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:04:26.167715 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 19:04:26.176631 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 19:04:26.184659 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 19:04:26.198670 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 19:04:26.198714 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 19:04:26.204635 kernel: scsi host1: storvsc_host_t Feb 9 19:04:26.208639 kernel: scsi host0: storvsc_host_t Feb 9 19:04:26.215638 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 19:04:26.221703 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 19:04:26.248631 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 19:04:26.248913 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:04:26.255633 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 19:04:26.255811 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 19:04:26.255934 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 19:04:26.261174 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:04:26.261323 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 19:04:26.266406 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 19:04:26.271632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:26.275634 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:04:26.387117 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:04:26.390735 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (439) Feb 9 19:04:26.390766 kernel: hv_netvsc 000d3ad9-84db-000d-3ad9-84db000d3ad9 eth0: VF slot 1 added Feb 9 19:04:26.403631 kernel: hv_vmbus: registering driver hv_pci Feb 9 19:04:26.412636 kernel: hv_pci cb06b5c4-ebcd-4165-b02a-c46496a69101: PCI VMBus probing: Using version 0x10004 Feb 9 19:04:26.423717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:04:26.435312 kernel: hv_pci cb06b5c4-ebcd-4165-b02a-c46496a69101: PCI host bridge to bus ebcd:00 Feb 9 19:04:26.435488 kernel: pci_bus ebcd:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 9 19:04:26.435633 kernel: pci_bus ebcd:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 19:04:26.445892 kernel: pci ebcd:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 9 19:04:26.454937 kernel: pci ebcd:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:04:26.465737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:04:26.475780 kernel: pci ebcd:00:02.0: enabling Extended Tags Feb 9 19:04:26.477669 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:04:26.497857 kernel: pci ebcd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ebcd:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 9 19:04:26.498104 kernel: pci_bus ebcd:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 19:04:26.498217 kernel: pci ebcd:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:04:26.505646 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:04:26.512601 systemd[1]: Starting disk-uuid.service... 
Feb 9 19:04:26.530641 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:26.543636 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:26.630642 kernel: mlx5_core ebcd:00:02.0: firmware version: 14.30.1224 Feb 9 19:04:26.809638 kernel: mlx5_core ebcd:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 19:04:26.949090 kernel: mlx5_core ebcd:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 9 19:04:26.949354 kernel: mlx5_core ebcd:00:02.0: mlx5e_tc_post_act_init:40:(pid 361): firmware level support is missing Feb 9 19:04:26.960932 kernel: hv_netvsc 000d3ad9-84db-000d-3ad9-84db000d3ad9 eth0: VF registering: eth1 Feb 9 19:04:26.961103 kernel: mlx5_core ebcd:00:02.0 eth1: joined to eth0 Feb 9 19:04:26.974638 kernel: mlx5_core ebcd:00:02.0 enP60365s1: renamed from eth1 Feb 9 19:04:27.543727 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:04:27.543799 disk-uuid[551]: The operation has completed successfully. Feb 9 19:04:27.609286 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:04:27.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.609392 systemd[1]: Finished disk-uuid.service. Feb 9 19:04:27.625055 systemd[1]: Starting verity-setup.service... Feb 9 19:04:27.646629 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:04:27.727969 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:04:27.735070 systemd[1]: Finished verity-setup.service. 
Feb 9 19:04:27.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.740131 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:04:27.816436 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:04:27.820397 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:04:27.820524 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:04:27.824844 systemd[1]: Starting ignition-setup.service... Feb 9 19:04:27.827904 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:04:27.845949 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:04:27.845988 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:04:27.846007 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:04:27.883978 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:04:27.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.904991 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:04:27.910000 audit: BPF prog-id=9 op=LOAD Feb 9 19:04:27.911749 systemd[1]: Starting systemd-networkd.service... Feb 9 19:04:27.923999 systemd[1]: Finished ignition-setup.service. Feb 9 19:04:27.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.928918 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 19:04:27.945289 systemd-networkd[806]: lo: Link UP Feb 9 19:04:27.945299 systemd-networkd[806]: lo: Gained carrier Feb 9 19:04:27.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.946222 systemd-networkd[806]: Enumeration completed Feb 9 19:04:27.946293 systemd[1]: Started systemd-networkd.service. Feb 9 19:04:27.949523 systemd[1]: Reached target network.target. Feb 9 19:04:27.951551 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:04:27.955689 systemd[1]: Starting iscsiuio.service... Feb 9 19:04:27.965455 systemd[1]: Started iscsiuio.service. Feb 9 19:04:27.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.971150 systemd[1]: Starting iscsid.service... Feb 9 19:04:27.979603 systemd[1]: Started iscsid.service. Feb 9 19:04:27.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.983953 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:04:27.987274 iscsid[813]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:04:27.987274 iscsid[813]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:04:27.987274 iscsid[813]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Feb 9 19:04:27.987274 iscsid[813]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:04:27.987274 iscsid[813]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:04:27.987274 iscsid[813]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:04:27.987274 iscsid[813]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:04:28.019868 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:04:28.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:28.024221 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:04:28.028559 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:04:28.038696 kernel: mlx5_core ebcd:00:02.0 enP60365s1: Link up Feb 9 19:04:28.033126 systemd[1]: Reached target remote-fs.target. Feb 9 19:04:28.043530 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:04:28.055847 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:04:28.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:28.116741 kernel: hv_netvsc 000d3ad9-84db-000d-3ad9-84db000d3ad9 eth0: Data path switched to VF: enP60365s1 Feb 9 19:04:28.117022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:04:28.117365 systemd-networkd[806]: enP60365s1: Link UP Feb 9 19:04:28.117513 systemd-networkd[806]: eth0: Link UP Feb 9 19:04:28.117879 systemd-networkd[806]: eth0: Gained carrier Feb 9 19:04:28.126791 systemd-networkd[806]: enP60365s1: Gained carrier Feb 9 19:04:28.155706 systemd-networkd[806]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:04:28.692119 ignition[808]: Ignition 2.14.0 Feb 9 19:04:28.692131 ignition[808]: Stage: fetch-offline Feb 9 19:04:28.692202 ignition[808]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:28.692253 ignition[808]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:28.722024 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:28.725067 ignition[808]: parsed url from cmdline: "" Feb 9 19:04:28.725075 ignition[808]: no config URL provided Feb 9 19:04:28.725083 ignition[808]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:04:28.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:28.726866 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:04:28.725099 ignition[808]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:04:28.730160 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:04:28.725109 ignition[808]: failed to fetch config: resource requires networking Feb 9 19:04:28.725348 ignition[808]: Ignition finished successfully Feb 9 19:04:28.741025 ignition[833]: Ignition 2.14.0 Feb 9 19:04:28.741031 ignition[833]: Stage: fetch Feb 9 19:04:28.741136 ignition[833]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:28.741160 ignition[833]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:28.744451 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:28.750758 ignition[833]: parsed url from cmdline: "" Feb 9 19:04:28.750765 ignition[833]: no config URL provided Feb 9 19:04:28.750774 ignition[833]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:04:28.750798 ignition[833]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:04:28.750835 ignition[833]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 19:04:28.849348 ignition[833]: GET result: OK Feb 9 19:04:28.849511 ignition[833]: config has been read from IMDS userdata Feb 9 19:04:28.849554 ignition[833]: parsing config with SHA512: f9f8ea3a28d3df1e5d054f1fd418e910a265c56434d47a872f7ab17115b16f704b90182568b220bc8b7ec443e4f74fa89430ec26d41cbff3a715959c2f7eb739 Feb 9 19:04:28.868898 unknown[833]: fetched base config from "system" Feb 9 19:04:28.868911 unknown[833]: fetched base config from "system" Feb 9 19:04:28.868925 unknown[833]: fetched user config from "azure" Feb 9 19:04:28.875544 ignition[833]: fetch: fetch complete Feb 9 19:04:28.875553 ignition[833]: fetch: fetch passed Feb 9 19:04:28.875608 ignition[833]: Ignition finished successfully Feb 9 19:04:28.880632 systemd[1]: Finished ignition-fetch.service. 
Feb 9 19:04:28.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:28.884533 systemd[1]: Starting ignition-kargs.service... Feb 9 19:04:28.896422 ignition[839]: Ignition 2.14.0 Feb 9 19:04:28.896433 ignition[839]: Stage: kargs Feb 9 19:04:28.896572 ignition[839]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:28.896607 ignition[839]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:28.903589 systemd[1]: Finished ignition-kargs.service. Feb 9 19:04:28.900522 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:28.902403 ignition[839]: kargs: kargs passed Feb 9 19:04:28.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:28.902456 ignition[839]: Ignition finished successfully Feb 9 19:04:28.914846 systemd[1]: Starting ignition-disks.service... Feb 9 19:04:28.923012 ignition[845]: Ignition 2.14.0 Feb 9 19:04:28.923021 ignition[845]: Stage: disks Feb 9 19:04:28.923139 ignition[845]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:28.923170 ignition[845]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:28.931687 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:28.932971 ignition[845]: disks: disks passed Feb 9 19:04:28.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:04:28.933669 systemd[1]: Finished ignition-disks.service. Feb 9 19:04:28.933007 ignition[845]: Ignition finished successfully Feb 9 19:04:28.936690 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:04:28.940515 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:04:28.942809 systemd[1]: Reached target local-fs.target. Feb 9 19:04:28.944949 systemd[1]: Reached target sysinit.target. Feb 9 19:04:28.949062 systemd[1]: Reached target basic.target. Feb 9 19:04:28.951756 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:04:28.975207 systemd-fsck[853]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 9 19:04:28.979325 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:04:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:28.982367 systemd[1]: Mounting sysroot.mount... Feb 9 19:04:29.001639 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:04:29.001604 systemd[1]: Mounted sysroot.mount. Feb 9 19:04:29.005382 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:04:29.018329 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:04:29.023594 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:04:29.028146 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:04:29.028181 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:04:29.036215 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:04:29.048872 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:04:29.053966 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:04:29.070426 initrd-setup-root[868]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:04:29.077439 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (863) Feb 9 19:04:29.077475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:04:29.081579 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:04:29.081624 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:04:29.086966 initrd-setup-root[885]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:04:29.094872 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:04:29.097307 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:04:29.103579 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:04:29.219665 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:04:29.226959 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:04:29.226986 kernel: audit: type=1130 audit(1707505469.222:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.228465 systemd[1]: Starting ignition-mount.service... Feb 9 19:04:29.243456 systemd[1]: Starting sysroot-boot.service... Feb 9 19:04:29.248572 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:04:29.251223 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:04:29.266361 systemd[1]: Finished sysroot-boot.service. 
Feb 9 19:04:29.284065 kernel: audit: type=1130 audit(1707505469.268:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.286011 ignition[932]: INFO : Ignition 2.14.0 Feb 9 19:04:29.286011 ignition[932]: INFO : Stage: mount Feb 9 19:04:29.290338 ignition[932]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:04:29.290338 ignition[932]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:04:29.301931 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:04:29.301931 ignition[932]: INFO : mount: mount passed Feb 9 19:04:29.301931 ignition[932]: INFO : Ignition finished successfully Feb 9 19:04:29.321568 kernel: audit: type=1130 audit(1707505469.301:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.299277 systemd[1]: Finished ignition-mount.service. 
Feb 9 19:04:29.466843 coreos-metadata[862]: Feb 09 19:04:29.466 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:04:29.472775 coreos-metadata[862]: Feb 09 19:04:29.472 INFO Fetch successful Feb 9 19:04:29.507166 coreos-metadata[862]: Feb 09 19:04:29.507 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:04:29.523974 coreos-metadata[862]: Feb 09 19:04:29.523 INFO Fetch successful Feb 9 19:04:29.531703 coreos-metadata[862]: Feb 09 19:04:29.531 INFO wrote hostname ci-3510.3.2-a-97ddcae7e2 to /sysroot/etc/hostname Feb 9 19:04:29.533369 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 19:04:29.539803 systemd[1]: Starting ignition-files.service... Feb 9 19:04:29.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.555071 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:04:29.561792 kernel: audit: type=1130 audit(1707505469.538:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:29.568635 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (941) Feb 9 19:04:29.577473 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:04:29.577499 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:04:29.577510 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:04:29.585800 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:04:29.597547 ignition[960]: INFO : Ignition 2.14.0
Feb 9 19:04:29.597547 ignition[960]: INFO : Stage: files
Feb 9 19:04:29.601288 ignition[960]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:29.601288 ignition[960]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:29.617440 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:29.625463 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:04:29.628742 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:04:29.628742 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:04:29.645350 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:04:29.649090 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:04:29.656237 unknown[960]: wrote ssh authorized keys file for user: core
Feb 9 19:04:29.658871 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:04:29.662611 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:04:29.667748 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:04:29.847886 systemd-networkd[806]: eth0: Gained IPv6LL
Feb 9 19:04:30.303843 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:04:30.480519 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:04:30.488890 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:04:30.488890 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:04:30.488890 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:04:30.989237 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:04:31.080329 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:04:31.088378 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:04:31.088378 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:04:31.088378 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:04:31.324946 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:04:31.633706 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:04:31.642811 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:04:31.642811 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:04:31.642811 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:04:31.764975 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:04:32.548516 ignition[960]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:04:32.562703 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (962)
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596598462"
Feb 9 19:04:32.562736 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596598462": device or resource busy
Feb 9 19:04:32.562736 ignition[960]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem596598462", trying btrfs: device or resource busy
Feb 9 19:04:32.562736 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596598462"
Feb 9 19:04:32.648612 kernel: audit: type=1130 audit(1707505472.603:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596598462"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem596598462"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem596598462"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem822622496"
Feb 9 19:04:32.648754 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem822622496": device or resource busy
Feb 9 19:04:32.648754 ignition[960]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem822622496", trying btrfs: device or resource busy
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem822622496"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem822622496"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem822622496"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem822622496"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: op(12): [started] processing unit "nvidia.service"
Feb 9 19:04:32.648754 ignition[960]: INFO : files: op(12): [finished] processing unit "nvidia.service"
Feb 9 19:04:32.802683 kernel: audit: type=1130 audit(1707505472.668:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.802728 kernel: audit: type=1130 audit(1707505472.699:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.802749 kernel: audit: type=1131 audit(1707505472.699:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.802768 kernel: audit: type=1130 audit(1707505472.738:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.802792 kernel: audit: type=1131 audit(1707505472.738:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.574692 systemd[1]: mnt-oem596598462.mount: Deactivated successfully.
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(13): [started] processing unit "waagent.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(13): [finished] processing unit "waagent.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:04:32.805787 ignition[960]: INFO : files: files passed
Feb 9 19:04:32.594658 systemd[1]: mnt-oem822622496.mount: Deactivated successfully.
Feb 9 19:04:32.889235 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:04:32.893388 ignition[960]: INFO : Ignition finished successfully
Feb 9 19:04:32.600600 systemd[1]: Finished ignition-files.service.
Feb 9 19:04:32.605482 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:04:32.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.630351 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:04:32.663779 systemd[1]: Starting ignition-quench.service...
Feb 9 19:04:32.666210 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:04:32.669426 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:04:32.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.672764 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:04:32.689527 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:04:32.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.689669 systemd[1]: Finished ignition-quench.service.
Feb 9 19:04:32.706419 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:04:32.706514 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:04:32.739518 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:04:32.769725 systemd[1]: Reached target initrd.target.
Feb 9 19:04:32.777522 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:04:32.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.778569 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:04:32.898252 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:04:32.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.901348 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:04:32.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.917088 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:04:32.917178 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:04:32.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.920484 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:04:32.924037 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:04:33.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.013247 iscsid[813]: iscsid shutting down.
Feb 9 19:04:32.926251 systemd[1]: Stopped target timers.target.
Feb 9 19:04:33.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.024701 ignition[998]: INFO : Ignition 2.14.0
Feb 9 19:04:33.024701 ignition[998]: INFO : Stage: umount
Feb 9 19:04:33.024701 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:33.024701 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:33.024701 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:33.024701 ignition[998]: INFO : umount: umount passed
Feb 9 19:04:33.024701 ignition[998]: INFO : Ignition finished successfully
Feb 9 19:04:33.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.928167 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:04:33.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.928227 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:04:33.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:32.933123 systemd[1]: Stopped target initrd.target.
Feb 9 19:04:32.935055 systemd[1]: Stopped target basic.target.
Feb 9 19:04:32.938974 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:04:32.941262 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:04:32.946235 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:04:32.948598 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:04:32.952778 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:04:32.955018 systemd[1]: Stopped target sysinit.target.
Feb 9 19:04:32.959069 systemd[1]: Stopped target local-fs.target.
Feb 9 19:04:32.961165 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:04:32.965173 systemd[1]: Stopped target swap.target.
Feb 9 19:04:32.966979 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:04:32.967051 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:04:32.973262 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:04:32.977836 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:04:32.977902 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:04:32.982107 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:04:32.982155 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:04:32.986668 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:04:32.986716 systemd[1]: Stopped ignition-files.service.
Feb 9 19:04:32.991656 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:04:32.991700 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:04:32.996735 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:04:33.005655 systemd[1]: Stopping iscsid.service...
Feb 9 19:04:33.007504 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:04:33.007568 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:04:33.013532 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:04:33.015563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:04:33.015640 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:04:33.022104 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:04:33.022156 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:04:33.027257 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:04:33.027368 systemd[1]: Stopped iscsid.service.
Feb 9 19:04:33.029878 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:04:33.029953 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:04:33.035388 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:04:33.035695 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:04:33.035731 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:04:33.050844 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:04:33.053075 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:04:33.057776 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:04:33.057828 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:04:33.062159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:04:33.062210 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:04:33.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.148589 systemd[1]: Stopped target paths.target.
Feb 9 19:04:33.150948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:04:33.152644 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:04:33.155536 systemd[1]: Stopped target slices.target.
Feb 9 19:04:33.157597 systemd[1]: Stopped target sockets.target.
Feb 9 19:04:33.159876 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:04:33.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.159929 systemd[1]: Closed iscsid.socket.
Feb 9 19:04:33.164630 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:04:33.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.164697 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:04:33.172246 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:04:33.179136 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:04:33.179248 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:04:33.183519 systemd[1]: Stopped target network.target.
Feb 9 19:04:33.197800 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:04:33.197859 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:04:33.203913 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:04:33.208352 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:04:33.213077 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:04:33.214689 systemd-networkd[806]: eth0: DHCPv6 lease lost
Feb 9 19:04:33.215777 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:04:33.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.223379 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:04:33.225877 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:04:33.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.230000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:04:33.231161 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:04:33.233000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:04:33.231208 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:04:33.238919 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:04:33.243198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:04:33.243272 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:04:33.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.250679 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:04:33.250737 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:04:33.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.257064 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:04:33.257123 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:04:33.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.264668 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:04:33.277123 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:04:33.277295 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:04:33.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.284380 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:04:33.284435 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:04:33.291754 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:04:33.291800 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:04:33.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.296692 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:04:33.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.296745 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:04:33.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.300833 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:04:33.300887 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:04:33.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.305444 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:04:33.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.305492 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:04:33.310650 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:04:33.319965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:04:33.320019 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:04:33.347129 kernel: hv_netvsc 000d3ad9-84db-000d-3ad9-84db000d3ad9 eth0: Data path switched from VF: enP60365s1
Feb 9 19:04:33.325263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:04:33.325358 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:04:33.366158 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:04:33.368538 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:04:33.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.568532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:04:33.882290 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:04:33.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.882426 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:04:33.890483 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:04:33.895386 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:04:33.895464 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:04:33.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.902827 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:04:33.914089 systemd[1]: Switching root.
Feb 9 19:04:33.938358 systemd-journald[183]: Journal stopped
Feb 9 19:04:38.487176 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:04:38.487208 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:04:38.487220 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:04:38.487231 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:04:38.487242 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:04:38.487250 kernel: SELinux: policy capability open_perms=1
Feb 9 19:04:38.487263 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:04:38.487272 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:04:38.487283 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:04:38.487292 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:04:38.487301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:04:38.487310 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:04:38.487321 kernel: kauditd_printk_skb: 37 callbacks suppressed
Feb 9 19:04:38.487330 kernel: audit: type=1403 audit(1707505474.524:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:04:38.487345 systemd[1]: Successfully loaded SELinux policy in 119.141ms.
Feb 9 19:04:38.487357 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.236ms.
Feb 9 19:04:38.487369 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:04:38.487380 systemd[1]: Detected virtualization microsoft.
Feb 9 19:04:38.487392 systemd[1]: Detected architecture x86-64.
Feb 9 19:04:38.487408 systemd[1]: Detected first boot.
Feb 9 19:04:38.487418 systemd[1]: Hostname set to .
Feb 9 19:04:38.487429 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:04:38.487438 kernel: audit: type=1400 audit(1707505474.742:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:04:38.487451 kernel: audit: type=1400 audit(1707505474.757:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:04:38.487462 kernel: audit: type=1400 audit(1707505474.757:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:04:38.487474 kernel: audit: type=1334 audit(1707505474.770:85): prog-id=10 op=LOAD
Feb 9 19:04:38.487483 kernel: audit: type=1334 audit(1707505474.770:86): prog-id=10 op=UNLOAD
Feb 9 19:04:38.487494 kernel: audit: type=1334 audit(1707505474.783:87): prog-id=11 op=LOAD
Feb 9 19:04:38.487505 kernel: audit: type=1334 audit(1707505474.783:88): prog-id=11 op=UNLOAD
Feb 9 19:04:38.487514 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:04:38.487525 kernel: audit: type=1400 audit(1707505475.126:89): avc: denied { associate } for pid=1031 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:04:38.487536 kernel: audit: type=1300 audit(1707505475.126:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:38.487550 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:04:38.487559 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:04:38.487572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:04:38.487582 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:04:38.487591 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:04:38.487602 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:04:38.487622 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:04:38.487635 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:04:38.487650 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:04:38.487660 systemd[1]: Created slice system-getty.slice.
Feb 9 19:04:38.487674 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:04:38.487685 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:04:38.487696 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:04:38.487707 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:04:38.487719 systemd[1]: Created slice user.slice.
Feb 9 19:04:38.487733 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:04:38.487743 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:04:38.487755 systemd[1]: Set up automount boot.automount.
Feb 9 19:04:38.487767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:04:38.487777 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:04:38.487788 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:04:38.487799 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:04:38.487812 systemd[1]: Reached target integritysetup.target.
Feb 9 19:04:38.487824 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:04:38.487834 systemd[1]: Reached target remote-fs.target.
Feb 9 19:04:38.487845 systemd[1]: Reached target slices.target.
Feb 9 19:04:38.487858 systemd[1]: Reached target swap.target.
Feb 9 19:04:38.487868 systemd[1]: Reached target torcx.target.
Feb 9 19:04:38.487880 systemd[1]: Reached target veritysetup.target.
Feb 9 19:04:38.487890 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:04:38.487902 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:04:38.487913 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:04:38.487928 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:04:38.487941 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:04:38.487951 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:04:38.487963 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:04:38.487973 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:04:38.487986 systemd[1]: Mounting media.mount...
Feb 9 19:04:38.487999 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:04:38.488008 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:04:38.489031 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:04:38.489046 systemd[1]: Mounting tmp.mount...
Feb 9 19:04:38.489056 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:04:38.489067 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:04:38.489129 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:04:38.489146 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:04:38.489166 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:04:38.489182 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:04:38.489192 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:04:38.489207 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:04:38.489232 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:04:38.489247 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:04:38.489258 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:04:38.489268 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:04:38.489280 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:04:38.489297 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:04:38.489315 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:04:38.489327 kernel: fuse: init (API version 7.34)
Feb 9 19:04:38.489337 systemd[1]: Starting systemd-journald.service...
Feb 9 19:04:38.489347 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:04:38.489357 kernel: loop: module loaded
Feb 9 19:04:38.489371 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:04:38.489391 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:04:38.489407 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:04:38.489417 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:04:38.489426 systemd[1]: Stopped verity-setup.service.
Feb 9 19:04:38.489436 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:04:38.489450 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:04:38.489471 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:04:38.489486 systemd[1]: Mounted media.mount.
Feb 9 19:04:38.489496 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:04:38.489519 systemd-journald[1132]: Journal started
Feb 9 19:04:38.489585 systemd-journald[1132]: Runtime Journal (/run/log/journal/78299d1dd277442ba1141f0e93a25057) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:04:34.524000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:04:34.742000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:04:34.757000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:04:34.757000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:04:34.770000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:04:34.770000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:04:34.783000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:04:34.783000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:04:35.126000 audit[1031]: AVC avc: denied { associate } for pid=1031 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:04:35.126000 audit[1031]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:35.126000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:04:35.133000 audit[1031]: AVC avc: denied { associate } for pid=1031 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:04:35.133000 audit[1031]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:35.133000 audit: CWD cwd="/"
Feb 9 19:04:35.133000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:35.133000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:35.133000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:04:38.032000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:04:38.032000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:04:38.032000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:04:38.032000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:04:38.032000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:04:38.032000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:04:38.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.044000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:04:38.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.394000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:04:38.394000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:04:38.394000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:04:38.394000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:04:38.394000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:04:38.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.483000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:04:38.483000 audit[1132]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd9f02e9b0 a2=4000 a3=7ffd9f02ea4c items=0 ppid=1 pid=1132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:38.483000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:04:35.121911 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:04:38.031385 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:04:35.122296 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:04:38.034192 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:04:35.122313 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:04:35.122348 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:04:35.122358 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:04:35.122395 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:04:35.122407 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:04:35.122588 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:04:35.122655 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:04:35.122670 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:04:35.123047 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:04:35.123082 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:04:35.123101 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:04:35.123115 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:04:35.123133 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:04:35.123146 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:04:37.544565 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:04:38.499439 systemd[1]: Started systemd-journald.service.
Feb 9 19:04:38.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.544833 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:04:37.544968 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:04:37.545626 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:04:37.545714 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:04:37.545778 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-02-09T19:04:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:04:38.500156 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:04:38.502847 systemd[1]: Mounted tmp.mount.
Feb 9 19:04:38.505281 systemd[1]: Finished flatcar-tmpfiles.service.
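The `PROCTITLE proctitle=2F7573...` records above are hex-encoded because the audit subsystem logs the full process title, whose argv elements are separated by NUL bytes. A minimal sketch (plain Python, no external tools) recovers the torcx-generator command line from the value logged above; note the kernel truncates long titles, which is why the last argument ends mid-path:

```python
# Hex string copied from the audit PROCTITLE record for pid 1031 above.
hexstr = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E65"
    "7261746F72732F746F7263782D67656E657261746F72002F72756E2F"
    "73797374656D642F67656E657261746F72002F72756E2F7379737465"
    "6D642F67656E657261746F722E6561726C79002F72756E2F73797374"
    "656D642F67656E657261746F722E6C61"
)
# Decode hex, then split on NUL to recover the argv list.
argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
print(argv[0])  # /usr/lib/systemd/system-generators/torcx-generator
```

The remaining arguments are the three generator output directories systemd passes to every unit generator (`/run/systemd/generator`, `.early`, and the truncated `.late`).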
Feb 9 19:04:38.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.508148 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:04:38.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.511102 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:04:38.511324 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:04:38.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.514124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:04:38.514312 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:04:38.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.517352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:04:38.517551 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:04:38.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.520423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:04:38.520643 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:04:38.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.523450 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:04:38.523660 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:04:38.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.526261 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:04:38.526448 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:04:38.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.529435 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:04:38.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.532311 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:04:38.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.535228 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:04:38.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.539253 systemd[1]: Reached target network-pre.target.
Feb 9 19:04:38.543983 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:04:38.547947 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:04:38.550940 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:04:38.555247 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:04:38.558910 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:04:38.561792 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:04:38.563296 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:04:38.565686 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:04:38.567147 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:04:38.571973 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:04:38.594337 systemd-journald[1132]: Time spent on flushing to /var/log/journal/78299d1dd277442ba1141f0e93a25057 is 32.231ms for 1147 entries.
Feb 9 19:04:38.594337 systemd-journald[1132]: System Journal (/var/log/journal/78299d1dd277442ba1141f0e93a25057) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:04:38.654242 systemd-journald[1132]: Received client request to flush runtime journal.
Feb 9 19:04:38.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.583246 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:04:38.586344 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:04:38.655610 udevadm[1155]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 19:04:38.589346 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:04:38.592145 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:04:38.600281 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:04:38.604432 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:04:38.610117 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:04:38.655455 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:04:38.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:38.739793 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:04:38.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:39.150022 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:04:39.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:39.152000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:04:39.152000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:04:39.152000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:04:39.152000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:04:39.153730 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:04:39.172591 systemd-udevd[1157]: Using default interface naming scheme 'v252'.
Feb 9 19:04:39.230344 systemd[1]: Started systemd-udevd.service.
Feb 9 19:04:39.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:39.233000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:04:39.237507 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:04:39.264231 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:04:39.262000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:04:39.262000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:04:39.262000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:04:39.297399 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:04:39.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:39.318845 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:04:39.366639 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:04:39.399916 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:04:39.400042 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:04:39.413648 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:04:39.382000 audit[1166]: AVC avc: denied { confidentiality } for pid=1166 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:04:39.440598 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:04:39.440762 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:04:39.448595 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:04:39.448699 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:04:39.449640 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:04:39.490140 systemd-networkd[1163]: lo: Link UP
Feb 9 19:04:39.490487 systemd-networkd[1163]: lo: Gained carrier
Feb 9 19:04:39.491252 systemd-networkd[1163]: Enumeration completed
Feb 9 19:04:39.491531 systemd[1]: Started systemd-networkd.service.
Feb 9 19:04:39.495675 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:04:39.496861 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:04:39.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:39.514647 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 19:04:39.382000 audit[1166]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564f7789cce0 a1=f884 a2=7f7495f01bc5 a3=5 items=12 ppid=1157 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:39.382000 audit: CWD cwd="/" Feb 9 19:04:39.382000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=1 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=2 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=3 name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=4 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
19:04:39.382000 audit: PATH item=5 name=(null) inode=15597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=6 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=7 name=(null) inode=15598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=8 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=9 name=(null) inode=15599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=10 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PATH item=11 name=(null) inode=15600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:04:39.382000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:04:39.530001 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 19:04:39.530084 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 19:04:39.530109 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 19:04:40.085483 kernel: mlx5_core ebcd:00:02.0 enP60365s1: Link up Feb 9 19:04:40.113488 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1169) Feb 9 19:04:40.122768 
kernel: hv_netvsc 000d3ad9-84db-000d-3ad9-84db000d3ad9 eth0: Data path switched to VF: enP60365s1 Feb 9 19:04:40.128852 systemd-networkd[1163]: enP60365s1: Link UP Feb 9 19:04:40.129608 systemd-networkd[1163]: eth0: Link UP Feb 9 19:04:40.129730 systemd-networkd[1163]: eth0: Gained carrier Feb 9 19:04:40.134293 systemd-networkd[1163]: enP60365s1: Gained carrier Feb 9 19:04:40.162616 systemd-networkd[1163]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:04:40.203510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:04:40.255469 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 9 19:04:40.278922 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:04:40.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:40.282897 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:04:40.285356 kernel: kauditd_printk_skb: 81 callbacks suppressed Feb 9 19:04:40.285406 kernel: audit: type=1130 audit(1707505480.281:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:40.373560 lvm[1235]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:04:40.399548 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:04:40.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:40.402097 systemd[1]: Reached target cryptsetup.target. 
Feb 9 19:04:40.415226 kernel: audit: type=1130 audit(1707505480.399:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:40.416546 systemd[1]: Starting lvm2-activation.service... Feb 9 19:04:40.422694 lvm[1236]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:04:40.444526 systemd[1]: Finished lvm2-activation.service. Feb 9 19:04:40.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:40.447209 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:04:40.458451 kernel: audit: type=1130 audit(1707505480.446:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:40.460368 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:04:40.460405 systemd[1]: Reached target local-fs.target. Feb 9 19:04:40.462628 systemd[1]: Reached target machines.target. Feb 9 19:04:40.466146 systemd[1]: Starting ldconfig.service... Feb 9 19:04:40.468348 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:04:40.468462 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:04:40.469701 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:04:40.473315 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
Feb 9 19:04:40.477416 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:04:40.480075 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:04:40.480182 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:04:40.481540 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:04:40.494807 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1238 (bootctl) Feb 9 19:04:40.496119 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:04:41.447751 systemd-networkd[1163]: eth0: Gained IPv6LL Feb 9 19:04:41.510744 kernel: audit: type=1130 audit(1707505481.453:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.453492 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:04:41.517961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:04:41.531840 kernel: audit: type=1130 audit(1707505481.517:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:41.541225 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:04:41.565058 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:04:41.582604 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:04:41.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.650523 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:04:41.651249 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:04:41.667669 kernel: audit: type=1130 audit(1707505481.653:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.728580 systemd-fsck[1247]: fsck.fat 4.2 (2021-01-31) Feb 9 19:04:41.728580 systemd-fsck[1247]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:04:41.730777 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:04:41.753470 kernel: audit: type=1130 audit(1707505481.733:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.735599 systemd[1]: Mounting boot.mount... 
Feb 9 19:04:41.761937 systemd[1]: Mounted boot.mount. Feb 9 19:04:41.776799 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:04:41.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.792463 kernel: audit: type=1130 audit(1707505481.778:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.878040 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:04:41.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.882034 systemd[1]: Starting audit-rules.service... Feb 9 19:04:41.896449 kernel: audit: type=1130 audit(1707505481.880:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.898558 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:04:41.902610 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:04:41.905000 audit: BPF prog-id=24 op=LOAD Feb 9 19:04:41.907044 systemd[1]: Starting systemd-resolved.service... Feb 9 19:04:41.912958 kernel: audit: type=1334 audit(1707505481.905:157): prog-id=24 op=LOAD Feb 9 19:04:41.913000 audit: BPF prog-id=25 op=LOAD Feb 9 19:04:41.914998 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:04:41.918400 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:04:41.926217 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 19:04:41.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.928619 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:04:41.938000 audit[1264]: SYSTEM_BOOT pid=1264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:04:41.942409 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:04:41.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:42.020286 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:04:42.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:42.022919 systemd[1]: Reached target time-set.target. Feb 9 19:04:42.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:42.036609 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:04:42.044609 systemd-resolved[1262]: Positive Trust Anchors: Feb 9 19:04:42.044626 systemd-resolved[1262]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:04:42.044676 systemd-resolved[1262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:04:42.076000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:04:42.076000 audit[1274]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd53f4f980 a2=420 a3=0 items=0 ppid=1253 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:04:42.076000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:04:42.077398 augenrules[1274]: No rules Feb 9 19:04:42.078422 systemd[1]: Finished audit-rules.service. Feb 9 19:04:42.082287 systemd-resolved[1262]: Using system hostname 'ci-3510.3.2-a-97ddcae7e2'. Feb 9 19:04:42.083871 systemd[1]: Started systemd-resolved.service. Feb 9 19:04:42.086344 systemd[1]: Reached target network.target. Feb 9 19:04:42.088518 systemd[1]: Reached target network-online.target. Feb 9 19:04:42.090819 systemd[1]: Reached target nss-lookup.target. Feb 9 19:04:42.092493 systemd-timesyncd[1263]: Contacted time server 77.68.25.145:123 (0.flatcar.pool.ntp.org). Feb 9 19:04:42.093092 systemd-timesyncd[1263]: Initial clock synchronization to Fri 2024-02-09 19:04:42.095608 UTC. 
Feb 9 19:04:43.220506 ldconfig[1237]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:04:43.233600 systemd[1]: Finished ldconfig.service. Feb 9 19:04:43.237468 systemd[1]: Starting systemd-update-done.service... Feb 9 19:04:43.246044 systemd[1]: Finished systemd-update-done.service. Feb 9 19:04:43.248401 systemd[1]: Reached target sysinit.target. Feb 9 19:04:43.250505 systemd[1]: Started motdgen.path. Feb 9 19:04:43.252267 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:04:43.255233 systemd[1]: Started logrotate.timer. Feb 9 19:04:43.257112 systemd[1]: Started mdadm.timer. Feb 9 19:04:43.258916 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:04:43.261088 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:04:43.261121 systemd[1]: Reached target paths.target. Feb 9 19:04:43.263138 systemd[1]: Reached target timers.target. Feb 9 19:04:43.265363 systemd[1]: Listening on dbus.socket. Feb 9 19:04:43.268234 systemd[1]: Starting docker.socket... Feb 9 19:04:43.272772 systemd[1]: Listening on sshd.socket. Feb 9 19:04:43.275288 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:04:43.275751 systemd[1]: Listening on docker.socket. Feb 9 19:04:43.278096 systemd[1]: Reached target sockets.target. Feb 9 19:04:43.280211 systemd[1]: Reached target basic.target. Feb 9 19:04:43.282147 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:04:43.282180 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:04:43.283176 systemd[1]: Starting containerd.service... Feb 9 19:04:43.286324 systemd[1]: Starting dbus.service... 
Feb 9 19:04:43.289099 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:04:43.292260 systemd[1]: Starting extend-filesystems.service... Feb 9 19:04:43.294618 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:04:43.296097 systemd[1]: Starting motdgen.service... Feb 9 19:04:43.301648 systemd[1]: Started nvidia.service. Feb 9 19:04:43.305171 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:04:43.308513 systemd[1]: Starting prepare-critools.service... Feb 9 19:04:43.311707 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:04:43.314836 systemd[1]: Starting sshd-keygen.service... Feb 9 19:04:43.319566 systemd[1]: Starting systemd-logind.service... Feb 9 19:04:43.322246 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:04:43.322330 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:04:43.322890 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:04:43.323755 systemd[1]: Starting update-engine.service... Feb 9 19:04:43.326766 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:04:43.333367 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:04:43.333776 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:04:43.347460 jq[1298]: true Feb 9 19:04:43.350193 jq[1284]: false Feb 9 19:04:43.350827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:04:43.351030 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 9 19:04:43.376604 extend-filesystems[1285]: Found sda Feb 9 19:04:43.384978 tar[1300]: ./ Feb 9 19:04:43.384978 tar[1300]: ./macvlan Feb 9 19:04:43.385276 jq[1305]: true Feb 9 19:04:43.385481 tar[1301]: crictl Feb 9 19:04:43.388609 extend-filesystems[1285]: Found sda1 Feb 9 19:04:43.390965 extend-filesystems[1285]: Found sda2 Feb 9 19:04:43.394961 extend-filesystems[1285]: Found sda3 Feb 9 19:04:43.397055 extend-filesystems[1285]: Found usr Feb 9 19:04:43.399171 extend-filesystems[1285]: Found sda4 Feb 9 19:04:43.401425 extend-filesystems[1285]: Found sda6 Feb 9 19:04:43.403627 extend-filesystems[1285]: Found sda7 Feb 9 19:04:43.405697 extend-filesystems[1285]: Found sda9 Feb 9 19:04:43.407742 extend-filesystems[1285]: Checking size of /dev/sda9 Feb 9 19:04:43.437145 dbus-daemon[1283]: [system] SELinux support is enabled Feb 9 19:04:43.437339 systemd[1]: Started dbus.service. Feb 9 19:04:43.441851 extend-filesystems[1285]: Old size kept for /dev/sda9 Feb 9 19:04:43.441851 extend-filesystems[1285]: Found sr0 Feb 9 19:04:43.442579 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:04:43.442751 systemd[1]: Finished extend-filesystems.service. Feb 9 19:04:43.451073 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:04:43.451274 systemd[1]: Finished motdgen.service. Feb 9 19:04:43.455807 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:04:43.455851 systemd[1]: Reached target system-config.target. Feb 9 19:04:43.458128 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:04:43.458151 systemd[1]: Reached target user-config.target. 
Feb 9 19:04:43.503424 env[1308]: time="2024-02-09T19:04:43.503313115Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:04:43.512009 tar[1300]: ./static Feb 9 19:04:43.549057 bash[1342]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:04:43.549951 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:04:43.594951 systemd-logind[1295]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:04:43.595194 systemd-logind[1295]: New seat seat0. Feb 9 19:04:43.597968 systemd[1]: Started systemd-logind.service. Feb 9 19:04:43.640410 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:04:43.649208 tar[1300]: ./vlan Feb 9 19:04:43.650229 update_engine[1297]: I0209 19:04:43.649863 1297 main.cc:92] Flatcar Update Engine starting Feb 9 19:04:43.665059 systemd[1]: Started update-engine.service. Feb 9 19:04:43.667668 update_engine[1297]: I0209 19:04:43.665454 1297 update_check_scheduler.cc:74] Next update check in 11m45s Feb 9 19:04:43.672295 systemd[1]: Started locksmithd.service. Feb 9 19:04:43.680735 env[1308]: time="2024-02-09T19:04:43.680691175Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:04:43.683277 env[1308]: time="2024-02-09T19:04:43.683247889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:04:43.688992 env[1308]: time="2024-02-09T19:04:43.688954436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:04:43.689101 env[1308]: time="2024-02-09T19:04:43.689084362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:04:43.689471 env[1308]: time="2024-02-09T19:04:43.689426131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:04:43.689558 env[1308]: time="2024-02-09T19:04:43.689543254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:04:43.689631 env[1308]: time="2024-02-09T19:04:43.689616769Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:04:43.689692 env[1308]: time="2024-02-09T19:04:43.689679282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:04:43.689839 env[1308]: time="2024-02-09T19:04:43.689824711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:04:43.690175 env[1308]: time="2024-02-09T19:04:43.690151277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:04:43.691690 env[1308]: time="2024-02-09T19:04:43.691662180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:04:43.693923 env[1308]: time="2024-02-09T19:04:43.693898930Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 19:04:43.694083 env[1308]: time="2024-02-09T19:04:43.694062963Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:04:43.694175 env[1308]: time="2024-02-09T19:04:43.694161283Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:04:43.717493 env[1308]: time="2024-02-09T19:04:43.717424859Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717651705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717683011Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717733021Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717756126Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717777230Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717794034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717819439Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717837442Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1
Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717857847Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717877751Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.717897855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.718044584Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:04:43.718468 env[1308]: time="2024-02-09T19:04:43.718162508Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719094795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719138804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719159208Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719222521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719243925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719263629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719280333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719298836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719319140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719338044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719354347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.719467 env[1308]: time="2024-02-09T19:04:43.719373351Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720036284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720064090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720083394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720100797Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720123702Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720139105Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720162710Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:04:43.721453 env[1308]: time="2024-02-09T19:04:43.720204118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:04:43.721797 env[1308]: time="2024-02-09T19:04:43.720479874Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:04:43.721797 env[1308]: time="2024-02-09T19:04:43.720560290Z" level=info msg="Connect containerd service"
Feb 9 19:04:43.721797 env[1308]: time="2024-02-09T19:04:43.720606399Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:04:43.721797 env[1308]: time="2024-02-09T19:04:43.721274133Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.721934766Z" level=info msg="Start subscribing containerd event"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.721994578Z" level=info msg="Start recovering state"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722064392Z" level=info msg="Start event monitor"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722080195Z" level=info msg="Start snapshots syncer"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722092698Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722103200Z" level=info msg="Start streaming server"
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722453070Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722511582Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:04:43.731305 env[1308]: time="2024-02-09T19:04:43.722577795Z" level=info msg="containerd successfully booted in 0.224578s"
Feb 9 19:04:43.722665 systemd[1]: Started containerd.service.
Feb 9 19:04:43.800030 tar[1300]: ./portmap
Feb 9 19:04:43.878612 tar[1300]: ./host-local
Feb 9 19:04:43.946291 tar[1300]: ./vrf
Feb 9 19:04:43.983015 tar[1300]: ./bridge
Feb 9 19:04:44.027545 tar[1300]: ./tuning
Feb 9 19:04:44.085877 tar[1300]: ./firewall
Feb 9 19:04:44.174317 tar[1300]: ./host-device
Feb 9 19:04:44.258455 tar[1300]: ./sbr
Feb 9 19:04:44.260873 sshd_keygen[1306]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:04:44.325634 tar[1300]: ./loopback
Feb 9 19:04:44.329112 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:04:44.333835 systemd[1]: Starting issuegen.service...
Feb 9 19:04:44.337893 systemd[1]: Started waagent.service.
Feb 9 19:04:44.353759 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:04:44.353953 systemd[1]: Finished issuegen.service.
Feb 9 19:04:44.358095 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:04:44.369354 systemd[1]: Finished prepare-critools.service.
Feb 9 19:04:44.376908 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:04:44.381157 systemd[1]: Started getty@tty1.service.
Feb 9 19:04:44.388952 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:04:44.392109 systemd[1]: Reached target getty.target.
Feb 9 19:04:44.411381 tar[1300]: ./dhcp
Feb 9 19:04:44.498483 tar[1300]: ./ptp
Feb 9 19:04:44.534177 tar[1300]: ./ipvlan
Feb 9 19:04:44.567672 tar[1300]: ./bandwidth
Feb 9 19:04:44.620602 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:04:44.623961 systemd[1]: Reached target multi-user.target.
Feb 9 19:04:44.628364 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:04:44.637232 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:04:44.637416 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:04:44.640390 systemd[1]: Startup finished in 656ms (firmware) + 7.036s (loader) + 957ms (kernel) + 9.614s (initrd) + 9.763s (userspace) = 28.029s.
Feb 9 19:04:44.716578 locksmithd[1366]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:04:44.728180 login[1391]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:04:44.730689 login[1392]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:04:44.747198 systemd[1]: Created slice user-500.slice.
Feb 9 19:04:44.747805 systemd-logind[1295]: New session 2 of user core.
Feb 9 19:04:44.748879 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:04:44.757496 systemd-logind[1295]: New session 1 of user core.
Feb 9 19:04:44.761872 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:04:44.764721 systemd[1]: Starting user@500.service...
Feb 9 19:04:44.770711 (systemd)[1401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:04:44.873888 systemd[1401]: Queued start job for default target default.target.
Feb 9 19:04:44.874496 systemd[1401]: Reached target paths.target.
Feb 9 19:04:44.874525 systemd[1401]: Reached target sockets.target.
Feb 9 19:04:44.874541 systemd[1401]: Reached target timers.target.
Feb 9 19:04:44.874555 systemd[1401]: Reached target basic.target.
Feb 9 19:04:44.874692 systemd[1]: Started user@500.service.
Feb 9 19:04:44.875934 systemd[1]: Started session-1.scope.
Feb 9 19:04:44.876791 systemd[1]: Started session-2.scope.
Feb 9 19:04:44.877694 systemd[1401]: Reached target default.target.
Feb 9 19:04:44.877754 systemd[1401]: Startup finished in 100ms.
Feb 9 19:04:46.310804 waagent[1385]: 2024-02-09T19:04:46.310678Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 19:04:46.315449 waagent[1385]: 2024-02-09T19:04:46.315347Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 19:04:46.318543 waagent[1385]: 2024-02-09T19:04:46.318481Z INFO Daemon Daemon Python: 3.9.16
Feb 9 19:04:46.321237 waagent[1385]: 2024-02-09T19:04:46.321161Z INFO Daemon Daemon Run daemon
Feb 9 19:04:46.324027 waagent[1385]: 2024-02-09T19:04:46.323708Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 19:04:46.337320 waagent[1385]: 2024-02-09T19:04:46.337199Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:04:46.346077 waagent[1385]: 2024-02-09T19:04:46.345959Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:04:46.351529 waagent[1385]: 2024-02-09T19:04:46.351456Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:04:46.354203 waagent[1385]: 2024-02-09T19:04:46.354141Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 19:04:46.357464 waagent[1385]: 2024-02-09T19:04:46.357387Z INFO Daemon Daemon Activate resource disk
Feb 9 19:04:46.360163 waagent[1385]: 2024-02-09T19:04:46.360099Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 19:04:46.370712 waagent[1385]: 2024-02-09T19:04:46.370636Z INFO Daemon Daemon Found device: None
Feb 9 19:04:46.373601 waagent[1385]: 2024-02-09T19:04:46.373534Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 19:04:46.377622 waagent[1385]: 2024-02-09T19:04:46.377551Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 19:04:46.387968 waagent[1385]: 2024-02-09T19:04:46.378831Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:04:46.387968 waagent[1385]: 2024-02-09T19:04:46.379822Z INFO Daemon Daemon Running default provisioning handler
Feb 9 19:04:46.389845 waagent[1385]: 2024-02-09T19:04:46.389712Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:04:46.398361 waagent[1385]: 2024-02-09T19:04:46.398240Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:04:46.403659 waagent[1385]: 2024-02-09T19:04:46.403581Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:04:46.406524 waagent[1385]: 2024-02-09T19:04:46.406452Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 19:04:46.457129 waagent[1385]: 2024-02-09T19:04:46.453149Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 19:04:46.491189 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 19:04:46.508929 waagent[1385]: 2024-02-09T19:04:46.508777Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 19:04:46.512132 waagent[1385]: 2024-02-09T19:04:46.512040Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:04:46.515463 waagent[1385]: 2024-02-09T19:04:46.515377Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 19:04:46.518939 waagent[1385]: 2024-02-09T19:04:46.518866Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 19:04:46.522016 waagent[1385]: 2024-02-09T19:04:46.521942Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 19:04:46.524936 waagent[1385]: 2024-02-09T19:04:46.524871Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 19:04:46.559969 waagent[1385]: 2024-02-09T19:04:46.559888Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 19:04:46.568600 waagent[1385]: 2024-02-09T19:04:46.560915Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 19:04:46.568600 waagent[1385]: 2024-02-09T19:04:46.561605Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 19:04:46.785134 waagent[1385]: 2024-02-09T19:04:46.784967Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 19:04:46.795406 waagent[1385]: 2024-02-09T19:04:46.795318Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 19:04:46.800853 waagent[1385]: 2024-02-09T19:04:46.795799Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 19:04:46.870845 waagent[1385]: 2024-02-09T19:04:46.870701Z INFO Daemon Daemon Found private key matching thumbprint 04B76A1F1FFD6C34C8D59C5B9479BF25D93F9FB0
Feb 9 19:04:46.875991 waagent[1385]: 2024-02-09T19:04:46.875910Z INFO Daemon Daemon Certificate with thumbprint D3BF49CF5747E4090932C7BF2BB73ABF3631AEC5 has no matching private key.
Feb 9 19:04:46.881056 waagent[1385]: 2024-02-09T19:04:46.880979Z INFO Daemon Daemon Fetch goal state completed
Feb 9 19:04:46.905455 waagent[1385]: 2024-02-09T19:04:46.905353Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: c8e6fec7-f9f4-4c0e-a29b-28a4cc0ba6ab New eTag: 16165427914175485031]
Feb 9 19:04:46.910671 waagent[1385]: 2024-02-09T19:04:46.910593Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:04:46.922359 waagent[1385]: 2024-02-09T19:04:46.922282Z INFO Daemon Daemon Starting provisioning
Feb 9 19:04:46.924970 waagent[1385]: 2024-02-09T19:04:46.924896Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 19:04:46.927245 waagent[1385]: 2024-02-09T19:04:46.927182Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-97ddcae7e2]
Feb 9 19:04:46.937159 waagent[1385]: 2024-02-09T19:04:46.937033Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-97ddcae7e2]
Feb 9 19:04:46.940954 waagent[1385]: 2024-02-09T19:04:46.940862Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 19:04:46.944280 waagent[1385]: 2024-02-09T19:04:46.944199Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 19:04:46.959427 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 19:04:46.959711 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 19:04:46.959788 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 19:04:46.960140 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:04:46.965495 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Feb 9 19:04:46.966993 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:04:46.967156 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:04:46.969613 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:04:47.001249 systemd-networkd[1443]: enP60365s1: Link UP
Feb 9 19:04:47.001260 systemd-networkd[1443]: enP60365s1: Gained carrier
Feb 9 19:04:47.002849 systemd-networkd[1443]: eth0: Link UP
Feb 9 19:04:47.002859 systemd-networkd[1443]: eth0: Gained carrier
Feb 9 19:04:47.003306 systemd-networkd[1443]: lo: Link UP
Feb 9 19:04:47.003316 systemd-networkd[1443]: lo: Gained carrier
Feb 9 19:04:47.003679 systemd-networkd[1443]: eth0: Gained IPv6LL
Feb 9 19:04:47.004229 systemd-networkd[1443]: Enumeration completed
Feb 9 19:04:47.004359 systemd[1]: Started systemd-networkd.service.
Feb 9 19:04:47.006604 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:04:47.012853 waagent[1385]: 2024-02-09T19:04:47.009830Z INFO Daemon Daemon Create user account if not exists
Feb 9 19:04:47.014237 waagent[1385]: 2024-02-09T19:04:47.013443Z INFO Daemon Daemon User core already exists, skip useradd
Feb 9 19:04:47.015677 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:04:47.016806 waagent[1385]: 2024-02-09T19:04:47.016718Z INFO Daemon Daemon Configure sudoer
Feb 9 19:04:47.019662 waagent[1385]: 2024-02-09T19:04:47.019595Z INFO Daemon Daemon Configure sshd
Feb 9 19:04:47.022201 waagent[1385]: 2024-02-09T19:04:47.021898Z INFO Daemon Daemon Deploy ssh public key.
Feb 9 19:04:47.051567 systemd-networkd[1443]: eth0: DHCPv4 address 10.200.8.47/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:04:47.055194 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:04:48.267493 waagent[1385]: 2024-02-09T19:04:48.267374Z INFO Daemon Daemon Provisioning complete
Feb 9 19:04:48.284461 waagent[1385]: 2024-02-09T19:04:48.284372Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 9 19:04:48.288102 waagent[1385]: 2024-02-09T19:04:48.288027Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 9 19:04:48.293838 waagent[1385]: 2024-02-09T19:04:48.293772Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 9 19:04:48.561606 waagent[1452]: 2024-02-09T19:04:48.561408Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 9 19:04:48.562331 waagent[1452]: 2024-02-09T19:04:48.562261Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:04:48.562485 waagent[1452]: 2024-02-09T19:04:48.562420Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:04:48.573741 waagent[1452]: 2024-02-09T19:04:48.573663Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 9 19:04:48.573903 waagent[1452]: 2024-02-09T19:04:48.573848Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 9 19:04:48.634996 waagent[1452]: 2024-02-09T19:04:48.634879Z INFO ExtHandler ExtHandler Found private key matching thumbprint 04B76A1F1FFD6C34C8D59C5B9479BF25D93F9FB0
Feb 9 19:04:48.635206 waagent[1452]: 2024-02-09T19:04:48.635150Z INFO ExtHandler ExtHandler Certificate with thumbprint D3BF49CF5747E4090932C7BF2BB73ABF3631AEC5 has no matching private key.
Feb 9 19:04:48.635451 waagent[1452]: 2024-02-09T19:04:48.635384Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 9 19:04:48.649170 waagent[1452]: 2024-02-09T19:04:48.649105Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: f344e251-b973-46c8-8f38-c88299c557aa New eTag: 16165427914175485031]
Feb 9 19:04:48.649758 waagent[1452]: 2024-02-09T19:04:48.649699Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:04:48.687983 waagent[1452]: 2024-02-09T19:04:48.687858Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:04:48.697248 waagent[1452]: 2024-02-09T19:04:48.697167Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1452
Feb 9 19:04:48.702142 waagent[1452]: 2024-02-09T19:04:48.702069Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:04:48.703933 waagent[1452]: 2024-02-09T19:04:48.703866Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:04:48.729870 waagent[1452]: 2024-02-09T19:04:48.729802Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:04:48.732322 waagent[1452]: 2024-02-09T19:04:48.732253Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:04:48.740529 waagent[1452]: 2024-02-09T19:04:48.740472Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:04:48.741008 waagent[1452]: 2024-02-09T19:04:48.740948Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:04:48.742076 waagent[1452]: 2024-02-09T19:04:48.742009Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 9 19:04:48.743342 waagent[1452]: 2024-02-09T19:04:48.743284Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.743529Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.743707Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.744297Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.744645Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:04:48.752773 waagent[1452]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:04:48.752773 waagent[1452]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:04:48.752773 waagent[1452]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:04:48.752773 waagent[1452]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:04:48.752773 waagent[1452]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:04:48.752773 waagent[1452]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.748125Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
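The routing tables that MonitorHandler dumps above are the raw contents of /proc/net/route, where destination, gateway, and mask are little-endian hexadecimal IPv4 values. As an illustrative sketch (not part of waagent; the helper name is made up), the gateway field `0108C80A` of the default route decodes to the same 10.200.8.1 gateway that systemd-networkd reported acquiring via DHCP:

```python
import socket
import struct

def decode_hex_ip(hex_str: str) -> str:
    """Convert a little-endian hex IPv4 field from /proc/net/route
    (e.g. '0108C80A') into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<L", int(hex_str, 16)))

# Fields from the default route logged above:
print(decode_hex_ip("0108C80A"))  # gateway -> 10.200.8.1
print(decode_hex_ip("0008C80A"))  # on-link network -> 10.200.8.0
```

The `<L` format packs the integer little-endian, matching how the kernel exposes addresses in this proc file.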
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.748772Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.748960Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.749685Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.749847Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.749966Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.751171Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.751082Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:04:48.752773 waagent[1452]: 2024-02-09T19:04:48.752008Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 19:04:48.753669 waagent[1452]: 2024-02-09T19:04:48.751901Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:04:48.753934 waagent[1452]: 2024-02-09T19:04:48.753868Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:04:48.763955 waagent[1452]: 2024-02-09T19:04:48.763901Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 9 19:04:48.765563 waagent[1452]: 2024-02-09T19:04:48.765512Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:04:48.766600 waagent[1452]: 2024-02-09T19:04:48.766552Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 9 19:04:48.801293 waagent[1452]: 2024-02-09T19:04:48.801216Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 9 19:04:49.423040 waagent[1452]: 2024-02-09T19:04:49.422959Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1443'
Feb 9 19:04:49.436958 waagent[1452]: 2024-02-09T19:04:49.436841Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:04:49.436958 waagent[1452]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:04:49.436958 waagent[1452]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:04:49.436958 waagent[1452]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d9:84:db brd ff:ff:ff:ff:ff:ff
Feb 9 19:04:49.436958 waagent[1452]: 3: enP60365s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d9:84:db brd ff:ff:ff:ff:ff:ff\ altname enP60365p0s2
Feb 9 19:04:49.436958 waagent[1452]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:04:49.436958 waagent[1452]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:04:49.436958 waagent[1452]: 2: eth0 inet 10.200.8.47/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:04:49.436958 waagent[1452]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:04:49.436958 waagent[1452]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:04:49.436958 waagent[1452]: 2: eth0 inet6 fe80::20d:3aff:fed9:84db/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 19:04:49.670620 waagent[1452]: 2024-02-09T19:04:49.670419Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Feb 9 19:04:49.673718 waagent[1452]: 2024-02-09T19:04:49.673557Z INFO EnvHandler ExtHandler Firewall rules:
Feb 9 19:04:49.673718 waagent[1452]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:04:49.673718 waagent[1452]: pkts bytes target prot opt in out source destination
Feb 9 19:04:49.673718 waagent[1452]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:04:49.673718 waagent[1452]: pkts bytes target prot opt in out source destination
Feb 9 19:04:49.673718 waagent[1452]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:04:49.673718 waagent[1452]: pkts bytes target prot opt in out source destination
Feb 9 19:04:49.673718 waagent[1452]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 19:04:49.673718 waagent[1452]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 19:04:49.675095 waagent[1452]: 2024-02-09T19:04:49.675039Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 9 19:04:49.697278 waagent[1452]: 2024-02-09T19:04:49.697201Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 9 19:04:50.298755 waagent[1385]: 2024-02-09T19:04:50.298584Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 9 19:04:50.305251 waagent[1385]: 2024-02-09T19:04:50.305161Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 9 19:04:51.287234 waagent[1490]: 2024-02-09T19:04:51.287111Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 9 19:04:51.287965 waagent[1490]: 2024-02-09T19:04:51.287896Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 9 19:04:51.288116 waagent[1490]: 2024-02-09T19:04:51.288060Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 9 19:04:51.297505 waagent[1490]: 2024-02-09T19:04:51.297392Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:04:51.297882 waagent[1490]: 2024-02-09T19:04:51.297825Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:04:51.298042 waagent[1490]: 2024-02-09T19:04:51.297993Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:04:51.309758 waagent[1490]: 2024-02-09T19:04:51.309686Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 9 19:04:51.322203 waagent[1490]: 2024-02-09T19:04:51.322142Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 9 19:04:51.323107 waagent[1490]: 2024-02-09T19:04:51.323045Z INFO ExtHandler
Feb 9 19:04:51.323251 waagent[1490]: 2024-02-09T19:04:51.323200Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6c0e6dcd-c148-49f6-a88e-475548418076 eTag: 16165427914175485031 source: Fabric]
Feb 9 19:04:51.323968 waagent[1490]: 2024-02-09T19:04:51.323911Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 9 19:04:51.325054 waagent[1490]: 2024-02-09T19:04:51.324994Z INFO ExtHandler
Feb 9 19:04:51.325185 waagent[1490]: 2024-02-09T19:04:51.325136Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 9 19:04:51.331606 waagent[1490]: 2024-02-09T19:04:51.331557Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 9 19:04:51.332040 waagent[1490]: 2024-02-09T19:04:51.331993Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:04:51.352327 waagent[1490]: 2024-02-09T19:04:51.352242Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
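The interface listings that MonitorHandler echoes above are the single-line (`ip -o`) form of the iproute2 output, with each record's continuation fields folded behind backslashes. As a hypothetical sketch (the helper `parse_ip4_oneline` is not part of waagent), such a line splits back into an interface name and a CIDR address like so:

```python
def parse_ip4_oneline(line: str) -> tuple[str, str]:
    """Parse one line of `ip -4 -o address` output (as echoed in the
    MonitorHandler log) into (interface name, CIDR address)."""
    # Everything after the first backslash is continuation detail
    # (valid_lft/preferred_lft); the head holds index, ifname, family, addr.
    fields = line.split("\\")[0].split()
    return fields[1], fields[3]

# The eth0 record from the log above:
line = ("2: eth0 inet 10.200.8.47/24 metric 1024 brd 10.200.8.255 "
        "scope global eth0\\ valid_lft forever preferred_lft forever")
print(parse_ip4_oneline(line))  # -> ('eth0', '10.200.8.47/24')
```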
Feb 9 19:04:51.418574 waagent[1490]: 2024-02-09T19:04:51.418373Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D3BF49CF5747E4090932C7BF2BB73ABF3631AEC5', 'hasPrivateKey': False}
Feb 9 19:04:51.419595 waagent[1490]: 2024-02-09T19:04:51.419527Z INFO ExtHandler Downloaded certificate {'thumbprint': '04B76A1F1FFD6C34C8D59C5B9479BF25D93F9FB0', 'hasPrivateKey': True}
Feb 9 19:04:51.420558 waagent[1490]: 2024-02-09T19:04:51.420498Z INFO ExtHandler Fetch goal state completed
Feb 9 19:04:51.442867 waagent[1490]: 2024-02-09T19:04:51.442787Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1490
Feb 9 19:04:51.446136 waagent[1490]: 2024-02-09T19:04:51.446060Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:04:51.447586 waagent[1490]: 2024-02-09T19:04:51.447528Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:04:51.452425 waagent[1490]: 2024-02-09T19:04:51.452368Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:04:51.452809 waagent[1490]: 2024-02-09T19:04:51.452752Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:04:51.460763 waagent[1490]: 2024-02-09T19:04:51.460709Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:04:51.461200 waagent[1490]: 2024-02-09T19:04:51.461146Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:04:51.473317 waagent[1490]: 2024-02-09T19:04:51.473223Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Feb 9 19:04:51.475959 waagent[1490]: 2024-02-09T19:04:51.475862Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Feb 9 19:04:51.480577 waagent[1490]: 2024-02-09T19:04:51.480518Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 9 19:04:51.481950 waagent[1490]: 2024-02-09T19:04:51.481892Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:04:51.482797 waagent[1490]: 2024-02-09T19:04:51.482741Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 19:04:51.482907 waagent[1490]: 2024-02-09T19:04:51.482844Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:04:51.483135 waagent[1490]: 2024-02-09T19:04:51.483084Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:04:51.483441 waagent[1490]: 2024-02-09T19:04:51.483380Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:04:51.484115 waagent[1490]: 2024-02-09T19:04:51.484061Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:04:51.484557 waagent[1490]: 2024-02-09T19:04:51.484497Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:04:51.484730 waagent[1490]: 2024-02-09T19:04:51.484662Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:04:51.484987 waagent[1490]: 2024-02-09T19:04:51.484933Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:04:51.485143 waagent[1490]: 2024-02-09T19:04:51.485090Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:04:51.485143 waagent[1490]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:04:51.485143 waagent[1490]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:04:51.485143 waagent[1490]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:04:51.485143 waagent[1490]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:04:51.485143 waagent[1490]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:04:51.485143 waagent[1490]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:04:51.485823 waagent[1490]: 2024-02-09T19:04:51.485774Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:04:51.487969 waagent[1490]: 2024-02-09T19:04:51.487856Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:04:51.488482 waagent[1490]: 2024-02-09T19:04:51.488379Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 19:04:51.488594 waagent[1490]: 2024-02-09T19:04:51.488534Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:04:51.488791 waagent[1490]: 2024-02-09T19:04:51.488743Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:04:51.492778 waagent[1490]: 2024-02-09T19:04:51.492727Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:04:51.512079 waagent[1490]: 2024-02-09T19:04:51.512010Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 9 19:04:51.512517 waagent[1490]: 2024-02-09T19:04:51.512460Z INFO ExtHandler ExtHandler Downloading manifest
Feb 9 19:04:51.520181 waagent[1490]: 2024-02-09T19:04:51.519980Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:04:51.520181 waagent[1490]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:04:51.520181 waagent[1490]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:04:51.520181 waagent[1490]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d9:84:db brd ff:ff:ff:ff:ff:ff
Feb 9 19:04:51.520181 waagent[1490]: 3: enP60365s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d9:84:db brd ff:ff:ff:ff:ff:ff\ altname enP60365p0s2
Feb 9 19:04:51.520181 waagent[1490]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:04:51.520181 waagent[1490]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:04:51.520181 waagent[1490]: 2: eth0 inet 10.200.8.47/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:04:51.520181 waagent[1490]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:04:51.520181 waagent[1490]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:04:51.520181 waagent[1490]: 2: eth0 inet6
fe80::20d:3aff:fed9:84db/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:04:51.573788 waagent[1490]: 2024-02-09T19:04:51.573718Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:04:51.573788 waagent[1490]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:04:51.573788 waagent[1490]: pkts bytes target prot opt in out source destination Feb 9 19:04:51.573788 waagent[1490]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:04:51.573788 waagent[1490]: pkts bytes target prot opt in out source destination Feb 9 19:04:51.573788 waagent[1490]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:04:51.573788 waagent[1490]: pkts bytes target prot opt in out source destination Feb 9 19:04:51.573788 waagent[1490]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:04:51.573788 waagent[1490]: 104 12591 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:04:51.573788 waagent[1490]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:04:51.583702 waagent[1490]: 2024-02-09T19:04:51.583652Z INFO ExtHandler ExtHandler Feb 9 19:04:51.583977 waagent[1490]: 2024-02-09T19:04:51.583921Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4feb023f-be2e-41da-9ed8-2d6ea52c994f correlation f58a5dcc-e86a-4c20-ac2d-99a703b89e11 created: 2024-02-09T19:04:06.483711Z] Feb 9 19:04:51.584761 waagent[1490]: 2024-02-09T19:04:51.584700Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 19:04:51.586453 waagent[1490]: 2024-02-09T19:04:51.586395Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 9 19:04:51.608604 waagent[1490]: 2024-02-09T19:04:51.608541Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
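The eth0 link-local address in the interface dump above (fe80::20d:3aff:fed9:84db) is the modified EUI-64 expansion of the MAC 00:0d:3a:d9:84:db that both eth0 and its SR-IOV companion enP60365s1 report. A sketch of the derivation (the zero-compression shortcut at the end assumes no interface-ID group is zero, which holds here):

```python
def mac_to_eui64_link_local(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC via modified EUI-64."""
    b = [int(octet, 16) for octet in mac.split(":")]
    b[0] ^= 0x02                         # flip the universal/local bit
    eui = b[:3] + [0xFF, 0xFE] + b[3:]   # insert ff:fe between the OUI and NIC halves
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)   # valid here since no group is zero

print(mac_to_eui64_link_local("00:0d:3a:d9:84:db"))
```

The shared MAC is expected: enP60365s1 is the accelerated-networking VF enslaved to the synthetic eth0, so both expose the same hardware address.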
Feb 9 19:04:51.618054 waagent[1490]: 2024-02-09T19:04:51.617980Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 14478217-D190-4593-A52B-05B5A20394B7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:05:13.541756 systemd[1]: Created slice system-sshd.slice. Feb 9 19:05:13.543390 systemd[1]: Started sshd@0-10.200.8.47:22-10.200.12.6:45270.service. Feb 9 19:05:14.277130 sshd[1529]: Accepted publickey for core from 10.200.12.6 port 45270 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:14.278862 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:14.284156 systemd-logind[1295]: New session 3 of user core. Feb 9 19:05:14.285131 systemd[1]: Started session-3.scope. Feb 9 19:05:14.920476 systemd[1]: Started sshd@1-10.200.8.47:22-10.200.12.6:45278.service. Feb 9 19:05:15.576657 sshd[1534]: Accepted publickey for core from 10.200.12.6 port 45278 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:15.578369 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:15.583079 systemd-logind[1295]: New session 4 of user core. Feb 9 19:05:15.583718 systemd[1]: Started session-4.scope. Feb 9 19:05:16.047116 sshd[1534]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:16.050710 systemd[1]: sshd@1-10.200.8.47:22-10.200.12.6:45278.service: Deactivated successfully. Feb 9 19:05:16.051643 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:05:16.052248 systemd-logind[1295]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:05:16.053030 systemd-logind[1295]: Removed session 4. Feb 9 19:05:16.159672 systemd[1]: Started sshd@2-10.200.8.47:22-10.200.12.6:45280.service. 
Feb 9 19:05:16.790652 sshd[1540]: Accepted publickey for core from 10.200.12.6 port 45280 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:16.792421 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:16.797713 systemd[1]: Started session-5.scope. Feb 9 19:05:16.798303 systemd-logind[1295]: New session 5 of user core. Feb 9 19:05:17.564364 sshd[1540]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:17.567902 systemd[1]: sshd@2-10.200.8.47:22-10.200.12.6:45280.service: Deactivated successfully. Feb 9 19:05:17.568937 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:05:17.569707 systemd-logind[1295]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:05:17.570531 systemd-logind[1295]: Removed session 5. Feb 9 19:05:17.741338 systemd[1]: Started sshd@3-10.200.8.47:22-10.200.12.6:42990.service. Feb 9 19:05:18.915337 sshd[1546]: Accepted publickey for core from 10.200.12.6 port 42990 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:18.917044 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:18.922140 systemd[1]: Started session-6.scope. Feb 9 19:05:18.922612 systemd-logind[1295]: New session 6 of user core. Feb 9 19:05:19.988983 sshd[1546]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:19.992720 systemd[1]: sshd@3-10.200.8.47:22-10.200.12.6:42990.service: Deactivated successfully. Feb 9 19:05:19.993625 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:05:19.994253 systemd-logind[1295]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:05:19.995088 systemd-logind[1295]: Removed session 6. Feb 9 19:05:20.201814 systemd[1]: Started sshd@4-10.200.8.47:22-10.200.12.6:42998.service. 
Feb 9 19:05:21.024093 sshd[1552]: Accepted publickey for core from 10.200.12.6 port 42998 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:21.025805 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:21.031375 systemd[1]: Started session-7.scope. Feb 9 19:05:21.031817 systemd-logind[1295]: New session 7 of user core. Feb 9 19:05:21.424305 sudo[1555]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:05:21.424587 sudo[1555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:05:22.012134 systemd[1]: Reloading. Feb 9 19:05:22.099086 /usr/lib/systemd/system-generators/torcx-generator[1585]: time="2024-02-09T19:05:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:05:22.099611 /usr/lib/systemd/system-generators/torcx-generator[1585]: time="2024-02-09T19:05:22Z" level=info msg="torcx already run" Feb 9 19:05:22.186141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:05:22.186162 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:05:22.202189 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:05:22.290116 systemd[1]: Started kubelet.service. Feb 9 19:05:22.308619 systemd[1]: Starting coreos-metadata.service... 
Feb 9 19:05:22.362602 coreos-metadata[1654]: Feb 09 19:05:22.362 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:05:22.365992 coreos-metadata[1654]: Feb 09 19:05:22.365 INFO Fetch successful Feb 9 19:05:22.366165 coreos-metadata[1654]: Feb 09 19:05:22.366 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 9 19:05:22.368315 coreos-metadata[1654]: Feb 09 19:05:22.368 INFO Fetch successful Feb 9 19:05:22.368475 coreos-metadata[1654]: Feb 09 19:05:22.368 INFO Fetching http://168.63.129.16/machine/f7656108-246c-4a4a-98af-ebce7b71166a/0ca8074d%2D9232%2D40d7%2D8314%2D036c8e8a4ede.%5Fci%2D3510.3.2%2Da%2D97ddcae7e2?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 9 19:05:22.369796 coreos-metadata[1654]: Feb 09 19:05:22.369 INFO Fetch successful Feb 9 19:05:22.386914 kubelet[1646]: E0209 19:05:22.386850 1646 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:05:22.389300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:05:22.389506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:05:22.411867 coreos-metadata[1654]: Feb 09 19:05:22.411 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:05:22.423478 coreos-metadata[1654]: Feb 09 19:05:22.423 INFO Fetch successful Feb 9 19:05:22.432688 systemd[1]: Finished coreos-metadata.service. Feb 9 19:05:23.490532 systemd[1]: Stopped kubelet.service. Feb 9 19:05:23.504610 systemd[1]: Reloading. 
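The first kubelet start above exits with status 1 because flag validation requires a container runtime endpoint (`--container-runtime-endpoint`), which was not set. The later, successful start reports containerd 1.6.16 as the runtime, so the fix presumably supplied a containerd socket to the kubelet. A hedged sketch of one common way to do that, via a systemd drop-in; the drop-in path, the `KUBELET_EXTRA_ARGS` variable, and the socket location are assumptions, not taken from this log:

```ini
# /etc/systemd/system/kubelet.service.d/10-container-runtime.conf  (assumed path)
# Assumes kubelet.service expands $KUBELET_EXTRA_ARGS in its ExecStart line.
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

After adding such a drop-in, a `systemctl daemon-reload` followed by restarting kubelet.service would pick up the flag, matching the Reloading/Started sequence seen in the log.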
Feb 9 19:05:23.592012 /usr/lib/systemd/system-generators/torcx-generator[1710]: time="2024-02-09T19:05:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:05:23.595936 /usr/lib/systemd/system-generators/torcx-generator[1710]: time="2024-02-09T19:05:23Z" level=info msg="torcx already run" Feb 9 19:05:23.679335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:05:23.679357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:05:23.695292 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:05:23.789113 systemd[1]: Started kubelet.service. Feb 9 19:05:23.837809 kubelet[1772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:05:23.837809 kubelet[1772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:05:23.838263 kubelet[1772]: I0209 19:05:23.837864 1772 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:05:23.839156 kubelet[1772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 19:05:23.839264 kubelet[1772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:05:24.290532 kubelet[1772]: I0209 19:05:24.290487 1772 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:05:24.290532 kubelet[1772]: I0209 19:05:24.290519 1772 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:05:24.290811 kubelet[1772]: I0209 19:05:24.290791 1772 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:05:24.292992 kubelet[1772]: I0209 19:05:24.292965 1772 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:05:24.296077 kubelet[1772]: I0209 19:05:24.296043 1772 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:05:24.296292 kubelet[1772]: I0209 19:05:24.296272 1772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:05:24.296374 kubelet[1772]: I0209 19:05:24.296357 1772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:05:24.296528 kubelet[1772]: I0209 19:05:24.296389 1772 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:05:24.296528 kubelet[1772]: I0209 19:05:24.296406 1772 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:05:24.296613 kubelet[1772]: I0209 19:05:24.296540 1772 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:05:24.299990 kubelet[1772]: I0209 19:05:24.299970 1772 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:05:24.300118 kubelet[1772]: I0209 19:05:24.300105 1772 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:05:24.300168 kubelet[1772]: I0209 19:05:24.300142 1772 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:05:24.300168 kubelet[1772]: I0209 19:05:24.300163 1772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:05:24.300533 kubelet[1772]: E0209 19:05:24.300506 1772 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:24.300600 kubelet[1772]: E0209 19:05:24.300569 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:24.301066 kubelet[1772]: I0209 19:05:24.301045 1772 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:05:24.301374 kubelet[1772]: W0209 19:05:24.301356 1772 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:05:24.301870 kubelet[1772]: I0209 19:05:24.301840 1772 server.go:1186] "Started kubelet" Feb 9 19:05:24.302083 kubelet[1772]: I0209 19:05:24.302064 1772 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:05:24.302885 kubelet[1772]: I0209 19:05:24.302863 1772 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:05:24.308223 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 19:05:24.309917 kubelet[1772]: I0209 19:05:24.309901 1772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:05:24.310397 kubelet[1772]: E0209 19:05:24.310378 1772 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:05:24.310488 kubelet[1772]: E0209 19:05:24.310409 1772 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:05:24.314656 kubelet[1772]: E0209 19:05:24.314538 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a4406ead5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 301818581, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 301818581, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not 
retry!) Feb 9 19:05:24.319528 kubelet[1772]: E0209 19:05:24.319424 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a4489bb33", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 310391603, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 310391603, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
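The names of the rejected events above (e.g. 10.200.8.47.17b2473a4406ead5) follow the Kubernetes convention of suffixing the involved object's name with the event's first timestamp as hexadecimal Unix nanoseconds. Decoding the suffix recovers exactly the FirstTimestamp printed in the event body (the helper name below is ours):

```python
from datetime import datetime, timezone

def decode_event_suffix(suffix: str):
    """Decode a Kubernetes event-name suffix (hex Unix nanoseconds)."""
    ns = int(suffix, 16)
    secs, nanos = divmod(ns, 1_000_000_000)
    return datetime.fromtimestamp(secs, tz=timezone.utc), nanos

# Suffix from the rejected "Starting kubelet." event above:
ts, nanos = decode_event_suffix("17b2473a4406ead5")
print(ts, nanos)  # 2024-02-09 19:05:24+00:00, 301818581 ns
```

This matches time.Date(2024, time.February, 9, 19, 5, 24, 301818581, ...) in the event, confirming the node clock is on UTC. The rejections themselves are the expected TLS-bootstrap race: until the node's client certificate is issued, requests run as system:anonymous, which cannot create events or register the node.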
Feb 9 19:05:24.319662 kubelet[1772]: W0209 19:05:24.319592 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:05:24.319662 kubelet[1772]: E0209 19:05:24.319623 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:05:24.319760 kubelet[1772]: W0209 19:05:24.319667 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:05:24.319760 kubelet[1772]: E0209 19:05:24.319679 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:05:24.319839 kubelet[1772]: E0209 19:05:24.319824 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found" Feb 9 19:05:24.319880 kubelet[1772]: I0209 19:05:24.319859 1772 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:05:24.320548 kubelet[1772]: I0209 19:05:24.320532 1772 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:05:24.323364 kubelet[1772]: E0209 19:05:24.323348 1772 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.8.47" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:05:24.329123 kubelet[1772]: W0209 19:05:24.329102 1772 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:05:24.329214 kubelet[1772]: E0209 19:05:24.329129 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:05:24.352415 kubelet[1772]: I0209 19:05:24.352391 1772 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:05:24.352630 kubelet[1772]: I0209 19:05:24.352604 1772 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:05:24.352630 kubelet[1772]: I0209 19:05:24.352630 1772 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:05:24.357006 kubelet[1772]: I0209 19:05:24.356983 1772 policy_none.go:49] "None policy: Start" Feb 9 19:05:24.357163 kubelet[1772]: E0209 19:05:24.357090 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, 
FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:24.357892 kubelet[1772]: I0209 19:05:24.357869 1772 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:05:24.357892 kubelet[1772]: I0209 19:05:24.357895 1772 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:05:24.361617 kubelet[1772]: E0209 19:05:24.361536 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:24.362582 kubelet[1772]: E0209 19:05:24.362518 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:24.364989 systemd[1]: Created slice kubepods.slice. Feb 9 19:05:24.369332 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:05:24.372770 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 19:05:24.379344 kubelet[1772]: I0209 19:05:24.379328 1772 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:05:24.380890 kubelet[1772]: E0209 19:05:24.380873 1772 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.47\" not found" Feb 9 19:05:24.381001 kubelet[1772]: I0209 19:05:24.380921 1772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:05:24.383122 kubelet[1772]: E0209 19:05:24.383052 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a48ce8d65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 382010725, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 382010725, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:24.421462 kubelet[1772]: I0209 19:05:24.421410 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47" Feb 9 19:05:24.423221 kubelet[1772]: E0209 19:05:24.423186 1772 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.47" Feb 9 19:05:24.423420 kubelet[1772]: E0209 19:05:24.423154 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 421361586, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9804a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:24.424428 kubelet[1772]: E0209 19:05:24.424352 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 421368286, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f99ec2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:24.425278 kubelet[1772]: E0209 19:05:24.425223 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 421372786, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9adff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:24.525810 kubelet[1772]: E0209 19:05:24.525771 1772 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.8.47" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:05:24.580344 kubelet[1772]: I0209 19:05:24.580307 1772 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:05:24.624416 kubelet[1772]: I0209 19:05:24.624373 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47" Feb 9 19:05:24.625689 kubelet[1772]: E0209 19:05:24.625660 1772 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.47" Feb 9 19:05:24.625909 kubelet[1772]: E0209 19:05:24.625662 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 624326881, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9804a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:24.626810 kubelet[1772]: E0209 19:05:24.626744 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 624336081, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f99ec2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:24.648845 kubelet[1772]: I0209 19:05:24.648819 1772 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:05:24.648845 kubelet[1772]: I0209 19:05:24.648844 1772 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:05:24.649022 kubelet[1772]: I0209 19:05:24.648865 1772 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:05:24.649022 kubelet[1772]: E0209 19:05:24.648910 1772 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:05:24.651352 kubelet[1772]: W0209 19:05:24.651331 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:05:24.651521 kubelet[1772]: E0209 19:05:24.651507 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:05:24.706666 kubelet[1772]: E0209 19:05:24.706567 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", 
Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 624340881, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9adff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:24.927580 kubelet[1772]: E0209 19:05:24.927452 1772 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.8.47" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:05:25.026934 kubelet[1772]: I0209 19:05:25.026887 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47" Feb 9 19:05:25.027957 kubelet[1772]: E0209 19:05:25.027921 1772 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.47" Feb 9 19:05:25.028516 kubelet[1772]: E0209 19:05:25.028403 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 25, 26830715, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9804a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:25.107117 kubelet[1772]: E0209 19:05:25.107005 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 25, 26843816, time.Local), Count:4, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f99ec2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:25.301713 kubelet[1772]: E0209 19:05:25.301595 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:25.306685 kubelet[1772]: E0209 19:05:25.306591 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 25, 26849416, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9adff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace 
"default"' (will not retry!) Feb 9 19:05:25.367666 kubelet[1772]: W0209 19:05:25.367615 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:05:25.367666 kubelet[1772]: E0209 19:05:25.367662 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:05:25.506211 kubelet[1772]: W0209 19:05:25.506168 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:05:25.506211 kubelet[1772]: E0209 19:05:25.506210 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:05:25.616223 kubelet[1772]: W0209 19:05:25.616176 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:05:25.616223 kubelet[1772]: E0209 19:05:25.616220 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:05:25.729676 kubelet[1772]: E0209 19:05:25.729618 1772 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: 
leases.coordination.k8s.io "10.200.8.47" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:05:25.829280 kubelet[1772]: I0209 19:05:25.829233 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47" Feb 9 19:05:25.830175 kubelet[1772]: E0209 19:05:25.830143 1772 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.47" Feb 9 19:05:25.830468 kubelet[1772]: E0209 19:05:25.830364 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 25, 829180243, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9804a" is forbidden: User "system:anonymous" cannot 
patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:25.831948 kubelet[1772]: E0209 19:05:25.831876 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 25, 829192243, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f99ec2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:25.844038 kubelet[1772]: W0209 19:05:25.844011 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:05:25.844038 kubelet[1772]: E0209 19:05:25.844041 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:05:25.906573 kubelet[1772]: E0209 19:05:25.906369 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 25, 829196744, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.200.8.47.17b2473a46f9adff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:26.302507 kubelet[1772]: E0209 19:05:26.302345 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:27.303525 kubelet[1772]: E0209 19:05:27.303465 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:27.331106 kubelet[1772]: E0209 19:05:27.331052 1772 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.8.47" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:05:27.432163 kubelet[1772]: I0209 19:05:27.431897 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47" Feb 9 19:05:27.433470 kubelet[1772]: E0209 19:05:27.433276 1772 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.47" Feb 9 19:05:27.433470 kubelet[1772]: E0209 19:05:27.433208 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", 
ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 27, 431839654, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9804a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:27.434333 kubelet[1772]: E0209 19:05:27.434255 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 27, 431851954, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f99ec2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:05:27.435163 kubelet[1772]: E0209 19:05:27.435089 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 27, 431859755, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9adff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:27.610455 kubelet[1772]: W0209 19:05:27.610403 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:27.610455 kubelet[1772]: E0209 19:05:27.610468 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:27.713817 kubelet[1772]: W0209 19:05:27.713771 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:27.713817 kubelet[1772]: E0209 19:05:27.713813 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:27.995661 kubelet[1772]: W0209 19:05:27.995531 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:27.995661 kubelet[1772]: E0209 19:05:27.995577 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:28.155948 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb 9 19:05:28.304446 kubelet[1772]: E0209 19:05:28.304303 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:28.917872 kubelet[1772]: W0209 19:05:28.917820 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:28.917872 kubelet[1772]: E0209 19:05:28.917868 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:29.304853 kubelet[1772]: E0209 19:05:29.304709 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:29.309535 update_engine[1297]: I0209 19:05:29.309491 1297 update_attempter.cc:509] Updating boot flags...
Feb 9 19:05:30.305537 kubelet[1772]: E0209 19:05:30.305472 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:30.532666 kubelet[1772]: E0209 19:05:30.532614 1772 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.8.47" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:30.634403 kubelet[1772]: I0209 19:05:30.634358 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47"
Feb 9 19:05:30.635786 kubelet[1772]: E0209 19:05:30.635752 1772 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.47"
Feb 9 19:05:30.635948 kubelet[1772]: E0209 19:05:30.635805 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9804a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.47 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351270986, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 30, 634307445, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9804a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:30.636835 kubelet[1772]: E0209 19:05:30.636757 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f99ec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.47 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351278786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 30, 634320345, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f99ec2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:30.637708 kubelet[1772]: E0209 19:05:30.637635 1772 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.47.17b2473a46f9adff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.47", UID:"10.200.8.47", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.47 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.47"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 24, 351282687, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 30, 634325945, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.47.17b2473a46f9adff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:31.306106 kubelet[1772]: E0209 19:05:31.306052 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:31.892657 kubelet[1772]: W0209 19:05:31.892614 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:31.892657 kubelet[1772]: E0209 19:05:31.892656 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:32.306923 kubelet[1772]: E0209 19:05:32.306787 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:33.307282 kubelet[1772]: E0209 19:05:33.307215 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:33.537670 kubelet[1772]: W0209 19:05:33.537618 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:33.537670 kubelet[1772]: E0209 19:05:33.537666 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.47" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:33.599108 kubelet[1772]: W0209 19:05:33.599058 1772 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:33.599108 kubelet[1772]: E0209 19:05:33.599107 1772 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:34.292840 kubelet[1772]: I0209 19:05:34.292775 1772 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 9 19:05:34.308210 kubelet[1772]: E0209 19:05:34.308165 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:34.381610 kubelet[1772]: E0209 19:05:34.381504 1772 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.47\" not found"
Feb 9 19:05:34.669471 kubelet[1772]: E0209 19:05:34.669415 1772 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.47" not found
Feb 9 19:05:35.308925 kubelet[1772]: E0209 19:05:35.308856 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:35.925372 kubelet[1772]: E0209 19:05:35.925266 1772 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.47" not found
Feb 9 19:05:36.309260 kubelet[1772]: E0209 19:05:36.309002 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:36.938064 kubelet[1772]: E0209 19:05:36.938010 1772 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.47\" not found" node="10.200.8.47"
Feb 9 19:05:37.037523 kubelet[1772]: I0209 19:05:37.037485 1772 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.47"
Feb 9 19:05:37.309377 kubelet[1772]: E0209 19:05:37.309204 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:37.324991 kubelet[1772]: I0209 19:05:37.324957 1772 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.47"
Feb 9 19:05:37.336612 kubelet[1772]: E0209 19:05:37.336572 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:37.436945 kubelet[1772]: E0209 19:05:37.436884 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:37.537928 kubelet[1772]: E0209 19:05:37.537873 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:37.638909 kubelet[1772]: E0209 19:05:37.638829 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:37.739964 kubelet[1772]: E0209 19:05:37.739899 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:37.804865 sudo[1555]: pam_unix(sudo:session): session closed for user root
Feb 9 19:05:37.840117 kubelet[1772]: E0209 19:05:37.840060 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:37.908855 sshd[1552]: pam_unix(sshd:session): session closed for user core
Feb 9 19:05:37.912683 systemd[1]: sshd@4-10.200.8.47:22-10.200.12.6:42998.service: Deactivated successfully.
Feb 9 19:05:37.913812 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:05:37.914694 systemd-logind[1295]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:05:37.915830 systemd-logind[1295]: Removed session 7.
Feb 9 19:05:37.940260 kubelet[1772]: E0209 19:05:37.940218 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.041188 kubelet[1772]: E0209 19:05:38.041124 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.142078 kubelet[1772]: E0209 19:05:38.142019 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.243072 kubelet[1772]: E0209 19:05:38.242931 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.309484 kubelet[1772]: E0209 19:05:38.309422 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:38.343956 kubelet[1772]: E0209 19:05:38.343909 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.445029 kubelet[1772]: E0209 19:05:38.444963 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.546286 kubelet[1772]: E0209 19:05:38.546069 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.647059 kubelet[1772]: E0209 19:05:38.646965 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.747509 kubelet[1772]: E0209 19:05:38.747448 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.847862 kubelet[1772]: E0209 19:05:38.847800 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:38.948661 kubelet[1772]: E0209 19:05:38.948597 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.049556 kubelet[1772]: E0209 19:05:39.049497 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.150479 kubelet[1772]: E0209 19:05:39.150315 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.251333 kubelet[1772]: E0209 19:05:39.251270 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.309938 kubelet[1772]: E0209 19:05:39.309876 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:39.351596 kubelet[1772]: E0209 19:05:39.351527 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.452826 kubelet[1772]: E0209 19:05:39.452676 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.553548 kubelet[1772]: E0209 19:05:39.553485 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.653666 kubelet[1772]: E0209 19:05:39.653601 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.754878 kubelet[1772]: E0209 19:05:39.754629 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.855558 kubelet[1772]: E0209 19:05:39.855491 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:39.956378 kubelet[1772]: E0209 19:05:39.956318 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.057237 kubelet[1772]: E0209 19:05:40.057098 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.158006 kubelet[1772]: E0209 19:05:40.157940 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.258597 kubelet[1772]: E0209 19:05:40.258533 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.310175 kubelet[1772]: E0209 19:05:40.310012 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:40.359681 kubelet[1772]: E0209 19:05:40.359619 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.460404 kubelet[1772]: E0209 19:05:40.460336 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.561091 kubelet[1772]: E0209 19:05:40.560937 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.662134 kubelet[1772]: E0209 19:05:40.662076 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.763290 kubelet[1772]: E0209 19:05:40.763228 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.864331 kubelet[1772]: E0209 19:05:40.864266 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:40.965072 kubelet[1772]: E0209 19:05:40.965012 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.066074 kubelet[1772]: E0209 19:05:41.066013 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.167150 kubelet[1772]: E0209 19:05:41.167006 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.267915 kubelet[1772]: E0209 19:05:41.267853 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.310500 kubelet[1772]: E0209 19:05:41.310420 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:41.368249 kubelet[1772]: E0209 19:05:41.368192 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.469463 kubelet[1772]: E0209 19:05:41.469299 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.570034 kubelet[1772]: E0209 19:05:41.569969 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.671102 kubelet[1772]: E0209 19:05:41.671039 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.772330 kubelet[1772]: E0209 19:05:41.772189 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.873223 kubelet[1772]: E0209 19:05:41.873164 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:41.974009 kubelet[1772]: E0209 19:05:41.973948 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.074541 kubelet[1772]: E0209 19:05:42.074500 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.175222 kubelet[1772]: E0209 19:05:42.175163 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.275930 kubelet[1772]: E0209 19:05:42.275865 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.311464 kubelet[1772]: E0209 19:05:42.311372 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:42.377044 kubelet[1772]: E0209 19:05:42.376906 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.477883 kubelet[1772]: E0209 19:05:42.477824 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.578665 kubelet[1772]: E0209 19:05:42.578604 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.679324 kubelet[1772]: E0209 19:05:42.678734 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.779761 kubelet[1772]: E0209 19:05:42.779703 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.880681 kubelet[1772]: E0209 19:05:42.880620 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:42.981537 kubelet[1772]: E0209 19:05:42.981376 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.082067 kubelet[1772]: E0209 19:05:43.082031 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.182844 kubelet[1772]: E0209 19:05:43.182787 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.283747 kubelet[1772]: E0209 19:05:43.283604 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.312229 kubelet[1772]: E0209 19:05:43.312164 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:43.384674 kubelet[1772]: E0209 19:05:43.384609 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.485693 kubelet[1772]: E0209 19:05:43.485630 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.586659 kubelet[1772]: E0209 19:05:43.586612 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.686959 kubelet[1772]: E0209 19:05:43.686914 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.787482 kubelet[1772]: E0209 19:05:43.787412 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.887842 kubelet[1772]: E0209 19:05:43.887693 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:43.988484 kubelet[1772]: E0209 19:05:43.988407 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:44.089221 kubelet[1772]: E0209 19:05:44.089157 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:44.189597 kubelet[1772]: E0209 19:05:44.189451 1772 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.47\" not found"
Feb 9 19:05:44.290947 kubelet[1772]: I0209 19:05:44.290905 1772 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 9 19:05:44.291498 env[1308]: time="2024-02-09T19:05:44.291424770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:05:44.291997 kubelet[1772]: I0209 19:05:44.291699 1772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 9 19:05:44.300577 kubelet[1772]: E0209 19:05:44.300548 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:44.312840 kubelet[1772]: I0209 19:05:44.312805 1772 apiserver.go:52] "Watching apiserver"
Feb 9 19:05:44.313251 kubelet[1772]: E0209 19:05:44.312814 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:44.315341 kubelet[1772]: I0209 19:05:44.315314 1772 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:05:44.315472 kubelet[1772]: I0209 19:05:44.315419 1772 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:05:44.322913 systemd[1]: Created slice kubepods-besteffort-pod9ad88d33_c78e_4a97_a0c8_177621cc1ab5.slice.
Feb 9 19:05:44.323536 kubelet[1772]: I0209 19:05:44.323520 1772 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:05:44.333775 systemd[1]: Created slice kubepods-burstable-pod073c80ed_0181_4de2_bf33_1d3394b5cf09.slice.
Feb 9 19:05:44.347283 kubelet[1772]: I0209 19:05:44.347258 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ad88d33-c78e-4a97-a0c8-177621cc1ab5-kube-proxy\") pod \"kube-proxy-hh47k\" (UID: \"9ad88d33-c78e-4a97-a0c8-177621cc1ab5\") " pod="kube-system/kube-proxy-hh47k"
Feb 9 19:05:44.347400 kubelet[1772]: I0209 19:05:44.347301 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgsr\" (UniqueName: \"kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-kube-api-access-ghgsr\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347400 kubelet[1772]: I0209 19:05:44.347330 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad88d33-c78e-4a97-a0c8-177621cc1ab5-xtables-lock\") pod \"kube-proxy-hh47k\" (UID: \"9ad88d33-c78e-4a97-a0c8-177621cc1ab5\") " pod="kube-system/kube-proxy-hh47k"
Feb 9 19:05:44.347400 kubelet[1772]: I0209 19:05:44.347358 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad88d33-c78e-4a97-a0c8-177621cc1ab5-lib-modules\") pod \"kube-proxy-hh47k\" (UID: \"9ad88d33-c78e-4a97-a0c8-177621cc1ab5\") " pod="kube-system/kube-proxy-hh47k"
Feb 9 19:05:44.347400 kubelet[1772]: I0209 19:05:44.347384 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-bpf-maps\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347601 kubelet[1772]: I0209 19:05:44.347409 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-hostproc\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347601 kubelet[1772]: I0209 19:05:44.347449 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-lib-modules\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347601 kubelet[1772]: I0209 19:05:44.347481 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-xtables-lock\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347601 kubelet[1772]: I0209 19:05:44.347513 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-kernel\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347601 kubelet[1772]: I0209 19:05:44.347541 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-cgroup\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347601 kubelet[1772]: I0209 19:05:44.347569 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cni-path\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347834 kubelet[1772]: I0209 19:05:44.347598 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-etc-cni-netd\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347834 kubelet[1772]: I0209 19:05:44.347635 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/073c80ed-0181-4de2-bf33-1d3394b5cf09-clustermesh-secrets\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347834 kubelet[1772]: I0209 19:05:44.347666 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-config-path\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347834 kubelet[1772]: I0209 19:05:44.347707 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-net\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.347834 kubelet[1772]: I0209 19:05:44.347744 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdnn\" (UniqueName: \"kubernetes.io/projected/9ad88d33-c78e-4a97-a0c8-177621cc1ab5-kube-api-access-fhdnn\") pod \"kube-proxy-hh47k\" (UID: \"9ad88d33-c78e-4a97-a0c8-177621cc1ab5\") " pod="kube-system/kube-proxy-hh47k"
Feb 9 19:05:44.348030 kubelet[1772]: I0209 19:05:44.347773 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-run\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.348030 kubelet[1772]: I0209 19:05:44.347805 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-hubble-tls\") pod \"cilium-hd7xw\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") " pod="kube-system/cilium-hd7xw"
Feb 9 19:05:44.348030 kubelet[1772]: I0209 19:05:44.347817 1772 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:05:44.932749 env[1308]: time="2024-02-09T19:05:44.932683385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hh47k,Uid:9ad88d33-c78e-4a97-a0c8-177621cc1ab5,Namespace:kube-system,Attempt:0,}"
Feb 9 19:05:45.243077 env[1308]: time="2024-02-09T19:05:45.242935243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hd7xw,Uid:073c80ed-0181-4de2-bf33-1d3394b5cf09,Namespace:kube-system,Attempt:0,}"
Feb 9 19:05:45.313914 kubelet[1772]: E0209 19:05:45.313880 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:45.739066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528773627.mount: Deactivated successfully.
Feb 9 19:05:45.761151 env[1308]: time="2024-02-09T19:05:45.761086749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.763990 env[1308]: time="2024-02-09T19:05:45.763938459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.772998 env[1308]: time="2024-02-09T19:05:45.772945592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.776595 env[1308]: time="2024-02-09T19:05:45.776549405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.779229 env[1308]: time="2024-02-09T19:05:45.779190315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.782701 env[1308]: time="2024-02-09T19:05:45.782663628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.785058 env[1308]: time="2024-02-09T19:05:45.785020337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.789792 env[1308]: time="2024-02-09T19:05:45.789755554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:05:45.849343 env[1308]: time="2024-02-09T19:05:45.849267373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:05:45.849552 env[1308]: time="2024-02-09T19:05:45.849358373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:05:45.849552 env[1308]: time="2024-02-09T19:05:45.849390173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:05:45.849663 env[1308]: time="2024-02-09T19:05:45.849586274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f pid=1897 runtime=io.containerd.runc.v2
Feb 9 19:05:45.851066 env[1308]: time="2024-02-09T19:05:45.851001379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:05:45.851245 env[1308]: time="2024-02-09T19:05:45.851218480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:05:45.851378 env[1308]: time="2024-02-09T19:05:45.851353281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:05:45.859010 env[1308]: time="2024-02-09T19:05:45.858191106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7fee00d7082445972af740b4a7d748c019344abc90ad5db0db951d4b9ca8297 pid=1910 runtime=io.containerd.runc.v2 Feb 9 19:05:45.873609 systemd[1]: Started cri-containerd-903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f.scope. Feb 9 19:05:45.890299 systemd[1]: Started cri-containerd-e7fee00d7082445972af740b4a7d748c019344abc90ad5db0db951d4b9ca8297.scope. Feb 9 19:05:45.916652 env[1308]: time="2024-02-09T19:05:45.916593920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hd7xw,Uid:073c80ed-0181-4de2-bf33-1d3394b5cf09,Namespace:kube-system,Attempt:0,} returns sandbox id \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\"" Feb 9 19:05:45.919985 env[1308]: time="2024-02-09T19:05:45.919942533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:05:45.935184 env[1308]: time="2024-02-09T19:05:45.935136689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hh47k,Uid:9ad88d33-c78e-4a97-a0c8-177621cc1ab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7fee00d7082445972af740b4a7d748c019344abc90ad5db0db951d4b9ca8297\"" Feb 9 19:05:46.315039 kubelet[1772]: E0209 19:05:46.314976 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:47.315416 kubelet[1772]: E0209 19:05:47.315370 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:48.315975 kubelet[1772]: E0209 19:05:48.315909 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:49.316729 
kubelet[1772]: E0209 19:05:49.316687 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:50.317651 kubelet[1772]: E0209 19:05:50.317562 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:51.304044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716871086.mount: Deactivated successfully. Feb 9 19:05:51.318293 kubelet[1772]: E0209 19:05:51.318188 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:52.318795 kubelet[1772]: E0209 19:05:52.318710 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:53.319625 kubelet[1772]: E0209 19:05:53.319540 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:53.906993 env[1308]: time="2024-02-09T19:05:53.906933658Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:53.912304 env[1308]: time="2024-02-09T19:05:53.912258254Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:53.917773 env[1308]: time="2024-02-09T19:05:53.917730569Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:53.918390 env[1308]: time="2024-02-09T19:05:53.918353150Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:05:53.920112 env[1308]: time="2024-02-09T19:05:53.920075075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:05:53.921176 env[1308]: time="2024-02-09T19:05:53.921141614Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:05:53.947552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997747014.mount: Deactivated successfully. Feb 9 19:05:53.966496 env[1308]: time="2024-02-09T19:05:53.966426030Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\"" Feb 9 19:05:53.967496 env[1308]: time="2024-02-09T19:05:53.967459865Z" level=info msg="StartContainer for \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\"" Feb 9 19:05:53.989380 systemd[1]: Started cri-containerd-1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894.scope. Feb 9 19:05:54.024592 env[1308]: time="2024-02-09T19:05:54.024530235Z" level=info msg="StartContainer for \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\" returns successfully" Feb 9 19:05:54.027623 systemd[1]: cri-containerd-1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894.scope: Deactivated successfully. 
Feb 9 19:05:54.670770 kubelet[1772]: E0209 19:05:54.320274 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:54.940866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894-rootfs.mount: Deactivated successfully. Feb 9 19:05:55.321332 kubelet[1772]: E0209 19:05:55.321286 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:56.322302 kubelet[1772]: E0209 19:05:56.322237 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:57.323061 kubelet[1772]: E0209 19:05:57.323000 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:57.674554 env[1308]: time="2024-02-09T19:05:57.674100202Z" level=info msg="shim disconnected" id=1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894 Feb 9 19:05:57.674554 env[1308]: time="2024-02-09T19:05:57.674155808Z" level=warning msg="cleaning up after shim disconnected" id=1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894 namespace=k8s.io Feb 9 19:05:57.674554 env[1308]: time="2024-02-09T19:05:57.674168010Z" level=info msg="cleaning up dead shim" Feb 9 19:05:57.682627 env[1308]: time="2024-02-09T19:05:57.682583995Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:05:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2022 runtime=io.containerd.runc.v2\n" Feb 9 19:05:57.709593 env[1308]: time="2024-02-09T19:05:57.709544453Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:05:57.739882 env[1308]: time="2024-02-09T19:05:57.739834700Z" level=info 
msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\"" Feb 9 19:05:57.740548 env[1308]: time="2024-02-09T19:05:57.740510679Z" level=info msg="StartContainer for \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\"" Feb 9 19:05:57.769775 systemd[1]: Started cri-containerd-8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3.scope. Feb 9 19:05:57.805248 env[1308]: time="2024-02-09T19:05:57.805191453Z" level=info msg="StartContainer for \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\" returns successfully" Feb 9 19:05:57.807577 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:05:57.807891 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:05:57.808062 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:05:57.810213 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:05:57.811654 systemd[1]: cri-containerd-8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3.scope: Deactivated successfully. Feb 9 19:05:57.824867 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:05:57.847466 env[1308]: time="2024-02-09T19:05:57.847403297Z" level=info msg="shim disconnected" id=8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3 Feb 9 19:05:57.847686 env[1308]: time="2024-02-09T19:05:57.847469204Z" level=warning msg="cleaning up after shim disconnected" id=8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3 namespace=k8s.io Feb 9 19:05:57.847686 env[1308]: time="2024-02-09T19:05:57.847482306Z" level=info msg="cleaning up dead shim" Feb 9 19:05:57.855754 env[1308]: time="2024-02-09T19:05:57.855711670Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:05:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2084 runtime=io.containerd.runc.v2\n" Feb 9 19:05:58.323154 kubelet[1772]: E0209 19:05:58.323090 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:58.714366 env[1308]: time="2024-02-09T19:05:58.713944249Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:05:58.729739 systemd[1]: run-containerd-runc-k8s.io-8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3-runc.BfFPbe.mount: Deactivated successfully. Feb 9 19:05:58.729874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3-rootfs.mount: Deactivated successfully. Feb 9 19:05:58.754968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313804018.mount: Deactivated successfully. Feb 9 19:05:58.762251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922266383.mount: Deactivated successfully. 
Feb 9 19:05:58.776067 env[1308]: time="2024-02-09T19:05:58.776013224Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\"" Feb 9 19:05:58.776820 env[1308]: time="2024-02-09T19:05:58.776789712Z" level=info msg="StartContainer for \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\"" Feb 9 19:05:58.808541 systemd[1]: Started cri-containerd-ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902.scope. Feb 9 19:05:58.852478 systemd[1]: cri-containerd-ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902.scope: Deactivated successfully. Feb 9 19:05:58.857686 env[1308]: time="2024-02-09T19:05:58.857596223Z" level=info msg="StartContainer for \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\" returns successfully" Feb 9 19:05:59.357507 kubelet[1772]: E0209 19:05:59.323644 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:05:59.371719 env[1308]: time="2024-02-09T19:05:59.371659297Z" level=info msg="shim disconnected" id=ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902 Feb 9 19:05:59.372013 env[1308]: time="2024-02-09T19:05:59.371988233Z" level=warning msg="cleaning up after shim disconnected" id=ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902 namespace=k8s.io Feb 9 19:05:59.372119 env[1308]: time="2024-02-09T19:05:59.372104846Z" level=info msg="cleaning up dead shim" Feb 9 19:05:59.387353 env[1308]: time="2024-02-09T19:05:59.387316134Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:05:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2143 runtime=io.containerd.runc.v2\n" Feb 9 19:05:59.428484 env[1308]: time="2024-02-09T19:05:59.428366289Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:59.435961 env[1308]: time="2024-02-09T19:05:59.435915927Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:59.440851 env[1308]: time="2024-02-09T19:05:59.440811970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:59.444674 env[1308]: time="2024-02-09T19:05:59.444637294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:05:59.445163 env[1308]: time="2024-02-09T19:05:59.445125949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:05:59.447166 env[1308]: time="2024-02-09T19:05:59.447131671Z" level=info msg="CreateContainer within sandbox \"e7fee00d7082445972af740b4a7d748c019344abc90ad5db0db951d4b9ca8297\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:05:59.477354 env[1308]: time="2024-02-09T19:05:59.477303219Z" level=info msg="CreateContainer within sandbox \"e7fee00d7082445972af740b4a7d748c019344abc90ad5db0db951d4b9ca8297\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf535af644248976afd3f1fab765c2da8f8d6ae49ccdc65e7f09c7f7c669645e\"" Feb 9 19:05:59.478021 env[1308]: time="2024-02-09T19:05:59.477988295Z" level=info msg="StartContainer for \"cf535af644248976afd3f1fab765c2da8f8d6ae49ccdc65e7f09c7f7c669645e\"" Feb 9 19:05:59.495366 systemd[1]: Started 
cri-containerd-cf535af644248976afd3f1fab765c2da8f8d6ae49ccdc65e7f09c7f7c669645e.scope. Feb 9 19:05:59.531778 env[1308]: time="2024-02-09T19:05:59.531729658Z" level=info msg="StartContainer for \"cf535af644248976afd3f1fab765c2da8f8d6ae49ccdc65e7f09c7f7c669645e\" returns successfully" Feb 9 19:05:59.722881 env[1308]: time="2024-02-09T19:05:59.722761354Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:05:59.739573 kubelet[1772]: I0209 19:05:59.739544 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hh47k" podStartSLOduration=-9.223372014115265e+09 pod.CreationTimestamp="2024-02-09 19:05:37 +0000 UTC" firstStartedPulling="2024-02-09 19:05:45.936292693 +0000 UTC m=+22.142577156" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:05:59.724295825 +0000 UTC m=+35.930580288" watchObservedRunningTime="2024-02-09 19:05:59.739510013 +0000 UTC m=+35.945794476" Feb 9 19:05:59.748833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422083085.mount: Deactivated successfully. Feb 9 19:05:59.764776 env[1308]: time="2024-02-09T19:05:59.764733211Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\"" Feb 9 19:05:59.765302 env[1308]: time="2024-02-09T19:05:59.765272171Z" level=info msg="StartContainer for \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\"" Feb 9 19:05:59.792219 systemd[1]: Started cri-containerd-7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493.scope. 
Feb 9 19:05:59.827551 systemd[1]: cri-containerd-7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493.scope: Deactivated successfully. Feb 9 19:05:59.830711 env[1308]: time="2024-02-09T19:05:59.830598620Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod073c80ed_0181_4de2_bf33_1d3394b5cf09.slice/cri-containerd-7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493.scope/memory.events\": no such file or directory" Feb 9 19:05:59.835179 env[1308]: time="2024-02-09T19:05:59.835133923Z" level=info msg="StartContainer for \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\" returns successfully" Feb 9 19:05:59.873740 env[1308]: time="2024-02-09T19:05:59.873689101Z" level=info msg="shim disconnected" id=7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493 Feb 9 19:05:59.873740 env[1308]: time="2024-02-09T19:05:59.873736606Z" level=warning msg="cleaning up after shim disconnected" id=7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493 namespace=k8s.io Feb 9 19:05:59.874046 env[1308]: time="2024-02-09T19:05:59.873750308Z" level=info msg="cleaning up dead shim" Feb 9 19:05:59.881829 env[1308]: time="2024-02-09T19:05:59.881788200Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:05:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2340 runtime=io.containerd.runc.v2\n" Feb 9 19:06:00.324404 kubelet[1772]: E0209 19:06:00.324340 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:00.730665 env[1308]: time="2024-02-09T19:06:00.730364177Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:06:00.733143 systemd[1]: 
run-containerd-runc-k8s.io-7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493-runc.FMLIYQ.mount: Deactivated successfully. Feb 9 19:06:00.733289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493-rootfs.mount: Deactivated successfully. Feb 9 19:06:00.757603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637731936.mount: Deactivated successfully. Feb 9 19:06:00.773225 env[1308]: time="2024-02-09T19:06:00.773169201Z" level=info msg="CreateContainer within sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\"" Feb 9 19:06:00.774082 env[1308]: time="2024-02-09T19:06:00.774040795Z" level=info msg="StartContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\"" Feb 9 19:06:00.792037 systemd[1]: Started cri-containerd-f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba.scope. 
Feb 9 19:06:00.837546 env[1308]: time="2024-02-09T19:06:00.837480648Z" level=info msg="StartContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" returns successfully" Feb 9 19:06:00.948357 kubelet[1772]: I0209 19:06:00.948321 1772 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:06:01.238516 kernel: Initializing XFRM netlink socket Feb 9 19:06:01.324994 kubelet[1772]: E0209 19:06:01.324952 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:01.743960 kubelet[1772]: I0209 19:06:01.743924 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hd7xw" podStartSLOduration=-9.22337201211089e+09 pod.CreationTimestamp="2024-02-09 19:05:37 +0000 UTC" firstStartedPulling="2024-02-09 19:05:45.918935429 +0000 UTC m=+22.125219992" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:01.743632227 +0000 UTC m=+37.949916790" watchObservedRunningTime="2024-02-09 19:06:01.743885853 +0000 UTC m=+37.950170416" Feb 9 19:06:02.325245 kubelet[1772]: E0209 19:06:02.325189 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:02.865170 systemd-networkd[1443]: cilium_host: Link UP Feb 9 19:06:02.868942 systemd-networkd[1443]: cilium_net: Link UP Feb 9 19:06:02.870045 systemd-networkd[1443]: cilium_net: Gained carrier Feb 9 19:06:02.873840 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:06:02.873917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:06:02.874082 systemd-networkd[1443]: cilium_host: Gained carrier Feb 9 19:06:02.989365 systemd-networkd[1443]: cilium_vxlan: Link UP Feb 9 19:06:02.989374 systemd-networkd[1443]: cilium_vxlan: Gained carrier Feb 9 19:06:03.196562 kernel: NET: Registered PF_ALG protocol family Feb 9 
19:06:03.325841 kubelet[1772]: E0209 19:06:03.325773 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:03.431668 systemd-networkd[1443]: cilium_host: Gained IPv6LL Feb 9 19:06:03.688613 systemd-networkd[1443]: cilium_net: Gained IPv6LL Feb 9 19:06:03.850933 systemd-networkd[1443]: lxc_health: Link UP Feb 9 19:06:03.860384 systemd-networkd[1443]: lxc_health: Gained carrier Feb 9 19:06:03.860558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:06:04.300681 kubelet[1772]: E0209 19:06:04.300613 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:04.326844 kubelet[1772]: E0209 19:06:04.326750 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:04.327591 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL Feb 9 19:06:05.327299 kubelet[1772]: E0209 19:06:05.327238 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:05.543609 systemd-networkd[1443]: lxc_health: Gained IPv6LL Feb 9 19:06:05.976249 kubelet[1772]: I0209 19:06:05.976197 1772 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:05.985355 systemd[1]: Created slice kubepods-besteffort-podf9209c55_be10_461a_a5a2_404ee299b50f.slice. 
Feb 9 19:06:06.098036 kubelet[1772]: I0209 19:06:06.097980 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmrt9\" (UniqueName: \"kubernetes.io/projected/f9209c55-be10-461a-a5a2-404ee299b50f-kube-api-access-cmrt9\") pod \"nginx-deployment-8ffc5cf85-htcpj\" (UID: \"f9209c55-be10-461a-a5a2-404ee299b50f\") " pod="default/nginx-deployment-8ffc5cf85-htcpj" Feb 9 19:06:06.289960 env[1308]: time="2024-02-09T19:06:06.289159284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-htcpj,Uid:f9209c55-be10-461a-a5a2-404ee299b50f,Namespace:default,Attempt:0,}" Feb 9 19:06:06.332510 kubelet[1772]: E0209 19:06:06.332399 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:06.375016 systemd-networkd[1443]: lxc9b7cb56bfab3: Link UP Feb 9 19:06:06.384585 kernel: eth0: renamed from tmp25809 Feb 9 19:06:06.396095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:06:06.396228 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b7cb56bfab3: link becomes ready Feb 9 19:06:06.399709 systemd-networkd[1443]: lxc9b7cb56bfab3: Gained carrier Feb 9 19:06:07.340482 kubelet[1772]: E0209 19:06:07.340414 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:08.232626 systemd-networkd[1443]: lxc9b7cb56bfab3: Gained IPv6LL Feb 9 19:06:08.233819 env[1308]: time="2024-02-09T19:06:08.233732885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:08.234203 env[1308]: time="2024-02-09T19:06:08.233833694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:08.234203 env[1308]: time="2024-02-09T19:06:08.233863597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:08.234203 env[1308]: time="2024-02-09T19:06:08.234074815Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/258097add2f5f566954d975407fb76a40a0a6ff2ec334e72d151f3c7110a6ccb pid=2862 runtime=io.containerd.runc.v2 Feb 9 19:06:08.254987 systemd[1]: Started cri-containerd-258097add2f5f566954d975407fb76a40a0a6ff2ec334e72d151f3c7110a6ccb.scope. Feb 9 19:06:08.310872 env[1308]: time="2024-02-09T19:06:08.310817142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-htcpj,Uid:f9209c55-be10-461a-a5a2-404ee299b50f,Namespace:default,Attempt:0,} returns sandbox id \"258097add2f5f566954d975407fb76a40a0a6ff2ec334e72d151f3c7110a6ccb\"" Feb 9 19:06:08.314724 env[1308]: time="2024-02-09T19:06:08.314688181Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:06:08.341463 kubelet[1772]: E0209 19:06:08.341361 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:09.342625 kubelet[1772]: E0209 19:06:09.342503 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:10.343063 kubelet[1772]: E0209 19:06:10.343011 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:11.343459 kubelet[1772]: E0209 19:06:11.343396 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:12.344171 kubelet[1772]: E0209 19:06:12.344116 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:06:12.608049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49375411.mount: Deactivated successfully. Feb 9 19:06:12.654054 kubelet[1772]: I0209 19:06:12.653715 1772 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:06:13.345256 kubelet[1772]: E0209 19:06:13.345097 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:13.546606 env[1308]: time="2024-02-09T19:06:13.546552154Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:13.551513 env[1308]: time="2024-02-09T19:06:13.551429331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:13.554585 env[1308]: time="2024-02-09T19:06:13.554540272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:13.558372 env[1308]: time="2024-02-09T19:06:13.558230857Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:13.559158 env[1308]: time="2024-02-09T19:06:13.559126526Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:06:13.561581 env[1308]: time="2024-02-09T19:06:13.561551314Z" level=info msg="CreateContainer within sandbox \"258097add2f5f566954d975407fb76a40a0a6ff2ec334e72d151f3c7110a6ccb\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" 
Feb 9 19:06:13.581372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545566687.mount: Deactivated successfully.
Feb 9 19:06:13.587372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098011721.mount: Deactivated successfully.
Feb 9 19:06:13.595968 env[1308]: time="2024-02-09T19:06:13.595760458Z" level=info msg="CreateContainer within sandbox \"258097add2f5f566954d975407fb76a40a0a6ff2ec334e72d151f3c7110a6ccb\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4568944f5efaa9607ccc847b24e081b8480c4f34b7ab7c52310f0379ecd2cdca\""
Feb 9 19:06:13.597387 env[1308]: time="2024-02-09T19:06:13.597358082Z" level=info msg="StartContainer for \"4568944f5efaa9607ccc847b24e081b8480c4f34b7ab7c52310f0379ecd2cdca\""
Feb 9 19:06:13.620087 systemd[1]: run-containerd-runc-k8s.io-4568944f5efaa9607ccc847b24e081b8480c4f34b7ab7c52310f0379ecd2cdca-runc.ysv1NN.mount: Deactivated successfully.
Feb 9 19:06:13.624809 systemd[1]: Started cri-containerd-4568944f5efaa9607ccc847b24e081b8480c4f34b7ab7c52310f0379ecd2cdca.scope.
Feb 9 19:06:13.659840 env[1308]: time="2024-02-09T19:06:13.659803609Z" level=info msg="StartContainer for \"4568944f5efaa9607ccc847b24e081b8480c4f34b7ab7c52310f0379ecd2cdca\" returns successfully"
Feb 9 19:06:13.764185 kubelet[1772]: I0209 19:06:13.764142 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-htcpj" podStartSLOduration=-9.22337202809066e+09 pod.CreationTimestamp="2024-02-09 19:06:05 +0000 UTC" firstStartedPulling="2024-02-09 19:06:08.312562595 +0000 UTC m=+44.518847158" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:13.763762945 +0000 UTC m=+49.970047408" watchObservedRunningTime="2024-02-09 19:06:13.764115773 +0000 UTC m=+49.970400236"
Feb 9 19:06:14.345761 kubelet[1772]: E0209 19:06:14.345699 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:15.346607 kubelet[1772]: E0209 19:06:15.346547 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:16.347629 kubelet[1772]: E0209 19:06:16.347562 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:17.347983 kubelet[1772]: E0209 19:06:17.347924 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:18.348973 kubelet[1772]: E0209 19:06:18.348872 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:19.350126 kubelet[1772]: E0209 19:06:19.350050 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:20.351028 kubelet[1772]: E0209 19:06:20.350955 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:21.352097 kubelet[1772]: E0209 19:06:21.351964 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:22.352373 kubelet[1772]: E0209 19:06:22.352268 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:23.214476 kubelet[1772]: I0209 19:06:23.214276 1772 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:06:23.221345 systemd[1]: Created slice kubepods-besteffort-pod19f99f06_6351_45a0_9e34_5fefc7f67872.slice.
Feb 9 19:06:23.309494 kubelet[1772]: I0209 19:06:23.309416 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frf8q\" (UniqueName: \"kubernetes.io/projected/19f99f06-6351-45a0-9e34-5fefc7f67872-kube-api-access-frf8q\") pod \"nfs-server-provisioner-0\" (UID: \"19f99f06-6351-45a0-9e34-5fefc7f67872\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:06:23.309819 kubelet[1772]: I0209 19:06:23.309791 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/19f99f06-6351-45a0-9e34-5fefc7f67872-data\") pod \"nfs-server-provisioner-0\" (UID: \"19f99f06-6351-45a0-9e34-5fefc7f67872\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:06:23.352521 kubelet[1772]: E0209 19:06:23.352456 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:23.527791 env[1308]: time="2024-02-09T19:06:23.527639095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:19f99f06-6351-45a0-9e34-5fefc7f67872,Namespace:default,Attempt:0,}"
Feb 9 19:06:23.581810 systemd-networkd[1443]: lxcfbf2afe1cbf8: Link UP
Feb 9 19:06:23.589463 kernel: eth0: renamed from tmpaf050
Feb 9 19:06:23.602280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:06:23.602394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfbf2afe1cbf8: link becomes ready
Feb 9 19:06:23.602898 systemd-networkd[1443]: lxcfbf2afe1cbf8: Gained carrier
Feb 9 19:06:23.794136 env[1308]: time="2024-02-09T19:06:23.793984027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:23.794136 env[1308]: time="2024-02-09T19:06:23.794039030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:23.794136 env[1308]: time="2024-02-09T19:06:23.794052931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:23.794672 env[1308]: time="2024-02-09T19:06:23.794613165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af0507450cb4d2acbcc80f020c27ba623c196765e12944d377965d2ca6edaade pid=3037 runtime=io.containerd.runc.v2
Feb 9 19:06:23.812490 systemd[1]: Started cri-containerd-af0507450cb4d2acbcc80f020c27ba623c196765e12944d377965d2ca6edaade.scope.
Feb 9 19:06:23.857882 env[1308]: time="2024-02-09T19:06:23.857834518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:19f99f06-6351-45a0-9e34-5fefc7f67872,Namespace:default,Attempt:0,} returns sandbox id \"af0507450cb4d2acbcc80f020c27ba623c196765e12944d377965d2ca6edaade\""
Feb 9 19:06:23.859646 env[1308]: time="2024-02-09T19:06:23.859611027Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 19:06:24.300427 kubelet[1772]: E0209 19:06:24.300372 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:24.352780 kubelet[1772]: E0209 19:06:24.352699 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:24.420814 systemd[1]: run-containerd-runc-k8s.io-af0507450cb4d2acbcc80f020c27ba623c196765e12944d377965d2ca6edaade-runc.xjA5JQ.mount: Deactivated successfully.
Feb 9 19:06:25.191847 systemd-networkd[1443]: lxcfbf2afe1cbf8: Gained IPv6LL
Feb 9 19:06:25.353852 kubelet[1772]: E0209 19:06:25.353778 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:26.354485 kubelet[1772]: E0209 19:06:26.354406 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:26.588193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383129671.mount: Deactivated successfully.
Feb 9 19:06:27.354888 kubelet[1772]: E0209 19:06:27.354789 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:28.355271 kubelet[1772]: E0209 19:06:28.355226 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:28.542680 env[1308]: time="2024-02-09T19:06:28.542617484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:28.550282 env[1308]: time="2024-02-09T19:06:28.550236400Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:28.555656 env[1308]: time="2024-02-09T19:06:28.555548789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:28.562746 env[1308]: time="2024-02-09T19:06:28.562716580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:28.563380 env[1308]: time="2024-02-09T19:06:28.563345814Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 9 19:06:28.565778 env[1308]: time="2024-02-09T19:06:28.565746545Z" level=info msg="CreateContainer within sandbox \"af0507450cb4d2acbcc80f020c27ba623c196765e12944d377965d2ca6edaade\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 19:06:28.601585 env[1308]: time="2024-02-09T19:06:28.601547297Z" level=info msg="CreateContainer within sandbox \"af0507450cb4d2acbcc80f020c27ba623c196765e12944d377965d2ca6edaade\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"66fde35b6e72de53fedd3243d8f8814e691a7975b80bec7ec8ee45bc9301e083\""
Feb 9 19:06:28.602146 env[1308]: time="2024-02-09T19:06:28.602117028Z" level=info msg="StartContainer for \"66fde35b6e72de53fedd3243d8f8814e691a7975b80bec7ec8ee45bc9301e083\""
Feb 9 19:06:28.621828 systemd[1]: Started cri-containerd-66fde35b6e72de53fedd3243d8f8814e691a7975b80bec7ec8ee45bc9301e083.scope.
Feb 9 19:06:28.660900 env[1308]: time="2024-02-09T19:06:28.660837629Z" level=info msg="StartContainer for \"66fde35b6e72de53fedd3243d8f8814e691a7975b80bec7ec8ee45bc9301e083\" returns successfully"
Feb 9 19:06:28.810355 kubelet[1772]: I0209 19:06:28.810311 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031044518e+09 pod.CreationTimestamp="2024-02-09 19:06:23 +0000 UTC" firstStartedPulling="2024-02-09 19:06:23.859092795 +0000 UTC m=+60.065377258" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:28.809817451 +0000 UTC m=+65.016101914" watchObservedRunningTime="2024-02-09 19:06:28.810258976 +0000 UTC m=+65.016543539"
Feb 9 19:06:29.356854 kubelet[1772]: E0209 19:06:29.356781 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:30.357012 kubelet[1772]: E0209 19:06:30.356943 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:31.358075 kubelet[1772]: E0209 19:06:31.358012 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:32.358579 kubelet[1772]: E0209 19:06:32.358511 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:33.359003 kubelet[1772]: E0209 19:06:33.358934 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:34.359800 kubelet[1772]: E0209 19:06:34.359719 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:35.360339 kubelet[1772]: E0209 19:06:35.360277 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:36.361505 kubelet[1772]: E0209 19:06:36.361429 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:37.362416 kubelet[1772]: E0209 19:06:37.362354 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:37.884584 kubelet[1772]: I0209 19:06:37.884542 1772 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:06:37.890575 systemd[1]: Created slice kubepods-besteffort-pod9f9298d1_35a5_4b05_811d_36c75c0ccb1d.slice.
Feb 9 19:06:38.006883 kubelet[1772]: I0209 19:06:38.006817 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1d290556-f521-4e47-a746-75e8af24e762\" (UniqueName: \"kubernetes.io/nfs/9f9298d1-35a5-4b05-811d-36c75c0ccb1d-pvc-1d290556-f521-4e47-a746-75e8af24e762\") pod \"test-pod-1\" (UID: \"9f9298d1-35a5-4b05-811d-36c75c0ccb1d\") " pod="default/test-pod-1"
Feb 9 19:06:38.006883 kubelet[1772]: I0209 19:06:38.006896 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzllw\" (UniqueName: \"kubernetes.io/projected/9f9298d1-35a5-4b05-811d-36c75c0ccb1d-kube-api-access-qzllw\") pod \"test-pod-1\" (UID: \"9f9298d1-35a5-4b05-811d-36c75c0ccb1d\") " pod="default/test-pod-1"
Feb 9 19:06:38.167468 kernel: FS-Cache: Loaded
Feb 9 19:06:38.214814 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 19:06:38.214982 kernel: RPC: Registered udp transport module.
Feb 9 19:06:38.215010 kernel: RPC: Registered tcp transport module.
Feb 9 19:06:38.220005 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 19:06:38.290468 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 19:06:38.363294 kubelet[1772]: E0209 19:06:38.363250 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:38.477533 kernel: NFS: Registering the id_resolver key type
Feb 9 19:06:38.477699 kernel: Key type id_resolver registered
Feb 9 19:06:38.477731 kernel: Key type id_legacy registered
Feb 9 19:06:38.595787 nfsidmap[3179]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-97ddcae7e2'
Feb 9 19:06:38.611264 nfsidmap[3180]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-97ddcae7e2'
Feb 9 19:06:38.794391 env[1308]: time="2024-02-09T19:06:38.794326086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9f9298d1-35a5-4b05-811d-36c75c0ccb1d,Namespace:default,Attempt:0,}"
Feb 9 19:06:38.850462 systemd-networkd[1443]: lxcacd49c1e0f68: Link UP
Feb 9 19:06:38.857492 kernel: eth0: renamed from tmp0a0a1
Feb 9 19:06:38.870883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:06:38.871020 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcacd49c1e0f68: link becomes ready
Feb 9 19:06:38.873474 systemd-networkd[1443]: lxcacd49c1e0f68: Gained carrier
Feb 9 19:06:39.106920 env[1308]: time="2024-02-09T19:06:39.106756655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:39.107103 env[1308]: time="2024-02-09T19:06:39.106794557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:39.107103 env[1308]: time="2024-02-09T19:06:39.106808457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:39.107447 env[1308]: time="2024-02-09T19:06:39.107363782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a0a12d3bd9f9fce309782dae1475c7579bb88f9afe4f1c11f25a236e53d44ab pid=3206 runtime=io.containerd.runc.v2
Feb 9 19:06:39.133033 systemd[1]: run-containerd-runc-k8s.io-0a0a12d3bd9f9fce309782dae1475c7579bb88f9afe4f1c11f25a236e53d44ab-runc.XVZJK3.mount: Deactivated successfully.
Feb 9 19:06:39.135991 systemd[1]: Started cri-containerd-0a0a12d3bd9f9fce309782dae1475c7579bb88f9afe4f1c11f25a236e53d44ab.scope.
Feb 9 19:06:39.177781 env[1308]: time="2024-02-09T19:06:39.177727243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9f9298d1-35a5-4b05-811d-36c75c0ccb1d,Namespace:default,Attempt:0,} returns sandbox id \"0a0a12d3bd9f9fce309782dae1475c7579bb88f9afe4f1c11f25a236e53d44ab\""
Feb 9 19:06:39.179654 env[1308]: time="2024-02-09T19:06:39.179613725Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:06:39.364993 kubelet[1772]: E0209 19:06:39.364838 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:39.722737 env[1308]: time="2024-02-09T19:06:39.722285335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:39.726613 env[1308]: time="2024-02-09T19:06:39.726565621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:39.730905 env[1308]: time="2024-02-09T19:06:39.730866608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:39.734799 env[1308]: time="2024-02-09T19:06:39.734763078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:39.735457 env[1308]: time="2024-02-09T19:06:39.735405005Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 19:06:39.737725 env[1308]: time="2024-02-09T19:06:39.737695905Z" level=info msg="CreateContainer within sandbox \"0a0a12d3bd9f9fce309782dae1475c7579bb88f9afe4f1c11f25a236e53d44ab\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 19:06:39.770484 env[1308]: time="2024-02-09T19:06:39.770410728Z" level=info msg="CreateContainer within sandbox \"0a0a12d3bd9f9fce309782dae1475c7579bb88f9afe4f1c11f25a236e53d44ab\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9397cb4fa05b9e21e016e8c3f36c8b8b51433223f86e15c8d49171f63fa74574\""
Feb 9 19:06:39.771475 env[1308]: time="2024-02-09T19:06:39.771422272Z" level=info msg="StartContainer for \"9397cb4fa05b9e21e016e8c3f36c8b8b51433223f86e15c8d49171f63fa74574\""
Feb 9 19:06:39.789465 systemd[1]: Started cri-containerd-9397cb4fa05b9e21e016e8c3f36c8b8b51433223f86e15c8d49171f63fa74574.scope.
Feb 9 19:06:39.825122 env[1308]: time="2024-02-09T19:06:39.825066806Z" level=info msg="StartContainer for \"9397cb4fa05b9e21e016e8c3f36c8b8b51433223f86e15c8d49171f63fa74574\" returns successfully"
Feb 9 19:06:40.103728 systemd-networkd[1443]: lxcacd49c1e0f68: Gained IPv6LL
Feb 9 19:06:40.119649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338724186.mount: Deactivated successfully.
Feb 9 19:06:40.365730 kubelet[1772]: E0209 19:06:40.365571 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:41.366532 kubelet[1772]: E0209 19:06:41.366470 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:42.367029 kubelet[1772]: E0209 19:06:42.366959 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:43.367689 kubelet[1772]: E0209 19:06:43.367616 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:44.301156 kubelet[1772]: E0209 19:06:44.301096 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:44.367907 kubelet[1772]: E0209 19:06:44.367841 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:45.368595 kubelet[1772]: E0209 19:06:45.368527 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:46.368770 kubelet[1772]: E0209 19:06:46.368702 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:46.908662 kubelet[1772]: I0209 19:06:46.908610 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372013946207e+09 pod.CreationTimestamp="2024-02-09 19:06:24 +0000 UTC" firstStartedPulling="2024-02-09 19:06:39.179064601 +0000 UTC m=+75.385349064" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:39.838967511 +0000 UTC m=+76.045251974" watchObservedRunningTime="2024-02-09 19:06:46.908568042 +0000 UTC m=+83.114852605"
Feb 9 19:06:46.931324 systemd[1]: run-containerd-runc-k8s.io-f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba-runc.O5vyLZ.mount: Deactivated successfully.
Feb 9 19:06:46.946838 env[1308]: time="2024-02-09T19:06:46.946743502Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:06:46.952529 env[1308]: time="2024-02-09T19:06:46.952490022Z" level=info msg="StopContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" with timeout 1 (s)"
Feb 9 19:06:46.952780 env[1308]: time="2024-02-09T19:06:46.952748332Z" level=info msg="Stop container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" with signal terminated"
Feb 9 19:06:46.961499 systemd-networkd[1443]: lxc_health: Link DOWN
Feb 9 19:06:46.961507 systemd-networkd[1443]: lxc_health: Lost carrier
Feb 9 19:06:46.982937 systemd[1]: cri-containerd-f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba.scope: Deactivated successfully.
Feb 9 19:06:46.983263 systemd[1]: cri-containerd-f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba.scope: Consumed 6.534s CPU time.
Feb 9 19:06:47.003305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba-rootfs.mount: Deactivated successfully.
Feb 9 19:06:47.369327 kubelet[1772]: E0209 19:06:47.369272 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:47.963934 env[1308]: time="2024-02-09T19:06:47.963843978Z" level=info msg="Kill container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\""
Feb 9 19:06:48.370157 kubelet[1772]: E0209 19:06:48.370102 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:49.223628 env[1308]: time="2024-02-09T19:06:49.223561626Z" level=info msg="shim disconnected" id=f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba
Feb 9 19:06:49.223628 env[1308]: time="2024-02-09T19:06:49.223625229Z" level=warning msg="cleaning up after shim disconnected" id=f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba namespace=k8s.io
Feb 9 19:06:49.223628 env[1308]: time="2024-02-09T19:06:49.223638929Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:49.232697 env[1308]: time="2024-02-09T19:06:49.232640256Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3338 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:49.238741 env[1308]: time="2024-02-09T19:06:49.238692376Z" level=info msg="StopContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" returns successfully"
Feb 9 19:06:49.239523 env[1308]: time="2024-02-09T19:06:49.239488005Z" level=info msg="StopPodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\""
Feb 9 19:06:49.239651 env[1308]: time="2024-02-09T19:06:49.239563008Z" level=info msg="Container to stop \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:06:49.239651 env[1308]: time="2024-02-09T19:06:49.239582508Z" level=info msg="Container to stop \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:06:49.239651 env[1308]: time="2024-02-09T19:06:49.239598509Z" level=info msg="Container to stop \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:06:49.239651 env[1308]: time="2024-02-09T19:06:49.239617110Z" level=info msg="Container to stop \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:06:49.239651 env[1308]: time="2024-02-09T19:06:49.239631410Z" level=info msg="Container to stop \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:06:49.241842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f-shm.mount: Deactivated successfully.
Feb 9 19:06:49.249538 systemd[1]: cri-containerd-903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f.scope: Deactivated successfully.
Feb 9 19:06:49.272045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f-rootfs.mount: Deactivated successfully.
Feb 9 19:06:49.283563 env[1308]: time="2024-02-09T19:06:49.283512805Z" level=info msg="shim disconnected" id=903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f
Feb 9 19:06:49.283563 env[1308]: time="2024-02-09T19:06:49.283562407Z" level=warning msg="cleaning up after shim disconnected" id=903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f namespace=k8s.io
Feb 9 19:06:49.283563 env[1308]: time="2024-02-09T19:06:49.283574107Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:49.292858 env[1308]: time="2024-02-09T19:06:49.292813343Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3370 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:49.293182 env[1308]: time="2024-02-09T19:06:49.293150755Z" level=info msg="TearDown network for sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" successfully"
Feb 9 19:06:49.293289 env[1308]: time="2024-02-09T19:06:49.293180456Z" level=info msg="StopPodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" returns successfully"
Feb 9 19:06:49.371022 kubelet[1772]: E0209 19:06:49.370955 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:49.383285 kubelet[1772]: I0209 19:06:49.383233 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-xtables-lock\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383285 kubelet[1772]: I0209 19:06:49.383287 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-hostproc\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383539 kubelet[1772]: I0209 19:06:49.383309 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-bpf-maps\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383539 kubelet[1772]: I0209 19:06:49.383331 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-cgroup\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383539 kubelet[1772]: I0209 19:06:49.383354 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cni-path\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383539 kubelet[1772]: I0209 19:06:49.383377 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-etc-cni-netd\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383539 kubelet[1772]: I0209 19:06:49.383417 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-hubble-tls\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383539 kubelet[1772]: I0209 19:06:49.383466 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgsr\" (UniqueName: \"kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-kube-api-access-ghgsr\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383782 kubelet[1772]: I0209 19:06:49.383491 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-lib-modules\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383782 kubelet[1772]: I0209 19:06:49.383519 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-kernel\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383782 kubelet[1772]: I0209 19:06:49.383544 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-run\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383782 kubelet[1772]: I0209 19:06:49.383578 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/073c80ed-0181-4de2-bf33-1d3394b5cf09-clustermesh-secrets\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383782 kubelet[1772]: I0209 19:06:49.383612 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-config-path\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.383782 kubelet[1772]: I0209 19:06:49.383646 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-net\") pod \"073c80ed-0181-4de2-bf33-1d3394b5cf09\" (UID: \"073c80ed-0181-4de2-bf33-1d3394b5cf09\") "
Feb 9 19:06:49.384037 kubelet[1772]: I0209 19:06:49.383718 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.384037 kubelet[1772]: I0209 19:06:49.383767 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.384037 kubelet[1772]: I0209 19:06:49.383787 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-hostproc" (OuterVolumeSpecName: "hostproc") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.384037 kubelet[1772]: I0209 19:06:49.383809 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.384037 kubelet[1772]: I0209 19:06:49.383834 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.384246 kubelet[1772]: I0209 19:06:49.383854 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cni-path" (OuterVolumeSpecName: "cni-path") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.384246 kubelet[1772]: I0209 19:06:49.383881 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:49.385423 kubelet[1772]: I0209 19:06:49.384395 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:49.385423 kubelet[1772]: I0209 19:06:49.384634 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:49.385423 kubelet[1772]: I0209 19:06:49.384719 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:49.385423 kubelet[1772]: W0209 19:06:49.384875 1772 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/073c80ed-0181-4de2-bf33-1d3394b5cf09/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:49.387397 kubelet[1772]: I0209 19:06:49.387363 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:49.392161 systemd[1]: var-lib-kubelet-pods-073c80ed\x2d0181\x2d4de2\x2dbf33\x2d1d3394b5cf09-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:06:49.394673 systemd[1]: var-lib-kubelet-pods-073c80ed\x2d0181\x2d4de2\x2dbf33\x2d1d3394b5cf09-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:49.396040 kubelet[1772]: I0209 19:06:49.395790 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073c80ed-0181-4de2-bf33-1d3394b5cf09-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:49.396584 kubelet[1772]: I0209 19:06:49.396559 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:49.398836 systemd[1]: var-lib-kubelet-pods-073c80ed\x2d0181\x2d4de2\x2dbf33\x2d1d3394b5cf09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dghgsr.mount: Deactivated successfully. Feb 9 19:06:49.399778 kubelet[1772]: I0209 19:06:49.399595 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-kube-api-access-ghgsr" (OuterVolumeSpecName: "kube-api-access-ghgsr") pod "073c80ed-0181-4de2-bf33-1d3394b5cf09" (UID: "073c80ed-0181-4de2-bf33-1d3394b5cf09"). InnerVolumeSpecName "kube-api-access-ghgsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:49.400243 kubelet[1772]: E0209 19:06:49.400223 1772 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484110 1772 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-hostproc\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484161 1772 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-xtables-lock\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484180 1772 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-hubble-tls\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484194 1772 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-bpf-maps\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484210 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-cgroup\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484226 1772 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cni-path\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.484283 kubelet[1772]: I0209 19:06:49.484244 1772 reconciler_common.go:295] "Volume detached for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-etc-cni-netd\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.485496 kubelet[1772]: I0209 19:06:49.485473 1772 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ghgsr\" (UniqueName: \"kubernetes.io/projected/073c80ed-0181-4de2-bf33-1d3394b5cf09-kube-api-access-ghgsr\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.485640 kubelet[1772]: I0209 19:06:49.485627 1772 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-lib-modules\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.485763 kubelet[1772]: I0209 19:06:49.485753 1772 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-kernel\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.485872 kubelet[1772]: I0209 19:06:49.485862 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-run\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.485988 kubelet[1772]: I0209 19:06:49.485978 1772 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/073c80ed-0181-4de2-bf33-1d3394b5cf09-clustermesh-secrets\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.486103 kubelet[1772]: I0209 19:06:49.486093 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/073c80ed-0181-4de2-bf33-1d3394b5cf09-cilium-config-path\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.486209 kubelet[1772]: I0209 19:06:49.486198 1772 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/073c80ed-0181-4de2-bf33-1d3394b5cf09-host-proc-sys-net\") on node \"10.200.8.47\" DevicePath \"\"" Feb 9 19:06:49.853456 kubelet[1772]: I0209 19:06:49.853406 1772 scope.go:115] "RemoveContainer" containerID="f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba" Feb 9 19:06:49.855528 env[1308]: time="2024-02-09T19:06:49.855468790Z" level=info msg="RemoveContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\"" Feb 9 19:06:49.860298 systemd[1]: Removed slice kubepods-burstable-pod073c80ed_0181_4de2_bf33_1d3394b5cf09.slice. Feb 9 19:06:49.860442 systemd[1]: kubepods-burstable-pod073c80ed_0181_4de2_bf33_1d3394b5cf09.slice: Consumed 6.646s CPU time. Feb 9 19:06:49.861766 env[1308]: time="2024-02-09T19:06:49.861729118Z" level=info msg="RemoveContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" returns successfully" Feb 9 19:06:49.862147 kubelet[1772]: I0209 19:06:49.862124 1772 scope.go:115] "RemoveContainer" containerID="7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493" Feb 9 19:06:49.863014 env[1308]: time="2024-02-09T19:06:49.862980063Z" level=info msg="RemoveContainer for \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\"" Feb 9 19:06:49.869089 env[1308]: time="2024-02-09T19:06:49.869054684Z" level=info msg="RemoveContainer for \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\" returns successfully" Feb 9 19:06:49.869281 kubelet[1772]: I0209 19:06:49.869260 1772 scope.go:115] "RemoveContainer" containerID="ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902" Feb 9 19:06:49.870118 env[1308]: time="2024-02-09T19:06:49.870089121Z" level=info msg="RemoveContainer for \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\"" Feb 9 19:06:49.877107 env[1308]: time="2024-02-09T19:06:49.877075275Z" level=info msg="RemoveContainer for \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\" 
returns successfully" Feb 9 19:06:49.877235 kubelet[1772]: I0209 19:06:49.877213 1772 scope.go:115] "RemoveContainer" containerID="8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3" Feb 9 19:06:49.878041 env[1308]: time="2024-02-09T19:06:49.878015610Z" level=info msg="RemoveContainer for \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\"" Feb 9 19:06:49.884834 env[1308]: time="2024-02-09T19:06:49.884804356Z" level=info msg="RemoveContainer for \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\" returns successfully" Feb 9 19:06:49.885007 kubelet[1772]: I0209 19:06:49.884988 1772 scope.go:115] "RemoveContainer" containerID="1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894" Feb 9 19:06:49.885939 env[1308]: time="2024-02-09T19:06:49.885913897Z" level=info msg="RemoveContainer for \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\"" Feb 9 19:06:49.891676 env[1308]: time="2024-02-09T19:06:49.891640905Z" level=info msg="RemoveContainer for \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\" returns successfully" Feb 9 19:06:49.891863 kubelet[1772]: I0209 19:06:49.891844 1772 scope.go:115] "RemoveContainer" containerID="f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba" Feb 9 19:06:49.892183 env[1308]: time="2024-02-09T19:06:49.892090621Z" level=error msg="ContainerStatus for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\": not found" Feb 9 19:06:49.892375 kubelet[1772]: E0209 19:06:49.892356 1772 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\": not found" 
containerID="f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba" Feb 9 19:06:49.892475 kubelet[1772]: I0209 19:06:49.892397 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba} err="failed to get container status \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\": rpc error: code = NotFound desc = an error occurred when try to find container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\": not found" Feb 9 19:06:49.892475 kubelet[1772]: I0209 19:06:49.892413 1772 scope.go:115] "RemoveContainer" containerID="7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493" Feb 9 19:06:49.892654 env[1308]: time="2024-02-09T19:06:49.892590839Z" level=error msg="ContainerStatus for \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\": not found" Feb 9 19:06:49.892764 kubelet[1772]: E0209 19:06:49.892746 1772 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\": not found" containerID="7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493" Feb 9 19:06:49.892836 kubelet[1772]: I0209 19:06:49.892781 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493} err="failed to get container status \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c9cf175f35f3733387389bb7f11024738f29c11101e29bad0a9074ace21e493\": not found" Feb 9 19:06:49.892836 kubelet[1772]: 
I0209 19:06:49.892795 1772 scope.go:115] "RemoveContainer" containerID="ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902" Feb 9 19:06:49.893108 env[1308]: time="2024-02-09T19:06:49.893048956Z" level=error msg="ContainerStatus for \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\": not found" Feb 9 19:06:49.893211 kubelet[1772]: E0209 19:06:49.893197 1772 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\": not found" containerID="ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902" Feb 9 19:06:49.893283 kubelet[1772]: I0209 19:06:49.893232 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902} err="failed to get container status \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba35ab34f857f0b8c780dc4d64dbc96eaaa27022f0adc4613b2169eaa07fe902\": not found" Feb 9 19:06:49.893283 kubelet[1772]: I0209 19:06:49.893246 1772 scope.go:115] "RemoveContainer" containerID="8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3" Feb 9 19:06:49.893474 env[1308]: time="2024-02-09T19:06:49.893405969Z" level=error msg="ContainerStatus for \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\": not found" Feb 9 19:06:49.893589 kubelet[1772]: E0209 19:06:49.893572 1772 remote_runtime.go:415] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\": not found" containerID="8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3" Feb 9 19:06:49.893677 kubelet[1772]: I0209 19:06:49.893605 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3} err="failed to get container status \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8091d5369b2102834af2d82254e46dc104fc16d5f2e0354a225a2499c01f41f3\": not found" Feb 9 19:06:49.893677 kubelet[1772]: I0209 19:06:49.893617 1772 scope.go:115] "RemoveContainer" containerID="1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894" Feb 9 19:06:49.893826 env[1308]: time="2024-02-09T19:06:49.893771982Z" level=error msg="ContainerStatus for \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\": not found" Feb 9 19:06:49.893982 kubelet[1772]: E0209 19:06:49.893962 1772 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\": not found" containerID="1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894" Feb 9 19:06:49.894056 kubelet[1772]: I0209 19:06:49.894002 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894} err="failed to get container status \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"1a681dbf57a5add475ecee2a281565f918fc7af38130e78d4b470df11cac0894\": not found" Feb 9 19:06:50.372011 kubelet[1772]: E0209 19:06:50.371942 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:50.652111 env[1308]: time="2024-02-09T19:06:50.651946758Z" level=info msg="StopContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" with timeout 1 (s)" Feb 9 19:06:50.652111 env[1308]: time="2024-02-09T19:06:50.652011460Z" level=error msg="StopContainer for \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\": not found" Feb 9 19:06:50.652731 kubelet[1772]: E0209 19:06:50.652599 1772 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba\": not found" containerID="f048082d32af057eb32bc45de408ceeba181bdd93ddb803e0e8e9fc5f3133eba" Feb 9 19:06:50.652914 env[1308]: time="2024-02-09T19:06:50.652864891Z" level=info msg="StopPodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\"" Feb 9 19:06:50.653044 env[1308]: time="2024-02-09T19:06:50.652980795Z" level=info msg="TearDown network for sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" successfully" Feb 9 19:06:50.653131 env[1308]: time="2024-02-09T19:06:50.653046997Z" level=info msg="StopPodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" returns successfully" Feb 9 19:06:50.654759 kubelet[1772]: I0209 19:06:50.653878 1772 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=073c80ed-0181-4de2-bf33-1d3394b5cf09 
path="/var/lib/kubelet/pods/073c80ed-0181-4de2-bf33-1d3394b5cf09/volumes" Feb 9 19:06:50.779079 kubelet[1772]: I0209 19:06:50.779029 1772 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:50.779079 kubelet[1772]: E0209 19:06:50.779095 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073c80ed-0181-4de2-bf33-1d3394b5cf09" containerName="mount-cgroup" Feb 9 19:06:50.779398 kubelet[1772]: E0209 19:06:50.779110 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073c80ed-0181-4de2-bf33-1d3394b5cf09" containerName="apply-sysctl-overwrites" Feb 9 19:06:50.779398 kubelet[1772]: E0209 19:06:50.779122 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073c80ed-0181-4de2-bf33-1d3394b5cf09" containerName="clean-cilium-state" Feb 9 19:06:50.779398 kubelet[1772]: E0209 19:06:50.779133 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073c80ed-0181-4de2-bf33-1d3394b5cf09" containerName="mount-bpf-fs" Feb 9 19:06:50.779398 kubelet[1772]: E0209 19:06:50.779143 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073c80ed-0181-4de2-bf33-1d3394b5cf09" containerName="cilium-agent" Feb 9 19:06:50.779398 kubelet[1772]: I0209 19:06:50.779172 1772 memory_manager.go:346] "RemoveStaleState removing state" podUID="073c80ed-0181-4de2-bf33-1d3394b5cf09" containerName="cilium-agent" Feb 9 19:06:50.785263 systemd[1]: Created slice kubepods-besteffort-poddfa85779_1408_4fd4_970b_e36be7a936cb.slice. 
Feb 9 19:06:50.793187 kubelet[1772]: W0209 19:06:50.793154 1772 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.200.8.47" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.47' and this object Feb 9 19:06:50.793348 kubelet[1772]: E0209 19:06:50.793195 1772 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.200.8.47" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.47' and this object Feb 9 19:06:50.796412 kubelet[1772]: I0209 19:06:50.796382 1772 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:50.800938 systemd[1]: Created slice kubepods-burstable-pod01e8a714_10d0_48c1_ba09_29e90098a4d2.slice. Feb 9 19:06:50.894184 kubelet[1772]: I0209 19:06:50.894132 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-cgroup\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 9 19:06:50.894184 kubelet[1772]: I0209 19:06:50.894201 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cni-path\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 9 19:06:50.894591 kubelet[1772]: I0209 19:06:50.894234 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-xtables-lock\") 
pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 9 19:06:50.894591 kubelet[1772]: I0209 19:06:50.894266 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-net\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 9 19:06:50.894591 kubelet[1772]: I0209 19:06:50.894299 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-hubble-tls\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 9 19:06:50.894591 kubelet[1772]: I0209 19:06:50.894342 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfa85779-1408-4fd4-970b-e36be7a936cb-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-9mgs9\" (UID: \"dfa85779-1408-4fd4-970b-e36be7a936cb\") " pod="kube-system/cilium-operator-f59cbd8c6-9mgs9" Feb 9 19:06:50.894591 kubelet[1772]: I0209 19:06:50.894379 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-run\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 9 19:06:50.894876 kubelet[1772]: I0209 19:06:50.894412 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-hostproc\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88" Feb 
9 19:06:50.894876 kubelet[1772]: I0209 19:06:50.894467 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-clustermesh-secrets\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.894876 kubelet[1772]: I0209 19:06:50.894501 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-ipsec-secrets\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.894876 kubelet[1772]: I0209 19:06:50.894543 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892pr\" (UniqueName: \"kubernetes.io/projected/dfa85779-1408-4fd4-970b-e36be7a936cb-kube-api-access-892pr\") pod \"cilium-operator-f59cbd8c6-9mgs9\" (UID: \"dfa85779-1408-4fd4-970b-e36be7a936cb\") " pod="kube-system/cilium-operator-f59cbd8c6-9mgs9"
Feb 9 19:06:50.894876 kubelet[1772]: I0209 19:06:50.894581 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-bpf-maps\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.895134 kubelet[1772]: I0209 19:06:50.894615 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-lib-modules\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.895134 kubelet[1772]: I0209 19:06:50.894656 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-kernel\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.895134 kubelet[1772]: I0209 19:06:50.894696 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-etc-cni-netd\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.895134 kubelet[1772]: I0209 19:06:50.894737 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-config-path\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:50.895134 kubelet[1772]: I0209 19:06:50.894776 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffglx\" (UniqueName: \"kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-kube-api-access-ffglx\") pod \"cilium-g7d88\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") " pod="kube-system/cilium-g7d88"
Feb 9 19:06:51.372464 kubelet[1772]: E0209 19:06:51.372399 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:51.997154 kubelet[1772]: E0209 19:06:51.997095 1772 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:06:51.997421 kubelet[1772]: E0209 19:06:51.997235 1772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-config-path podName:01e8a714-10d0-48c1-ba09-29e90098a4d2 nodeName:}" failed. No retries permitted until 2024-02-09 19:06:52.497202561 +0000 UTC m=+88.703487024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-config-path") pod "cilium-g7d88" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2") : failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:06:51.999266 kubelet[1772]: E0209 19:06:51.999234 1772 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:06:51.999387 kubelet[1772]: E0209 19:06:51.999320 1772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfa85779-1408-4fd4-970b-e36be7a936cb-cilium-config-path podName:dfa85779-1408-4fd4-970b-e36be7a936cb nodeName:}" failed. No retries permitted until 2024-02-09 19:06:52.499298034 +0000 UTC m=+88.705582597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dfa85779-1408-4fd4-970b-e36be7a936cb-cilium-config-path") pod "cilium-operator-f59cbd8c6-9mgs9" (UID: "dfa85779-1408-4fd4-970b-e36be7a936cb") : failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:06:52.373360 kubelet[1772]: E0209 19:06:52.373293 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:52.591192 env[1308]: time="2024-02-09T19:06:52.590733298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9mgs9,Uid:dfa85779-1408-4fd4-970b-e36be7a936cb,Namespace:kube-system,Attempt:0,}"
Feb 9 19:06:52.608490 env[1308]: time="2024-02-09T19:06:52.608418410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7d88,Uid:01e8a714-10d0-48c1-ba09-29e90098a4d2,Namespace:kube-system,Attempt:0,}"
Feb 9 19:06:52.627777 env[1308]: time="2024-02-09T19:06:52.626999452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:52.627777 env[1308]: time="2024-02-09T19:06:52.627041754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:52.627777 env[1308]: time="2024-02-09T19:06:52.627055354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:52.627777 env[1308]: time="2024-02-09T19:06:52.627192559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75d3eb45484e14aa92f0bbe5566ffcc1d68ddb14643e26081381246563646179 pid=3398 runtime=io.containerd.runc.v2
Feb 9 19:06:52.647141 systemd[1]: Started cri-containerd-75d3eb45484e14aa92f0bbe5566ffcc1d68ddb14643e26081381246563646179.scope.
Feb 9 19:06:52.671164 env[1308]: time="2024-02-09T19:06:52.671090578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:52.671397 env[1308]: time="2024-02-09T19:06:52.671365087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:52.671591 env[1308]: time="2024-02-09T19:06:52.671563194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:52.671865 env[1308]: time="2024-02-09T19:06:52.671827103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1 pid=3429 runtime=io.containerd.runc.v2
Feb 9 19:06:52.697011 systemd[1]: Started cri-containerd-f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1.scope.
Feb 9 19:06:52.725160 env[1308]: time="2024-02-09T19:06:52.725112647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9mgs9,Uid:dfa85779-1408-4fd4-970b-e36be7a936cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"75d3eb45484e14aa92f0bbe5566ffcc1d68ddb14643e26081381246563646179\""
Feb 9 19:06:52.727700 env[1308]: time="2024-02-09T19:06:52.727658535Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 19:06:52.744781 env[1308]: time="2024-02-09T19:06:52.744732326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7d88,Uid:01e8a714-10d0-48c1-ba09-29e90098a4d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\""
Feb 9 19:06:52.747600 env[1308]: time="2024-02-09T19:06:52.747568724Z" level=info msg="CreateContainer within sandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:06:52.773032 env[1308]: time="2024-02-09T19:06:52.772984303Z" level=info msg="CreateContainer within sandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\""
Feb 9 19:06:52.773533 env[1308]: time="2024-02-09T19:06:52.773495321Z" level=info msg="StartContainer for \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\""
Feb 9 19:06:52.790188 systemd[1]: Started cri-containerd-7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89.scope.
Feb 9 19:06:52.802274 systemd[1]: cri-containerd-7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89.scope: Deactivated successfully.
Feb 9 19:06:52.828260 env[1308]: time="2024-02-09T19:06:52.828201914Z" level=info msg="shim disconnected" id=7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89
Feb 9 19:06:52.828260 env[1308]: time="2024-02-09T19:06:52.828258416Z" level=warning msg="cleaning up after shim disconnected" id=7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89 namespace=k8s.io
Feb 9 19:06:52.828539 env[1308]: time="2024-02-09T19:06:52.828270416Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:52.836676 env[1308]: time="2024-02-09T19:06:52.836621305Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3498 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:06:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 19:06:52.837023 env[1308]: time="2024-02-09T19:06:52.836907215Z" level=error msg="copy shim log" error="read /proc/self/fd/68: file already closed"
Feb 9 19:06:52.840566 env[1308]: time="2024-02-09T19:06:52.840507040Z" level=error msg="Failed to pipe stdout of container \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\"" error="reading from a closed fifo"
Feb 9 19:06:52.840566 env[1308]: time="2024-02-09T19:06:52.840511340Z" level=error msg="Failed to pipe stderr of container \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\"" error="reading from a closed fifo"
Feb 9 19:06:52.844033 env[1308]: time="2024-02-09T19:06:52.843979260Z" level=error msg="StartContainer for \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 19:06:52.844262 kubelet[1772]: E0209 19:06:52.844236 1772 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89"
Feb 9 19:06:52.844419 kubelet[1772]: E0209 19:06:52.844399 1772 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 19:06:52.844419 kubelet[1772]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 19:06:52.844419 kubelet[1772]: rm /hostbin/cilium-mount
Feb 9 19:06:52.844419 kubelet[1772]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ffglx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-g7d88_kube-system(01e8a714-10d0-48c1-ba09-29e90098a4d2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 19:06:52.844686 kubelet[1772]: E0209 19:06:52.844483 1772 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g7d88" podUID=01e8a714-10d0-48c1-ba09-29e90098a4d2
Feb 9 19:06:52.863036 env[1308]: time="2024-02-09T19:06:52.863001418Z" level=info msg="StopPodSandbox for \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\""
Feb 9 19:06:52.863158 env[1308]: time="2024-02-09T19:06:52.863062320Z" level=info msg="Container to stop \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:06:52.871040 systemd[1]: cri-containerd-f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1.scope: Deactivated successfully.
Feb 9 19:06:52.900605 env[1308]: time="2024-02-09T19:06:52.900483415Z" level=info msg="shim disconnected" id=f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1
Feb 9 19:06:52.900924 env[1308]: time="2024-02-09T19:06:52.900899829Z" level=warning msg="cleaning up after shim disconnected" id=f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1 namespace=k8s.io
Feb 9 19:06:52.901016 env[1308]: time="2024-02-09T19:06:52.901001933Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:52.909759 env[1308]: time="2024-02-09T19:06:52.909721734Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3529 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:52.910077 env[1308]: time="2024-02-09T19:06:52.910043345Z" level=info msg="TearDown network for sandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" successfully"
Feb 9 19:06:52.910172 env[1308]: time="2024-02-09T19:06:52.910076646Z" level=info msg="StopPodSandbox for \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" returns successfully"
Feb 9 19:06:53.012155 kubelet[1772]: I0209 19:06:53.012092 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffglx\" (UniqueName: \"kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-kube-api-access-ffglx\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012155 kubelet[1772]: I0209 19:06:53.012154 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-bpf-maps\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012531 kubelet[1772]: I0209 19:06:53.012187 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-kernel\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012531 kubelet[1772]: I0209 19:06:53.012217 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-etc-cni-netd\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012531 kubelet[1772]: I0209 19:06:53.012252 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-config-path\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012531 kubelet[1772]: I0209 19:06:53.012281 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-cgroup\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012531 kubelet[1772]: I0209 19:06:53.012308 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cni-path\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012531 kubelet[1772]: I0209 19:06:53.012341 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-run\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012860 kubelet[1772]: I0209 19:06:53.012367 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-hostproc\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012860 kubelet[1772]: I0209 19:06:53.012403 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-ipsec-secrets\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012860 kubelet[1772]: I0209 19:06:53.012456 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-xtables-lock\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012860 kubelet[1772]: I0209 19:06:53.012507 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-net\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012860 kubelet[1772]: I0209 19:06:53.012542 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-hubble-tls\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.012860 kubelet[1772]: I0209 19:06:53.012580 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-clustermesh-secrets\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.013194 kubelet[1772]: I0209 19:06:53.012612 1772 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-lib-modules\") pod \"01e8a714-10d0-48c1-ba09-29e90098a4d2\" (UID: \"01e8a714-10d0-48c1-ba09-29e90098a4d2\") "
Feb 9 19:06:53.013194 kubelet[1772]: I0209 19:06:53.012699 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.015469 kubelet[1772]: I0209 19:06:53.013367 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.015469 kubelet[1772]: I0209 19:06:53.013447 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.015469 kubelet[1772]: I0209 19:06:53.013478 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.015469 kubelet[1772]: I0209 19:06:53.013503 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.015469 kubelet[1772]: W0209 19:06:53.013708 1772 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/01e8a714-10d0-48c1-ba09-29e90098a4d2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:06:53.016021 kubelet[1772]: I0209 19:06:53.015981 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.017814 kubelet[1772]: I0209 19:06:53.017782 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.017940 kubelet[1772]: I0209 19:06:53.017843 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.018092 kubelet[1772]: I0209 19:06:53.018063 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:06:53.018254 kubelet[1772]: I0209 19:06:53.018231 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.018402 kubelet[1772]: I0209 19:06:53.018380 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:06:53.018655 kubelet[1772]: I0209 19:06:53.018630 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-kube-api-access-ffglx" (OuterVolumeSpecName: "kube-api-access-ffglx") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "kube-api-access-ffglx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:06:53.021496 kubelet[1772]: I0209 19:06:53.021468 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:06:53.022742 kubelet[1772]: I0209 19:06:53.022715 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:06:53.024551 kubelet[1772]: I0209 19:06:53.024524 1772 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01e8a714-10d0-48c1-ba09-29e90098a4d2" (UID: "01e8a714-10d0-48c1-ba09-29e90098a4d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:06:53.113070 kubelet[1772]: I0209 19:06:53.113019 1772 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-clustermesh-secrets\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113070 kubelet[1772]: I0209 19:06:53.113070 1772 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-lib-modules\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113090 1772 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-xtables-lock\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113107 1772 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-net\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113122 1772 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-hubble-tls\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113136 1772 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-bpf-maps\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113151 1772 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-host-proc-sys-kernel\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113167 1772 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ffglx\" (UniqueName: \"kubernetes.io/projected/01e8a714-10d0-48c1-ba09-29e90098a4d2-kube-api-access-ffglx\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113182 1772 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-etc-cni-netd\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113354 kubelet[1772]: I0209 19:06:53.113197 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-config-path\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113655 kubelet[1772]: I0209 19:06:53.113217 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-run\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113655 kubelet[1772]: I0209 19:06:53.113234 1772 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-hostproc\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113655 kubelet[1772]: I0209 19:06:53.113251 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-ipsec-secrets\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113655 kubelet[1772]: I0209 19:06:53.113267 1772 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cilium-cgroup\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.113655 kubelet[1772]: I0209 19:06:53.113282 1772 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e8a714-10d0-48c1-ba09-29e90098a4d2-cni-path\") on node \"10.200.8.47\" DevicePath \"\""
Feb 9 19:06:53.373517 kubelet[1772]: E0209 19:06:53.373454 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:53.618675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1-rootfs.mount: Deactivated successfully.
Feb 9 19:06:53.618821 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1-shm.mount: Deactivated successfully.
Feb 9 19:06:53.618926 systemd[1]: var-lib-kubelet-pods-01e8a714\x2d10d0\x2d48c1\x2dba09\x2d29e90098a4d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffglx.mount: Deactivated successfully.
Feb 9 19:06:53.619036 systemd[1]: var-lib-kubelet-pods-01e8a714\x2d10d0\x2d48c1\x2dba09\x2d29e90098a4d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:06:53.619137 systemd[1]: var-lib-kubelet-pods-01e8a714\x2d10d0\x2d48c1\x2dba09\x2d29e90098a4d2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:06:53.619234 systemd[1]: var-lib-kubelet-pods-01e8a714\x2d10d0\x2d48c1\x2dba09\x2d29e90098a4d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:06:53.867481 kubelet[1772]: I0209 19:06:53.866618 1772 scope.go:115] "RemoveContainer" containerID="7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89"
Feb 9 19:06:53.870882 systemd[1]: Removed slice kubepods-burstable-pod01e8a714_10d0_48c1_ba09_29e90098a4d2.slice.
Feb 9 19:06:53.872004 env[1308]: time="2024-02-09T19:06:53.871610340Z" level=info msg="RemoveContainer for \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\""
Feb 9 19:06:53.878194 env[1308]: time="2024-02-09T19:06:53.878153263Z" level=info msg="RemoveContainer for \"7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89\" returns successfully"
Feb 9 19:06:53.896174 kubelet[1772]: I0209 19:06:53.896117 1772 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:06:53.896419 kubelet[1772]: E0209 19:06:53.896406 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01e8a714-10d0-48c1-ba09-29e90098a4d2" containerName="mount-cgroup"
Feb 9 19:06:53.896566 kubelet[1772]: I0209 19:06:53.896554 1772 memory_manager.go:346] "RemoveStaleState removing state" podUID="01e8a714-10d0-48c1-ba09-29e90098a4d2" containerName="mount-cgroup"
Feb 9 19:06:53.905168 systemd[1]: Created slice kubepods-burstable-pod6c99aa79_c232_4f47_921c_4d387a7c6e2a.slice.
Feb 9 19:06:54.022945 kubelet[1772]: I0209 19:06:54.022876 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c99aa79-c232-4f47-921c-4d387a7c6e2a-clustermesh-secrets\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023179 kubelet[1772]: I0209 19:06:54.022995 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c99aa79-c232-4f47-921c-4d387a7c6e2a-cilium-ipsec-secrets\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023179 kubelet[1772]: I0209 19:06:54.023046 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6cd\" (UniqueName: \"kubernetes.io/projected/6c99aa79-c232-4f47-921c-4d387a7c6e2a-kube-api-access-sc6cd\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023179 kubelet[1772]: I0209 19:06:54.023105 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c99aa79-c232-4f47-921c-4d387a7c6e2a-cilium-config-path\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023179 kubelet[1772]: I0209 19:06:54.023141 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-host-proc-sys-net\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023179 kubelet[1772]: I0209 19:06:54.023172 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c99aa79-c232-4f47-921c-4d387a7c6e2a-hubble-tls\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023479 kubelet[1772]: I0209 19:06:54.023238 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-cilium-run\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023479 kubelet[1772]: I0209 19:06:54.023276 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-cilium-cgroup\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023479 kubelet[1772]: I0209 19:06:54.023324 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-lib-modules\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023479 kubelet[1772]: I0209 19:06:54.023358 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-cni-path\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023479 kubelet[1772]: I0209 19:06:54.023391 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-etc-cni-netd\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023479 kubelet[1772]: I0209 19:06:54.023422 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-xtables-lock\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023790 kubelet[1772]: I0209 19:06:54.023505 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-host-proc-sys-kernel\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023790 kubelet[1772]: I0209 19:06:54.023581 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-hostproc\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.023790 kubelet[1772]: I0209 19:06:54.023643 1772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c99aa79-c232-4f47-921c-4d387a7c6e2a-bpf-maps\") pod \"cilium-cnzqs\" (UID: \"6c99aa79-c232-4f47-921c-4d387a7c6e2a\") " pod="kube-system/cilium-cnzqs"
Feb 9 19:06:54.093844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196484540.mount: Deactivated successfully.
Feb 9 19:06:54.214440 env[1308]: time="2024-02-09T19:06:54.214294298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnzqs,Uid:6c99aa79-c232-4f47-921c-4d387a7c6e2a,Namespace:kube-system,Attempt:0,}"
Feb 9 19:06:54.268045 env[1308]: time="2024-02-09T19:06:54.267843393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:54.268045 env[1308]: time="2024-02-09T19:06:54.267888295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:54.268045 env[1308]: time="2024-02-09T19:06:54.267902895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:54.268350 env[1308]: time="2024-02-09T19:06:54.268094902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5 pid=3558 runtime=io.containerd.runc.v2
Feb 9 19:06:54.285062 systemd[1]: Started cri-containerd-c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5.scope.
Feb 9 19:06:54.318688 env[1308]: time="2024-02-09T19:06:54.318640396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnzqs,Uid:6c99aa79-c232-4f47-921c-4d387a7c6e2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\""
Feb 9 19:06:54.322021 env[1308]: time="2024-02-09T19:06:54.321981608Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:06:54.347384 env[1308]: time="2024-02-09T19:06:54.347326358Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195\""
Feb 9 19:06:54.348360 env[1308]: time="2024-02-09T19:06:54.348325491Z" level=info msg="StartContainer for \"97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195\""
Feb 9 19:06:54.373900 kubelet[1772]: E0209 19:06:54.373866 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:54.378195 systemd[1]: Started cri-containerd-97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195.scope.
Feb 9 19:06:54.401843 kubelet[1772]: E0209 19:06:54.401810 1772 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:06:54.421420 env[1308]: time="2024-02-09T19:06:54.421379340Z" level=info msg="StartContainer for \"97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195\" returns successfully"
Feb 9 19:06:54.428294 systemd[1]: cri-containerd-97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195.scope: Deactivated successfully.
Feb 9 19:06:54.629302 env[1308]: time="2024-02-09T19:06:54.629246409Z" level=info msg="shim disconnected" id=97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195
Feb 9 19:06:54.629678 env[1308]: time="2024-02-09T19:06:54.629653323Z" level=warning msg="cleaning up after shim disconnected" id=97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195 namespace=k8s.io
Feb 9 19:06:54.629783 env[1308]: time="2024-02-09T19:06:54.629766227Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:54.647220 env[1308]: time="2024-02-09T19:06:54.647177211Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3646 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:54.652462 kubelet[1772]: I0209 19:06:54.652414 1772 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=01e8a714-10d0-48c1-ba09-29e90098a4d2 path="/var/lib/kubelet/pods/01e8a714-10d0-48c1-ba09-29e90098a4d2/volumes"
Feb 9 19:06:54.873484 env[1308]: time="2024-02-09T19:06:54.873416795Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:06:54.903945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840784177.mount: Deactivated successfully.
Feb 9 19:06:54.916402 env[1308]: time="2024-02-09T19:06:54.916343135Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa\""
Feb 9 19:06:54.917559 env[1308]: time="2024-02-09T19:06:54.917525974Z" level=info msg="StartContainer for \"c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa\""
Feb 9 19:06:54.973611 systemd[1]: Started cri-containerd-c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa.scope.
Feb 9 19:06:55.023097 env[1308]: time="2024-02-09T19:06:55.023029500Z" level=info msg="StartContainer for \"c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa\" returns successfully"
Feb 9 19:06:55.023097 systemd[1]: cri-containerd-c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa.scope: Deactivated successfully.
Feb 9 19:06:55.331945 env[1308]: time="2024-02-09T19:06:55.331879897Z" level=info msg="shim disconnected" id=c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa
Feb 9 19:06:55.331945 env[1308]: time="2024-02-09T19:06:55.331939599Z" level=warning msg="cleaning up after shim disconnected" id=c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa namespace=k8s.io
Feb 9 19:06:55.331945 env[1308]: time="2024-02-09T19:06:55.331951899Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:55.342402 env[1308]: time="2024-02-09T19:06:55.342353743Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:55.374784 kubelet[1772]: E0209 19:06:55.374732 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:55.396403 env[1308]: time="2024-02-09T19:06:55.396349025Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:55.400947 env[1308]: time="2024-02-09T19:06:55.400910576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:55.403590 env[1308]: time="2024-02-09T19:06:55.403560363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:55.404071 env[1308]: time="2024-02-09T19:06:55.404030579Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 19:06:55.406199 env[1308]: time="2024-02-09T19:06:55.406166049Z" level=info msg="CreateContainer within sandbox \"75d3eb45484e14aa92f0bbe5566ffcc1d68ddb14643e26081381246563646179\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 19:06:55.436023 env[1308]: time="2024-02-09T19:06:55.435972833Z" level=info msg="CreateContainer within sandbox \"75d3eb45484e14aa92f0bbe5566ffcc1d68ddb14643e26081381246563646179\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92613c6f08386cf1e6ac9274400026e3461d54cfaaae91d8c970eb8827b078b4\""
Feb 9 19:06:55.436892 env[1308]: time="2024-02-09T19:06:55.436858863Z" level=info msg="StartContainer for \"92613c6f08386cf1e6ac9274400026e3461d54cfaaae91d8c970eb8827b078b4\""
Feb 9 19:06:55.453968 systemd[1]: Started cri-containerd-92613c6f08386cf1e6ac9274400026e3461d54cfaaae91d8c970eb8827b078b4.scope.
Feb 9 19:06:55.483587 env[1308]: time="2024-02-09T19:06:55.483531603Z" level=info msg="StartContainer for \"92613c6f08386cf1e6ac9274400026e3461d54cfaaae91d8c970eb8827b078b4\" returns successfully"
Feb 9 19:06:55.620838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa-rootfs.mount: Deactivated successfully.
Feb 9 19:06:55.878838 env[1308]: time="2024-02-09T19:06:55.878630747Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:06:55.896532 kubelet[1772]: I0209 19:06:55.896490 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-9mgs9" podStartSLOduration=-9.223372030958323e+09 pod.CreationTimestamp="2024-02-09 19:06:50 +0000 UTC" firstStartedPulling="2024-02-09 19:06:52.72692941 +0000 UTC m=+88.933213873" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:55.89629233 +0000 UTC m=+92.102576793" watchObservedRunningTime="2024-02-09 19:06:55.896452936 +0000 UTC m=+92.102737399"
Feb 9 19:06:55.905692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470548062.mount: Deactivated successfully.
Feb 9 19:06:55.920691 env[1308]: time="2024-02-09T19:06:55.920643934Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be\""
Feb 9 19:06:55.921111 env[1308]: time="2024-02-09T19:06:55.921081549Z" level=info msg="StartContainer for \"50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be\""
Feb 9 19:06:55.939646 kubelet[1772]: W0209 19:06:55.939609 1772 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e8a714_10d0_48c1_ba09_29e90098a4d2.slice/cri-containerd-7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89.scope WatchSource:0}: container "7070b8d40bb44f326ff9a27d46c58b53904e76c8c1f8c184d0e25f4dfead5e89" in namespace "k8s.io": not found
Feb 9 19:06:55.945242 systemd[1]: Started cri-containerd-50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be.scope.
Feb 9 19:06:55.983876 systemd[1]: cri-containerd-50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be.scope: Deactivated successfully.
Feb 9 19:06:55.985967 env[1308]: time="2024-02-09T19:06:55.985928190Z" level=info msg="StartContainer for \"50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be\" returns successfully"
Feb 9 19:06:56.018248 env[1308]: time="2024-02-09T19:06:56.018195447Z" level=info msg="shim disconnected" id=50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be
Feb 9 19:06:56.018248 env[1308]: time="2024-02-09T19:06:56.018245348Z" level=warning msg="cleaning up after shim disconnected" id=50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be namespace=k8s.io
Feb 9 19:06:56.018560 env[1308]: time="2024-02-09T19:06:56.018256249Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:56.026650 env[1308]: time="2024-02-09T19:06:56.026612220Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3807 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:56.375157 kubelet[1772]: E0209 19:06:56.375100 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:56.619167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be-rootfs.mount: Deactivated successfully.
Feb 9 19:06:56.885659 env[1308]: time="2024-02-09T19:06:56.885597654Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:06:56.907980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458973083.mount: Deactivated successfully.
Feb 9 19:06:56.921014 env[1308]: time="2024-02-09T19:06:56.920962304Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282\""
Feb 9 19:06:56.921745 env[1308]: time="2024-02-09T19:06:56.921687127Z" level=info msg="StartContainer for \"74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282\""
Feb 9 19:06:56.940957 systemd[1]: Started cri-containerd-74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282.scope.
Feb 9 19:06:56.969615 systemd[1]: cri-containerd-74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282.scope: Deactivated successfully.
Feb 9 19:06:56.973930 env[1308]: time="2024-02-09T19:06:56.973830323Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99aa79_c232_4f47_921c_4d387a7c6e2a.slice/cri-containerd-74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282.scope/memory.events\": no such file or directory"
Feb 9 19:06:56.975452 env[1308]: time="2024-02-09T19:06:56.975386774Z" level=info msg="StartContainer for \"74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282\" returns successfully"
Feb 9 19:06:57.009244 env[1308]: time="2024-02-09T19:06:57.009179869Z" level=info msg="shim disconnected" id=74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282
Feb 9 19:06:57.009244 env[1308]: time="2024-02-09T19:06:57.009235771Z" level=warning msg="cleaning up after shim disconnected" id=74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282 namespace=k8s.io
Feb 9 19:06:57.009244 env[1308]: time="2024-02-09T19:06:57.009247971Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:57.018091 env[1308]: time="2024-02-09T19:06:57.018043053Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3863 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:57.375356 kubelet[1772]: E0209 19:06:57.375281 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:57.619284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282-rootfs.mount: Deactivated successfully.
Feb 9 19:06:57.891163 env[1308]: time="2024-02-09T19:06:57.891112025Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:06:57.922339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626654491.mount: Deactivated successfully.
Feb 9 19:06:57.931481 env[1308]: time="2024-02-09T19:06:57.931423117Z" level=info msg="CreateContainer within sandbox \"c8544376a30d4a26a1e5f735de89bee838d3a3dc30e6c4b9d322a10807893ba5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2\""
Feb 9 19:06:57.932021 env[1308]: time="2024-02-09T19:06:57.931987735Z" level=info msg="StartContainer for \"1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2\""
Feb 9 19:06:57.952714 systemd[1]: Started cri-containerd-1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2.scope.
Feb 9 19:06:57.986054 env[1308]: time="2024-02-09T19:06:57.985989865Z" level=info msg="StartContainer for \"1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2\" returns successfully"
Feb 9 19:06:58.306464 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:06:58.376272 kubelet[1772]: E0209 19:06:58.376205 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:58.713188 kubelet[1772]: I0209 19:06:58.712766 1772 setters.go:548] "Node became not ready" node="10.200.8.47" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:06:58.712705118 +0000 UTC m=+94.918989681 LastTransitionTime:2024-02-09 19:06:58.712705118 +0000 UTC m=+94.918989681 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:06:58.912896 kubelet[1772]: I0209 19:06:58.912847 1772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cnzqs" podStartSLOduration=5.912804436 pod.CreationTimestamp="2024-02-09 19:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:58.912082013 +0000 UTC m=+95.118366476" watchObservedRunningTime="2024-02-09 19:06:58.912804436 +0000 UTC m=+95.119088899"
Feb 9 19:06:59.073382 kubelet[1772]: W0209 19:06:59.073323 1772 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99aa79_c232_4f47_921c_4d387a7c6e2a.slice/cri-containerd-97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195.scope WatchSource:0}: task 97219d2410edeb03c718871af61907d6c0a92f1a225595e41b2edde759a5d195 not found: not found
Feb 9 19:06:59.377456 kubelet[1772]: E0209 19:06:59.377273 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:59.833824 systemd[1]: run-containerd-runc-k8s.io-1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2-runc.8tT6MS.mount: Deactivated successfully.
Feb 9 19:07:00.378073 kubelet[1772]: E0209 19:07:00.378017 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:00.825309 systemd-networkd[1443]: lxc_health: Link UP
Feb 9 19:07:00.875779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:07:00.875504 systemd-networkd[1443]: lxc_health: Gained carrier
Feb 9 19:07:01.378708 kubelet[1772]: E0209 19:07:01.378653 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:02.059818 systemd[1]: run-containerd-runc-k8s.io-1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2-runc.qS3uFa.mount: Deactivated successfully.
Feb 9 19:07:02.183723 kubelet[1772]: W0209 19:07:02.183053 1772 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99aa79_c232_4f47_921c_4d387a7c6e2a.slice/cri-containerd-c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa.scope WatchSource:0}: task c74e48414bdcb8b7aebae03c215fc8ddca7a2ed6372c65adf1c224d7eec7c9aa not found: not found
Feb 9 19:07:02.378912 kubelet[1772]: E0209 19:07:02.378870 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:02.439816 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Feb 9 19:07:03.380170 kubelet[1772]: E0209 19:07:03.380113 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:04.306644 kubelet[1772]: E0209 19:07:04.306599 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:04.328399 systemd[1]: run-containerd-runc-k8s.io-1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2-runc.W1zwr9.mount: Deactivated successfully.
Feb 9 19:07:04.381191 kubelet[1772]: E0209 19:07:04.381138 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:05.294496 kubelet[1772]: W0209 19:07:05.294451 1772 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99aa79_c232_4f47_921c_4d387a7c6e2a.slice/cri-containerd-50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be.scope WatchSource:0}: task 50164133ed64e01663cb49cbca368ce8f5e126f51f4141991642478d807ac6be not found: not found
Feb 9 19:07:05.381482 kubelet[1772]: E0209 19:07:05.381395 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:06.382615 kubelet[1772]: E0209 19:07:06.382546 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:06.495734 systemd[1]: run-containerd-runc-k8s.io-1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2-runc.rgZv9I.mount: Deactivated successfully.
Feb 9 19:07:07.383761 kubelet[1772]: E0209 19:07:07.383696 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:08.384831 kubelet[1772]: E0209 19:07:08.384762 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:08.401803 kubelet[1772]: W0209 19:07:08.401754 1772 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99aa79_c232_4f47_921c_4d387a7c6e2a.slice/cri-containerd-74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282.scope WatchSource:0}: task 74b2190e4d3e5b5360724879ca192be022c5d976d3b4e5a33f1edf31f57d5282 not found: not found
Feb 9 19:07:08.644479 systemd[1]: run-containerd-runc-k8s.io-1e059f0aaa5df0a89d3536e45c78502bc39f50a2c88e01eeccb4f648307ecbb2-runc.YFYgiX.mount: Deactivated successfully.
Feb 9 19:07:09.385170 kubelet[1772]: E0209 19:07:09.385117 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:10.385357 kubelet[1772]: E0209 19:07:10.385299 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:11.386411 kubelet[1772]: E0209 19:07:11.386350 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:12.386871 kubelet[1772]: E0209 19:07:12.386801 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:13.387925 kubelet[1772]: E0209 19:07:13.387854 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:14.388728 kubelet[1772]: E0209 19:07:14.388667 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:15.389504 kubelet[1772]: E0209 19:07:15.389420 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:16.390529 kubelet[1772]: E0209 19:07:16.390460 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:17.391010 kubelet[1772]: E0209 19:07:17.390946 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:18.391640 kubelet[1772]: E0209 19:07:18.391571 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:19.392708 kubelet[1772]: E0209 19:07:19.392639 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:20.393241 kubelet[1772]: E0209 19:07:20.393175 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:21.394154 kubelet[1772]: E0209 19:07:21.394085 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:22.394821 kubelet[1772]: E0209 19:07:22.394756 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:23.395428 kubelet[1772]: E0209 19:07:23.395367 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:24.300944 kubelet[1772]: E0209 19:07:24.300881 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:24.324506 env[1308]: time="2024-02-09T19:07:24.324452474Z" level=info msg="StopPodSandbox for \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\""
Feb 9 19:07:24.324963 env[1308]: time="2024-02-09T19:07:24.324558977Z" level=info msg="TearDown network for sandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" successfully"
Feb 9 19:07:24.324963 env[1308]: time="2024-02-09T19:07:24.324605078Z" level=info msg="StopPodSandbox for \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" returns successfully"
Feb 9 19:07:24.325162 env[1308]: time="2024-02-09T19:07:24.325113790Z" level=info msg="RemovePodSandbox for \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\""
Feb 9 19:07:24.325243 env[1308]: time="2024-02-09T19:07:24.325151590Z" level=info msg="Forcibly stopping sandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\""
Feb 9 19:07:24.325285 env[1308]: time="2024-02-09T19:07:24.325252393Z" level=info msg="TearDown network for sandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" successfully"
Feb 9 19:07:24.330582 env[1308]: time="2024-02-09T19:07:24.330542417Z" level=info msg="RemovePodSandbox \"f7efc6d2654c5ef65a822d82ab93443b2936003b9fdced9777324ab2bd53c5d1\" returns successfully"
Feb 9 19:07:24.330978 env[1308]: time="2024-02-09T19:07:24.330936426Z" level=info msg="StopPodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\""
Feb 9 19:07:24.331179 env[1308]: time="2024-02-09T19:07:24.331128431Z" level=info msg="TearDown network for sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" successfully"
Feb 9 19:07:24.331179 env[1308]: time="2024-02-09T19:07:24.331169732Z" level=info msg="StopPodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" returns successfully"
Feb 9 19:07:24.331559 env[1308]: time="2024-02-09T19:07:24.331517940Z" level=info msg="RemovePodSandbox for \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\""
Feb 9 19:07:24.331652 env[1308]: time="2024-02-09T19:07:24.331553841Z" level=info msg="Forcibly stopping sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\""
Feb 9 19:07:24.331652 env[1308]: time="2024-02-09T19:07:24.331634743Z" level=info msg="TearDown network for sandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" successfully"
Feb 9 19:07:24.337728 env[1308]: time="2024-02-09T19:07:24.337630383Z" level=info msg="RemovePodSandbox \"903b9892b2fd74933080cbeba96d3a6112af85fd5704d11c25e5fdbb8ed3fd6f\" returns successfully"
Feb 9 19:07:24.396008 kubelet[1772]: E0209 19:07:24.395980 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:25.396728 kubelet[1772]: E0209 19:07:25.396664 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:26.397235 kubelet[1772]: E0209 19:07:26.397164 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:27.398082 kubelet[1772]: E0209 19:07:27.398014 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:28.398757 kubelet[1772]: E0209 19:07:28.398688 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:29.134039 kubelet[1772]: E0209 19:07:29.133965 1772 controller.go:189] failed to update lease, error: Put "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.47?timeout=10s": context deadline exceeded
Feb 9 19:07:29.399715 kubelet[1772]: E0209 19:07:29.399564 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:29.450025 kubelet[1772]: E0209 19:07:29.449980 1772 controller.go:189] failed to update lease,
error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:60316->10.200.8.22:2379: read: connection timed out Feb 9 19:07:30.400797 kubelet[1772]: E0209 19:07:30.400682 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:31.400960 kubelet[1772]: E0209 19:07:31.400884 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:32.401775 kubelet[1772]: E0209 19:07:32.401708 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:33.402504 kubelet[1772]: E0209 19:07:33.402453 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:34.403350 kubelet[1772]: E0209 19:07:34.403266 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:35.403484 kubelet[1772]: E0209 19:07:35.403404 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:36.404186 kubelet[1772]: E0209 19:07:36.404089 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:37.404776 kubelet[1772]: E0209 19:07:37.404712 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:38.405070 kubelet[1772]: E0209 19:07:38.404995 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:39.004076 kubelet[1772]: E0209 19:07:39.004020 1772 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T19:07:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T19:07:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T19:07:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T19:07:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":57035507},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22\\\",\\\"registry.k8s.io/kube-proxy:v1.26.13\\\"],\\\"sizeBytes\\\":23641774},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"10.200.8.47\": Patch \"https://10.200.8.39:6443/api/v1/nodes/10.200.8.47/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:07:39.406025 kubelet[1772]: E0209 19:07:39.405955 1772 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:39.450403 kubelet[1772]: E0209 19:07:39.450228 1772 controller.go:189] failed to update lease, error: Put "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.47?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:07:40.406767 kubelet[1772]: E0209 19:07:40.406696 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:41.407453 kubelet[1772]: E0209 19:07:41.407368 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:42.408571 kubelet[1772]: E0209 19:07:42.408509 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:43.409449 kubelet[1772]: E0209 19:07:43.409360 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:44.300770 kubelet[1772]: E0209 19:07:44.300710 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:44.409615 kubelet[1772]: E0209 19:07:44.409553 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:45.410069 kubelet[1772]: E0209 19:07:45.410009 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:46.411237 kubelet[1772]: E0209 19:07:46.411182 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:47.411811 kubelet[1772]: E0209 19:07:47.411743 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:07:48.412049 kubelet[1772]: E0209 19:07:48.411981 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:49.004782 kubelet[1772]: E0209 19:07:49.004685 1772 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.47\": Get \"https://10.200.8.39:6443/api/v1/nodes/10.200.8.47?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:07:49.412574 kubelet[1772]: E0209 19:07:49.412508 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:49.451576 kubelet[1772]: E0209 19:07:49.451521 1772 controller.go:189] failed to update lease, error: Put "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.47?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:07:50.412965 kubelet[1772]: E0209 19:07:50.412893 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:51.413159 kubelet[1772]: E0209 19:07:51.413086 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:52.414279 kubelet[1772]: E0209 19:07:52.414229 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:53.415179 kubelet[1772]: E0209 19:07:53.415106 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:54.415755 kubelet[1772]: E0209 19:07:54.415715 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:55.416033 kubelet[1772]: E0209 19:07:55.415968 1772 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:56.416337 kubelet[1772]: E0209 19:07:56.416286 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:57.417282 kubelet[1772]: E0209 19:07:57.417213 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:58.418258 kubelet[1772]: E0209 19:07:58.418215 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:59.005754 kubelet[1772]: E0209 19:07:59.005693 1772 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.47\": Get \"https://10.200.8.39:6443/api/v1/nodes/10.200.8.47?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:07:59.419921 kubelet[1772]: E0209 19:07:59.419860 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:59.452489 kubelet[1772]: E0209 19:07:59.452394 1772 controller.go:189] failed to update lease, error: Put "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.47?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:07:59.452489 kubelet[1772]: I0209 19:07:59.452479 1772 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 9 19:08:00.420297 kubelet[1772]: E0209 19:08:00.420231 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:01.421330 kubelet[1772]: E0209 19:08:01.421259 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:08:02.422141 kubelet[1772]: E0209 19:08:02.422070 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:03.422302 kubelet[1772]: E0209 19:08:03.422228 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:04.300941 kubelet[1772]: E0209 19:08:04.300886 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:04.422975 kubelet[1772]: E0209 19:08:04.422913 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:04.463471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.475996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.489337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.501831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.515878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.529039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.534414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.539773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.545845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.551868 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.552027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.564094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.564402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.575984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.610539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.610831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.610970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.611105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.611236 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.611368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.611533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.621960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.627668 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.633189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.638946 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.639176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.650678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.657184 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.657348 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.668139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.680110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.680374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.680559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.696180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.701854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.701975 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.707533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.713302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.719075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:08:04.719220 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.730609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.730901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.741717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.747201 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.757899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.758064 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.758197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.768594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.774155 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.774297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.785606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.812894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.813075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.823877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:08:04.824035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.824171 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.824299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.824430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.824576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.844129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.865846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.865991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.866104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.866213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.887646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.887843 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.887982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.888115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.888247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:08:04.888371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.903689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931062 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931514 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931779 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.931909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.941639 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.941918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.953515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.953770 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.964025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.969620 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.975216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.985867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.986053 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.991667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:04.997011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.002441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.009556 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.015071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.025774 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.025965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.031422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.042411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.053261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.065602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.065749 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.065883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.066006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.066122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.077642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.099516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.121079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.131589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137499 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.137875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.138004 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.138129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.148540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.159420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.165261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.165422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.165572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.175833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.176044 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.186629 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.192098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.192239 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.203000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.203219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.214346 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.236176 kernel: 
Feb 9 19:08:05.241737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[identical hv_storvsc messages repeated from 19:08:05.241737 through 19:08:06.566837; duplicates omitted]
Feb 9 19:08:05.424977 kubelet[1772]: E0209 19:08:05.424912 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:06.428797 kubelet[1772]: E0209 19:08:06.428213 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:06.566837 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001