Feb 9 19:16:37.202576 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:16:37.202608 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:16:37.202623 kernel: BIOS-provided physical RAM map:
Feb 9 19:16:37.202633 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:16:37.202643 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:16:37.202684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:16:37.202699 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:16:37.202710 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:16:37.202720 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:16:37.202730 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:16:37.202740 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:16:37.202750 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:16:37.202760 kernel: NX (Execute Disable) protection: active
Feb 9 19:16:37.202770 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:16:37.202786 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:16:37.202798 kernel: random: crng init done
Feb 9 19:16:37.202809 kernel: SMBIOS 3.1.0 present.
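As an editorial cross-check (not part of the log), the four ranges the BIOS-e820 map above marks "usable" can be summed; the result lands within 4 KiB of the `8387460K` total the kernel reports later, the difference being the first page that a subsequent `e820: update` re-reserves:

```python
# Sanity-check sketch: sum the ranges the BIOS-e820 map marks "usable".
# End addresses in the log are inclusive, hence the +1.
usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]
total_bytes = sum(end - start + 1 for start, end in usable)
print(total_bytes // 1024, "KiB")  # 8387464 KiB
```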
Feb 9 19:16:37.202820 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:16:37.202831 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:16:37.202842 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:16:37.202853 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:16:37.202864 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:16:37.202878 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:16:37.202889 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:16:37.202900 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:16:37.202911 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:16:37.202923 kernel: tsc: Detected 2593.906 MHz processor
Feb 9 19:16:37.202935 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:16:37.202946 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:16:37.202958 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:16:37.202969 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:16:37.202980 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:16:37.202994 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:16:37.203005 kernel: Using GB pages for direct mapping
Feb 9 19:16:37.203016 kernel: Secure boot disabled
Feb 9 19:16:37.203027 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:16:37.203036 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:16:37.203046 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203057 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203069 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:16:37.203087 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:16:37.203099 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203111 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203123 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203135 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203146 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203173 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203193 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:16:37.203204 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:16:37.203215 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:16:37.203226 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:16:37.203238 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:16:37.203250 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:16:37.203262 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:16:37.203276 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:16:37.203285 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:16:37.203297 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:16:37.203308 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:16:37.203320 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:16:37.203332 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:16:37.203342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:16:37.203353 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:16:37.203365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:16:37.203377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:16:37.203390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:16:37.203407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:16:37.203416 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:16:37.203427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:16:37.203437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:16:37.203448 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:16:37.203460 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:16:37.203471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:16:37.203484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:16:37.203495 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:16:37.203506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:16:37.203518 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:16:37.203530 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:16:37.203542 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:16:37.203553 kernel: Zone ranges:
Feb 9 19:16:37.203565 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:16:37.203577 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:16:37.203592 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:16:37.203605 kernel: Movable zone start for each node
Feb 9 19:16:37.203617 kernel: Early memory node ranges
Feb 9 19:16:37.203630 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:16:37.203642 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:16:37.203668 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:16:37.203681 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:16:37.203693 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:16:37.203706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:16:37.203721 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:16:37.203733 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:16:37.203746 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:16:37.203758 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:16:37.203771 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:16:37.203783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:16:37.203796 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:16:37.203808 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:16:37.203821 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:16:37.203835 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:16:37.203847 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:16:37.203860 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:16:37.203873 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:16:37.203886 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:16:37.203898 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:16:37.203911 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:16:37.203923 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:16:37.203936 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:16:37.203951 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:16:37.203964 kernel: Policy zone: Normal
Feb 9 19:16:37.203978 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:16:37.203991 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:16:37.204004 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:16:37.204016 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:16:37.204029 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:16:37.204042 kernel: Memory: 8073732K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 313468K reserved, 0K cma-reserved)
Feb 9 19:16:37.204057 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:16:37.204070 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:16:37.204091 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:16:37.204106 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:16:37.204120 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:16:37.204133 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:16:37.204147 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:16:37.204160 kernel: Tracing variant of Tasks RCU enabled.
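The kernel command line logged above is a flat list of `key=value` pairs and bare flags. A minimal editorial sketch (hypothetical helper, not part of the boot process) of how such a line decomposes:

```python
# Hypothetical helper: split a kernel command line into flag and key=value
# parameters. Assumes no quoted values, which holds for the line logged above.
# Repeated keys (e.g. the two console= entries) keep only the last value here.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare tokens become boolean flags
    return params

params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=ttyS0,115200n8 "
    "flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin"
)
print(params["root"])            # LABEL=ROOT
print(params["flatcar.oem.id"])  # azure
```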
Feb 9 19:16:37.204173 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:16:37.204186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:16:37.204199 kernel: Using NULL legacy PIC
Feb 9 19:16:37.204218 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:16:37.204231 kernel: Console: colour dummy device 80x25
Feb 9 19:16:37.204242 kernel: printk: console [tty1] enabled
Feb 9 19:16:37.204255 kernel: printk: console [ttyS0] enabled
Feb 9 19:16:37.204267 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:16:37.204283 kernel: ACPI: Core revision 20210730
Feb 9 19:16:37.204296 kernel: Failed to register legacy timer interrupt
Feb 9 19:16:37.204308 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:16:37.204320 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:16:37.204332 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 9 19:16:37.204345 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:16:37.204357 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:16:37.204370 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:16:37.204382 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:16:37.204394 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:16:37.204409 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:16:37.204421 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:16:37.204433 kernel: RETBleed: Vulnerable
Feb 9 19:16:37.204446 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:16:37.204458 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:16:37.204470 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:16:37.204482 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:16:37.204494 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:16:37.204506 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:16:37.204518 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:16:37.204533 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:16:37.204545 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:16:37.204557 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:16:37.204569 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:16:37.204581 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:16:37.204593 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:16:37.204606 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:16:37.204618 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:16:37.204630 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:16:37.204642 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:16:37.204668 kernel: LSM: Security Framework initializing
Feb 9 19:16:37.204680 kernel: SELinux: Initializing.
Feb 9 19:16:37.204694 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:16:37.204707 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:16:37.204719 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:16:37.204732 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:16:37.204744 kernel: signal: max sigframe size: 3632
Feb 9 19:16:37.204757 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:16:37.204769 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:16:37.204781 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:16:37.204794 kernel: x86: Booting SMP configuration:
Feb 9 19:16:37.204806 kernel: .... node #0, CPUs: #1
Feb 9 19:16:37.204821 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:16:37.204834 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:16:37.204846 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:16:37.204859 kernel: smpboot: Max logical packages: 1
Feb 9 19:16:37.204871 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:16:37.204883 kernel: devtmpfs: initialized
Feb 9 19:16:37.204895 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:16:37.204908 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:16:37.204923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:16:37.204935 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:16:37.204948 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:16:37.204960 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:16:37.204971 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:16:37.204983 kernel: audit: type=2000 audit(1707506195.025:1): state=initialized audit_enabled=0 res=1
Feb 9 19:16:37.204995 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:16:37.205008 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:16:37.205021 kernel: cpuidle: using governor menu
Feb 9 19:16:37.205036 kernel: ACPI: bus type PCI registered
Feb 9 19:16:37.205061 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:16:37.205073 kernel: dca service started, version 1.12.1
Feb 9 19:16:37.205086 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:16:37.205099 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:16:37.205111 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:16:37.205127 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:16:37.205138 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:16:37.205149 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:16:37.205164 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:16:37.205176 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:16:37.205189 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:16:37.205202 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:16:37.205215 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:16:37.205228 kernel: ACPI: Interpreter enabled
Feb 9 19:16:37.205242 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:16:37.205255 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:16:37.205269 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:16:37.205285 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:16:37.205298 kernel: iommu: Default domain type: Translated
Feb 9 19:16:37.205311 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:16:37.205325 kernel: vgaarb: loaded
Feb 9 19:16:37.205339 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:16:37.205352 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:16:37.205365 kernel: PTP clock support registered
Feb 9 19:16:37.205379 kernel: Registered efivars operations
Feb 9 19:16:37.205392 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:16:37.205405 kernel: PCI: System does not support PCI
Feb 9 19:16:37.205420 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:16:37.205434 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:16:37.205447 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:16:37.205460 kernel: pnp: PnP ACPI init
Feb 9 19:16:37.205472 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:16:37.205486 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:16:37.205499 kernel: NET: Registered PF_INET protocol family
Feb 9 19:16:37.205512 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:16:37.205528 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:16:37.205541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:16:37.205554 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:16:37.205567 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:16:37.205579 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:16:37.205591 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:16:37.205604 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:16:37.205616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:16:37.205629 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:16:37.205644 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:16:37.212933 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:16:37.212950 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:16:37.212959 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:16:37.212970 kernel: Initialise system trusted keyrings
Feb 9 19:16:37.212978 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:16:37.212988 kernel: Key type asymmetric registered
Feb 9 19:16:37.212996 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:16:37.213004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:16:37.213017 kernel: io scheduler mq-deadline registered
Feb 9 19:16:37.213028 kernel: io scheduler kyber registered
Feb 9 19:16:37.213036 kernel: io scheduler bfq registered
Feb 9 19:16:37.213043 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:16:37.213054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:16:37.213062 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:16:37.213072 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:16:37.213079 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:16:37.213221 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:16:37.213325 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:16:36 UTC (1707506196)
Feb 9 19:16:37.213455 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:16:37.213474 kernel: fail to initialize ptp_kvm
Feb 9 19:16:37.213490 kernel: intel_pstate: CPU model not supported
Feb 9 19:16:37.213505 kernel: efifb: probing for efifb
Feb 9 19:16:37.213519 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:16:37.213533 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:16:37.213548 kernel: efifb: scrolling: redraw
Feb 9 19:16:37.213569 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:16:37.213583 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:16:37.213599 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:16:37.213614 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:16:37.213628 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:16:37.213644 kernel: Segment Routing with IPv6
Feb 9 19:16:37.213672 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:16:37.213687 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:16:37.213701 kernel: Key type dns_resolver registered
Feb 9 19:16:37.213720 kernel: IPI shorthand broadcast: enabled
Feb 9 19:16:37.213735 kernel: sched_clock: Marking stable (967250300, 25749900)->(1215137800, -222137600)
Feb 9 19:16:37.213750 kernel: registered taskstats version 1
Feb 9 19:16:37.213765 kernel: Loading compiled-in X.509 certificates
Feb 9 19:16:37.213780 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:16:37.213795 kernel: Key type .fscrypt registered
Feb 9 19:16:37.213814 kernel: Key type fscrypt-provisioning registered
Feb 9 19:16:37.213828 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:16:37.213847 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:16:37.213861 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:16:37.213876 kernel: ima: No architecture policies found
Feb 9 19:16:37.213891 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:16:37.213906 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:16:37.213923 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:16:37.213937 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:16:37.213951 kernel: Run /init as init process
Feb 9 19:16:37.213965 kernel: with arguments:
Feb 9 19:16:37.213979 kernel: /init
Feb 9 19:16:37.213998 kernel: with environment:
Feb 9 19:16:37.214011 kernel: HOME=/
Feb 9 19:16:37.214025 kernel: TERM=linux
Feb 9 19:16:37.214041 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:16:37.214058 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:16:37.214077 systemd[1]: Detected virtualization microsoft.
Feb 9 19:16:37.214094 systemd[1]: Detected architecture x86-64.
Feb 9 19:16:37.214114 systemd[1]: Running in initrd.
Feb 9 19:16:37.214129 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:16:37.214144 systemd[1]: Hostname set to .
Feb 9 19:16:37.214160 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:16:37.214174 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:16:37.214187 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:16:37.222143 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:16:37.222167 systemd[1]: Reached target paths.target.
Feb 9 19:16:37.222179 systemd[1]: Reached target slices.target.
Feb 9 19:16:37.233806 systemd[1]: Reached target swap.target.
Feb 9 19:16:37.233817 systemd[1]: Reached target timers.target.
Feb 9 19:16:37.233826 systemd[1]: Listening on iscsid.socket.
Feb 9 19:16:37.233835 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:16:37.233843 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:16:37.233851 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:16:37.233859 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:16:37.233874 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:16:37.233884 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:16:37.233892 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:16:37.233899 systemd[1]: Reached target sockets.target.
Feb 9 19:16:37.233908 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:16:37.233916 systemd[1]: Finished network-cleanup.service.
Feb 9 19:16:37.233924 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:16:37.233932 systemd[1]: Starting systemd-journald.service...
Feb 9 19:16:37.233940 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:16:37.233950 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:16:37.233958 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:16:37.233966 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:16:37.233978 kernel: audit: type=1130 audit(1707506197.221:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.233988 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:16:37.234004 systemd-journald[183]: Journal started
Feb 9 19:16:37.234068 systemd-journald[183]: Runtime Journal (/run/log/journal/bee6fcaea8bd474da080be968eb2b55b) is 8.0M, max 159.0M, 151.0M free.
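The `audit(1707506197.221:2)` token in the kernel audit line above encodes a Unix epoch timestamp plus a record serial number; as an editorial aside, converting it confirms it matches the journal's own `19:16:37.221000` entry timestamp:

```python
from datetime import datetime, timezone

# The numeric part of audit(1707506197.221:2) is seconds since the Unix epoch;
# the suffix after ':' is the audit record serial number.
stamp, serial = "1707506197.221:2".split(":")
when = datetime.fromtimestamp(float(stamp), tz=timezone.utc)
print(when.isoformat(timespec="milliseconds"), "serial", serial)
# 2024-02-09T19:16:37.221+00:00 serial 2
```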
Feb 9 19:16:37.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.203427 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:16:37.255303 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:16:37.257979 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:16:37.269658 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:16:37.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.281062 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:16:37.300371 kernel: audit: type=1130 audit(1707506197.269:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.300402 systemd[1]: Started systemd-journald.service.
Feb 9 19:16:37.283832 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:16:37.300800 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:16:37.325329 kernel: Bridge firewalling registered
Feb 9 19:16:37.325365 kernel: audit: type=1130 audit(1707506197.299:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.321203 systemd[1]: Started systemd-resolved.service.
Feb 9 19:16:37.325485 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:16:37.325693 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:16:37.330127 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:16:37.340611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:16:37.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.361847 kernel: audit: type=1130 audit(1707506197.320:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.370681 kernel: SCSI subsystem initialized
Feb 9 19:16:37.376995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:16:37.441208 kernel: audit: type=1130 audit(1707506197.324:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.441255 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:16:37.441280 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:16:37.441298 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:16:37.441313 kernel: audit: type=1130 audit(1707506197.379:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.441329 kernel: audit: type=1130 audit(1707506197.384:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.382509 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:16:37.394666 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:16:37.447555 dracut-cmdline[201]: dracut-dracut-053
Feb 9 19:16:37.447555 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 9 19:16:37.447555 dracut-cmdline[201]: BEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:16:37.486829 kernel: audit: type=1130 audit(1707506197.451:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.406685 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:16:37.449991 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:16:37.465310 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:16:37.495820 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:16:37.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.511668 kernel: audit: type=1130 audit(1707506197.497:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.515669 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:16:37.528665 kernel: iscsi: registered transport (tcp)
Feb 9 19:16:37.553991 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:16:37.554044 kernel: QLogic iSCSI HBA Driver
Feb 9 19:16:37.582832 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:16:37.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:37.588584 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:16:37.639677 kernel: raid6: avx512x4 gen() 18499 MB/s
Feb 9 19:16:37.659666 kernel: raid6: avx512x4 xor() 7224 MB/s
Feb 9 19:16:37.679667 kernel: raid6: avx512x2 gen() 18511 MB/s
Feb 9 19:16:37.700669 kernel: raid6: avx512x2 xor() 29583 MB/s
Feb 9 19:16:37.720664 kernel: raid6: avx512x1 gen() 18424 MB/s
Feb 9 19:16:37.740666 kernel: raid6: avx512x1 xor() 26720 MB/s
Feb 9 19:16:37.761666 kernel: raid6: avx2x4 gen() 18418 MB/s
Feb 9 19:16:37.781703 kernel: raid6: avx2x4 xor() 6954 MB/s
Feb 9 19:16:37.801704 kernel: raid6: avx2x2 gen() 17655 MB/s
Feb 9 19:16:37.822697 kernel: raid6: avx2x2 xor() 20813 MB/s
Feb 9 19:16:37.842683 kernel: raid6: avx2x1 gen() 13441 MB/s
Feb 9 19:16:37.862693 kernel: raid6: avx2x1 xor() 18583 MB/s
Feb 9 19:16:37.883690 kernel: raid6: sse2x4 gen() 11358 MB/s
Feb 9 19:16:37.903692 kernel: raid6: sse2x4 xor() 5764 MB/s
Feb 9 19:16:37.923688 kernel: raid6: sse2x2 gen() 12402 MB/s
Feb 9 19:16:37.944686 kernel: raid6: sse2x2 xor() 7215 MB/s
Feb 9 19:16:37.964680 kernel: raid6: sse2x1 gen() 10482 MB/s
Feb 9 19:16:37.988414 kernel: raid6: sse2x1 xor() 5549 MB/s
Feb 9 19:16:37.988486 kernel: raid6: using algorithm avx512x2 gen() 18511 MB/s
Feb 9 19:16:37.988497 kernel: raid6: .... xor() 29583 MB/s, rmw enabled
Feb 9 19:16:37.992520 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 19:16:38.013678 kernel: xor: automatically using best checksumming function avx
Feb 9 19:16:38.115683 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:16:38.125096 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:16:38.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:38.129000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:16:38.129000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:16:38.131004 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:16:38.155838 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Feb 9 19:16:38.163470 systemd[1]: Started systemd-udevd.service.
Feb 9 19:16:38.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:38.172761 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:16:38.191973 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Feb 9 19:16:38.223976 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:16:38.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:38.230424 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:16:38.274903 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:16:38.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:38.341674 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:16:38.372071 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:16:38.372765 kernel: hv_vmbus: Vmbus version:5.2
Feb 9 19:16:38.372962 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:16:38.405610 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 19:16:38.405735 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 19:16:38.410673 kernel: scsi host0: storvsc_host_t
Feb 9 19:16:38.430463 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 19:16:38.430535 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 19:16:38.441361 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 19:16:38.441416 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 19:16:38.441456 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:16:38.447679 kernel: scsi host1: storvsc_host_t
Feb 9 19:16:38.462677 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 19:16:38.468689 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 19:16:38.477698 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 19:16:38.477904 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 19:16:38.478056 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:16:38.489671 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 19:16:38.503876 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 19:16:38.504092 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 19:16:38.504217 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 19:16:38.511680 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 19:16:38.511959 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 19:16:38.517675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:16:38.522673 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 19:16:38.585893 kernel: hv_netvsc 002248a2-8454-0022-48a2-8454002248a2 eth0: VF slot 1 added
Feb 9 19:16:38.595673 kernel: hv_vmbus: registering driver hv_pci
Feb 9 19:16:38.601671 kernel: hv_pci 90b3e339-6dac-412a-ad66-01c28fe563f5: PCI VMBus probing: Using version 0x10004
Feb 9 19:16:38.614365 kernel: hv_pci 90b3e339-6dac-412a-ad66-01c28fe563f5: PCI host bridge to bus 6dac:00
Feb 9 19:16:38.614560 kernel: pci_bus 6dac:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 9 19:16:38.614705 kernel: pci_bus 6dac:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 19:16:38.625116 kernel: pci 6dac:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 9 19:16:38.635194 kernel: pci 6dac:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:16:38.657779 kernel: pci 6dac:00:02.0: enabling Extended Tags
Feb 9 19:16:38.671671 kernel: pci 6dac:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6dac:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 9 19:16:38.681796 kernel: pci_bus 6dac:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 19:16:38.681990 kernel: pci 6dac:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:16:38.776678 kernel: mlx5_core 6dac:00:02.0: firmware version: 14.30.1224
Feb 9 19:16:38.935685 kernel: mlx5_core 6dac:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 19:16:38.991329 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:16:39.081278 kernel: mlx5_core 6dac:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 9 19:16:39.081530 kernel: mlx5_core 6dac:00:02.0: mlx5e_tc_post_act_init:40:(pid 358): firmware level support is missing
Feb 9 19:16:39.081671 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (425)
Feb 9 19:16:39.100676 kernel: hv_netvsc 002248a2-8454-0022-48a2-8454002248a2 eth0: VF registering: eth1
Feb 9 19:16:39.100892 kernel: mlx5_core 6dac:00:02.0 eth1: joined to eth0
Feb 9 19:16:39.103580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:16:39.118671 kernel: mlx5_core 6dac:00:02.0 enP28076s1: renamed from eth1
Feb 9 19:16:39.276100 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:16:39.333780 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:16:39.336697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:16:39.343736 systemd[1]: Starting disk-uuid.service...
Feb 9 19:16:39.363426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:16:39.370681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:16:40.380683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:16:40.380951 disk-uuid[555]: The operation has completed successfully.
Feb 9 19:16:40.469597 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:16:40.469811 systemd[1]: Finished disk-uuid.service.
Feb 9 19:16:40.475255 systemd[1]: Starting verity-setup.service...
Feb 9 19:16:40.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:40.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:40.544672 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 19:16:40.921310 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:16:40.927482 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:16:40.933435 systemd[1]: Finished verity-setup.service.
Feb 9 19:16:40.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.009723 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:16:41.009748 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:16:41.015071 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:16:41.019738 systemd[1]: Starting ignition-setup.service...
Feb 9 19:16:41.022931 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:16:41.058990 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:16:41.059055 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:16:41.059077 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:16:41.116648 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:16:41.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.122000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:16:41.123750 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:16:41.150851 systemd-networkd[796]: lo: Link UP
Feb 9 19:16:41.150860 systemd-networkd[796]: lo: Gained carrier
Feb 9 19:16:41.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.157558 systemd-networkd[796]: Enumeration completed
Feb 9 19:16:41.195186 kernel: kauditd_printk_skb: 12 callbacks suppressed
Feb 9 19:16:41.195225 kernel: audit: type=1130 audit(1707506201.160:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.157698 systemd[1]: Started systemd-networkd.service.
Feb 9 19:16:41.160524 systemd[1]: Reached target network.target.
Feb 9 19:16:41.174769 systemd[1]: Starting iscsiuio.service...
Feb 9 19:16:41.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.225687 kernel: audit: type=1130 audit(1707506201.204:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.191211 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:16:41.199674 systemd[1]: Started iscsiuio.service.
Feb 9 19:16:41.223365 systemd[1]: Starting iscsid.service...
Feb 9 19:16:41.230279 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:16:41.241296 iscsid[805]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:16:41.241296 iscsid[805]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 19:16:41.241296 iscsid[805]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:16:41.241296 iscsid[805]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:16:41.268089 iscsid[805]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:16:41.268089 iscsid[805]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:16:41.276890 systemd[1]: Started iscsid.service.
Feb 9 19:16:41.280278 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:16:41.309258 kernel: mlx5_core 6dac:00:02.0 enP28076s1: Link up
Feb 9 19:16:41.309594 kernel: audit: type=1130 audit(1707506201.279:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.317480 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:16:41.322388 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:16:41.348044 kernel: audit: type=1130 audit(1707506201.320:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.325331 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:16:41.342847 systemd[1]: Reached target remote-fs.target.
Feb 9 19:16:41.353549 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:16:41.366042 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:16:41.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.387692 kernel: audit: type=1130 audit(1707506201.368:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.387726 kernel: hv_netvsc 002248a2-8454-0022-48a2-8454002248a2 eth0: Data path switched to VF: enP28076s1
Feb 9 19:16:41.399968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:16:41.399716 systemd-networkd[796]: enP28076s1: Link UP
Feb 9 19:16:41.399870 systemd-networkd[796]: eth0: Link UP
Feb 9 19:16:41.405075 systemd-networkd[796]: eth0: Gained carrier
Feb 9 19:16:41.411881 systemd-networkd[796]: enP28076s1: Gained carrier
Feb 9 19:16:41.415208 systemd[1]: Finished ignition-setup.service.
Feb 9 19:16:41.437145 kernel: audit: type=1130 audit(1707506201.418:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.421378 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:16:41.460846 systemd-networkd[796]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:16:42.743923 systemd-networkd[796]: eth0: Gained IPv6LL
Feb 9 19:16:45.170476 ignition[820]: Ignition 2.14.0
Feb 9 19:16:45.170493 ignition[820]: Stage: fetch-offline
Feb 9 19:16:45.170611 ignition[820]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:45.170687 ignition[820]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:45.257697 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:45.257983 ignition[820]: parsed url from cmdline: ""
Feb 9 19:16:45.257988 ignition[820]: no config URL provided
Feb 9 19:16:45.257997 ignition[820]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:16:45.258008 ignition[820]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:16:45.258015 ignition[820]: failed to fetch config: resource requires networking
Feb 9 19:16:45.277052 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:16:45.302601 kernel: audit: type=1130 audit(1707506205.281:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.263841 ignition[820]: Ignition finished successfully
Feb 9 19:16:45.299151 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:16:45.318229 ignition[826]: Ignition 2.14.0
Feb 9 19:16:45.318240 ignition[826]: Stage: fetch
Feb 9 19:16:45.318438 ignition[826]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:45.318469 ignition[826]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:45.320645 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:45.336489 ignition[826]: parsed url from cmdline: ""
Feb 9 19:16:45.336590 ignition[826]: no config URL provided
Feb 9 19:16:45.336899 ignition[826]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:16:45.336995 ignition[826]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:16:45.337037 ignition[826]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 19:16:45.428348 ignition[826]: GET result: OK
Feb 9 19:16:45.428371 ignition[826]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Feb 9 19:16:45.540620 ignition[826]: opening config device: "/dev/sr0"
Feb 9 19:16:45.541057 ignition[826]: getting drive status for "/dev/sr0"
Feb 9 19:16:45.541102 ignition[826]: drive status: OK
Feb 9 19:16:45.541145 ignition[826]: mounting config device
Feb 9 19:16:45.541158 ignition[826]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure3523516760"
Feb 9 19:16:45.569385 ignition[826]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure3523516760"
Feb 9 19:16:45.574071 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/10 00:00 (1000)
Feb 9 19:16:45.574125 ignition[826]: checking for config drive
Feb 9 19:16:45.574534 ignition[826]: reading config
Feb 9 19:16:45.574870 ignition[826]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure3523516760"
Feb 9 19:16:45.578997 ignition[826]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure3523516760"
Feb 9 19:16:45.576843 systemd[1]: tmp-ignition\x2dazure3523516760.mount: Deactivated successfully.
Feb 9 19:16:45.579014 ignition[826]: config has been read from custom data
Feb 9 19:16:45.579071 ignition[826]: parsing config with SHA512: c752bdc715dc22372b582472722a477e4f002272a890ccc28f2a84b3d0061919b1b869352f0927c44adf0436965fce68f1526f90f5fc28a56eb02c0b8ce231fc
Feb 9 19:16:45.628904 unknown[826]: fetched base config from "system"
Feb 9 19:16:45.628919 unknown[826]: fetched base config from "system"
Feb 9 19:16:45.628927 unknown[826]: fetched user config from "azure"
Feb 9 19:16:45.633725 ignition[826]: fetch: fetch complete
Feb 9 19:16:45.668985 kernel: audit: type=1130 audit(1707506205.641:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.636079 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:16:45.633731 ignition[826]: fetch: fetch passed
Feb 9 19:16:45.644134 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:16:45.633779 ignition[826]: Ignition finished successfully
Feb 9 19:16:45.685435 ignition[834]: Ignition 2.14.0
Feb 9 19:16:45.685446 ignition[834]: Stage: kargs
Feb 9 19:16:45.685634 ignition[834]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:45.685694 ignition[834]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:45.696863 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:45.703060 ignition[834]: kargs: kargs passed
Feb 9 19:16:45.703134 ignition[834]: Ignition finished successfully
Feb 9 19:16:45.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.708074 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:16:45.738438 kernel: audit: type=1130 audit(1707506205.710:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.727694 ignition[840]: Ignition 2.14.0
Feb 9 19:16:45.714442 systemd[1]: Starting ignition-disks.service...
Feb 9 19:16:45.727701 ignition[840]: Stage: disks
Feb 9 19:16:45.727853 ignition[840]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:45.727895 ignition[840]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:45.752670 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:45.757250 ignition[840]: disks: disks passed
Feb 9 19:16:45.757323 ignition[840]: Ignition finished successfully
Feb 9 19:16:45.759595 systemd[1]: Finished ignition-disks.service.
Feb 9 19:16:45.787693 kernel: audit: type=1130 audit(1707506205.762:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.764598 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:16:45.769089 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:16:45.781213 systemd[1]: Reached target local-fs.target.
Feb 9 19:16:45.787677 systemd[1]: Reached target sysinit.target.
Feb 9 19:16:45.789706 systemd[1]: Reached target basic.target.
Feb 9 19:16:45.792672 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:16:45.853433 systemd-fsck[848]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 9 19:16:45.862111 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:16:45.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:45.869070 systemd[1]: Mounting sysroot.mount...
Feb 9 19:16:45.894623 systemd[1]: Mounted sysroot.mount.
Feb 9 19:16:45.901592 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:16:45.898621 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:16:45.931696 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:16:45.936503 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 19:16:45.941052 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:16:45.941102 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:16:45.957702 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:16:45.997552 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:16:46.004771 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:16:46.024673 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (858)
Feb 9 19:16:46.034866 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:16:46.034925 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:16:46.034936 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:16:46.039168 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:16:46.048307 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:16:46.063447 initrd-setup-root[889]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:16:46.071878 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:16:46.081172 initrd-setup-root[905]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:16:46.607264 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:16:46.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:46.621490 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:16:46.621532 kernel: audit: type=1130 audit(1707506206.610:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:46.621688 systemd[1]: Starting ignition-mount.service...
Feb 9 19:16:46.647799 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:16:46.655060 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:16:46.655589 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:16:46.682622 ignition[924]: INFO : Ignition 2.14.0
Feb 9 19:16:46.686282 ignition[924]: INFO : Stage: mount
Feb 9 19:16:46.688565 ignition[924]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:46.688565 ignition[924]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:46.705496 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:46.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:46.726936 ignition[924]: INFO : mount: mount passed
Feb 9 19:16:46.726936 ignition[924]: INFO : Ignition finished successfully
Feb 9 19:16:46.736109 kernel: audit: type=1130 audit(1707506206.713:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:46.709570 systemd[1]: Finished ignition-mount.service.
Feb 9 19:16:46.743526 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:16:46.764193 kernel: audit: type=1130 audit(1707506206.745:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:46.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:47.661168 coreos-metadata[857]: Feb 09 19:16:47.661 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 19:16:47.677968 coreos-metadata[857]: Feb 09 19:16:47.677 INFO Fetch successful
Feb 9 19:16:47.715304 coreos-metadata[857]: Feb 09 19:16:47.715 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 19:16:47.731201 coreos-metadata[857]: Feb 09 19:16:47.731 INFO Fetch successful
Feb 9 19:16:47.749874 coreos-metadata[857]: Feb 09 19:16:47.749 INFO wrote hostname ci-3510.3.2-a-19528a6d7a to /sysroot/etc/hostname
Feb 9 19:16:47.756648 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 19:16:47.781800 kernel: audit: type=1130 audit(1707506207.759:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:47.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:47.764101 systemd[1]: Starting ignition-files.service...
Feb 9 19:16:47.787393 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:16:47.802670 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (936)
Feb 9 19:16:47.812543 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:16:47.812584 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:16:47.812596 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:16:47.822103 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:16:47.838509 ignition[955]: INFO : Ignition 2.14.0
Feb 9 19:16:47.838509 ignition[955]: INFO : Stage: files
Feb 9 19:16:47.844084 ignition[955]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:47.844084 ignition[955]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:47.858977 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:47.878578 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:16:47.883363 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:16:47.883363 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:16:47.933197 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:16:47.938249 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:16:47.950354 unknown[955]: wrote ssh authorized keys file for user: core
Feb 9 19:16:47.953743 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:16:47.953743 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:16:47.953743 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:16:48.668861 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:16:48.816949 ignition[955]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:16:48.824877 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:16:48.824877 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:16:48.824877 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:16:49.030220 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:16:49.157815 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:16:49.163709 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:16:49.163709 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:16:49.686453 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:16:49.904489 ignition[955]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:16:49.913900 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:16:49.913900 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:16:49.913900 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:16:50.137856 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:16:50.444695 ignition[955]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 9 19:16:50.453737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:16:50.453737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:16:50.453737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:16:50.599733 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:16:50.899012 ignition[955]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 19:16:50.908313 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:16:50.908313 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:16:50.908313 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:16:51.038897 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:16:51.530281 ignition[955]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 19:16:51.538766 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:16:51.538766 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:16:51.538766 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:16:51.538766 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:16:51.538766 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 19:16:52.036430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 19:16:52.121324 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:16:52.128874 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:16:52.134595 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:16:52.134595 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:16:52.145913 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:16:52.145913 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:16:52.157227 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:16:52.157227 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:16:52.168044 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:16:52.168044 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:16:52.178786 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:16:52.184529 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:16:52.190308 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:16:52.210280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2196634842"
Feb 9 19:16:52.210280 ignition[955]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2196634842": device or resource busy
Feb 9 19:16:52.210280 ignition[955]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2196634842", trying btrfs: device or resource busy
Feb 9 19:16:52.210280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2196634842"
Feb 9 19:16:52.237728 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (958)
Feb 9 19:16:52.239308 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2196634842"
Feb 9 19:16:52.239308 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2196634842"
Feb 9 19:16:52.251135 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2196634842"
Feb 9 19:16:52.251135 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:16:52.251135 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:16:52.251135 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:16:52.270457 systemd[1]: mnt-oem2196634842.mount: Deactivated successfully.
Feb 9 19:16:52.285619 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4275698445"
Feb 9 19:16:52.296521 ignition[955]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4275698445": device or resource busy
Feb 9 19:16:52.296521 ignition[955]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4275698445", trying btrfs: device or resource busy
Feb 9 19:16:52.296521 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4275698445"
Feb 9 19:16:52.296521 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4275698445"
Feb 9 19:16:52.296521 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem4275698445"
Feb 9 19:16:52.384004 kernel: audit: type=1130 audit(1707506212.322:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.295581 systemd[1]: mnt-oem4275698445.mount: Deactivated successfully.
Feb 9 19:16:52.393903 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem4275698445"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1c): [started] processing unit "prepare-critools.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1e): [started] processing unit "prepare-helm.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:16:52.393903 ignition[955]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:16:52.523694 kernel: audit: type=1130 audit(1707506212.393:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.523747 kernel: audit: type=1130 audit(1707506212.421:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.523765 kernel: audit: type=1131 audit(1707506212.421:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.318454 systemd[1]: Finished ignition-files.service.
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(23): [started] setting preset to enabled for "waagent.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(23): [finished] setting preset to enabled for "waagent.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(24): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: op(24): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:16:52.526136 ignition[955]: INFO : files: files passed
Feb 9 19:16:52.526136 ignition[955]: INFO : Ignition finished successfully
Feb 9 19:16:52.614740 kernel: audit: type=1130 audit(1707506212.539:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.614788 kernel: audit: type=1131 audit(1707506212.539:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.614815 kernel: audit: type=1130 audit(1707506212.593:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.325424 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:16:52.617382 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:16:52.368956 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:16:52.369812 systemd[1]: Starting ignition-quench.service...
Feb 9 19:16:52.380697 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:16:52.396647 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:16:52.398397 systemd[1]: Finished ignition-quench.service.
Feb 9 19:16:52.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.437623 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:16:52.660814 kernel: audit: type=1131 audit(1707506212.644:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.520939 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:16:52.537530 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:16:52.537639 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:16:52.540142 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:16:52.569660 systemd[1]: Reached target initrd.target.
Feb 9 19:16:52.574850 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:16:52.575847 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:16:52.590711 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:16:52.607914 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:16:52.619787 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:16:52.628097 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:16:52.633811 systemd[1]: Stopped target timers.target.
Feb 9 19:16:52.639005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:16:52.639418 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:16:52.660922 systemd[1]: Stopped target initrd.target.
Feb 9 19:16:52.668492 systemd[1]: Stopped target basic.target.
Feb 9 19:16:52.684154 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:16:52.689412 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:16:52.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.695731 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:16:52.758084 kernel: audit: type=1131 audit(1707506212.736:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.701373 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:16:52.705064 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:16:52.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.710580 systemd[1]: Stopped target sysinit.target.
Feb 9 19:16:52.783303 kernel: audit: type=1131 audit(1707506212.764:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.717168 systemd[1]: Stopped target local-fs.target.
Feb 9 19:16:52.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.722029 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:16:52.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.727085 systemd[1]: Stopped target swap.target.
Feb 9 19:16:52.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.730813 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:16:52.732893 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:16:52.750849 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:16:52.758176 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:16:52.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.759868 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:16:52.818577 ignition[993]: INFO : Ignition 2.14.0
Feb 9 19:16:52.818577 ignition[993]: INFO : Stage: umount
Feb 9 19:16:52.818577 ignition[993]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:52.818577 ignition[993]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:16:52.818577 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:16:52.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.777704 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:16:52.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.850785 ignition[993]: INFO : umount: umount passed
Feb 9 19:16:52.850785 ignition[993]: INFO : Ignition finished successfully
Feb 9 19:16:52.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.777928 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:16:52.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.783479 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:16:52.783613 systemd[1]: Stopped ignition-files.service.
Feb 9 19:16:52.787259 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:16:52.787393 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:16:52.792515 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:16:52.804065 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:16:52.805893 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:16:52.806083 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:16:52.808727 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:16:52.808935 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:16:52.821151 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:16:52.821291 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:16:52.837214 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:16:52.837313 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:16:52.845314 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:16:52.845364 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:16:52.857666 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:16:52.857748 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:16:52.863092 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:16:52.863161 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:16:52.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.909906 systemd[1]: Stopped target network.target.
Feb 9 19:16:52.912435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:16:52.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.912504 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:16:52.917803 systemd[1]: Stopped target paths.target.
Feb 9 19:16:52.923833 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:16:52.928733 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:16:52.931499 systemd[1]: Stopped target slices.target.
Feb 9 19:16:52.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.936818 systemd[1]: Stopped target sockets.target.
Feb 9 19:16:52.937853 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:16:52.937888 systemd[1]: Closed iscsid.socket.
Feb 9 19:16:52.938415 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:16:52.938438 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:16:52.944383 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:16:52.944447 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:16:52.948646 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:16:52.966241 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:16:52.970823 systemd-networkd[796]: eth0: DHCPv6 lease lost
Feb 9 19:16:52.975866 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:16:52.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.976386 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:16:52.976480 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:16:52.984386 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:16:52.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.984494 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:16:52.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.990918 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:16:52.994000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:16:52.994000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:16:52.991013 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:16:53.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.994878 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:16:53.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.994915 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:16:53.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.999197 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:16:52.999252 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:16:53.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.002136 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:16:53.004825 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:16:53.004887 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:16:53.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.009496 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:16:53.009550 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:16:53.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.013508 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:16:53.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.014392 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:16:53.023169 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:16:53.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.029976 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:16:53.030164 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:16:53.033081 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:16:53.033132 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:16:53.038641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:16:53.038702 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:16:53.044222 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:16:53.044276 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:16:53.046947 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:16:53.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.046986 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:16:53.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.053959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:16:53.054012 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:16:53.073598 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:16:53.083338 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:16:53.112908 kernel: hv_netvsc 002248a2-8454-0022-48a2-8454002248a2 eth0: Data path switched from VF: enP28076s1
Feb 9 19:16:53.083408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:16:53.086198 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:16:53.086240 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:16:53.090326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:16:53.090378 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:16:53.093060 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:16:53.093160 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:16:53.134332 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:16:53.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:53.134436 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:16:53.139131 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:16:53.146891 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:16:53.164744 systemd[1]: Switching root.
Feb 9 19:16:53.193853 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:16:53.193958 iscsid[805]: iscsid shutting down.
Feb 9 19:16:53.196582 systemd-journald[183]: Journal stopped
Feb 9 19:17:09.214079 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:17:09.214103 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:17:09.214114 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:17:09.214122 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:17:09.214130 kernel: SELinux: policy capability open_perms=1
Feb 9 19:17:09.214138 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:17:09.214147 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:17:09.214157 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:17:09.214165 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:17:09.214173 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:17:09.214181 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:17:09.214190 systemd[1]: Successfully loaded SELinux policy in 291.152ms.
Feb 9 19:17:09.214200 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.185ms.
Feb 9 19:17:09.214210 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:17:09.214222 systemd[1]: Detected virtualization microsoft.
Feb 9 19:17:09.214231 systemd[1]: Detected architecture x86-64.
Feb 9 19:17:09.214241 systemd[1]: Detected first boot.
Feb 9 19:17:09.214251 systemd[1]: Hostname set to .
Feb 9 19:17:09.214260 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:17:09.214271 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:17:09.214279 kernel: kauditd_printk_skb: 40 callbacks suppressed
Feb 9 19:17:09.214289 kernel: audit: type=1400 audit(1707506218.230:88): avc: denied { associate } for pid=1026 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:17:09.214299 kernel: audit: type=1300 audit(1707506218.230:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1009 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:09.214308 kernel: audit: type=1327 audit(1707506218.230:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:17:09.214319 kernel: audit: type=1400 audit(1707506218.240:89): avc: denied { associate } for pid=1026 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:17:09.214328 kernel: audit: type=1300 audit(1707506218.240:89): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1009 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:09.214337 kernel: audit: type=1307 audit(1707506218.240:89): cwd="/"
Feb 9 19:17:09.214346 kernel: audit: type=1302 audit(1707506218.240:89): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:09.214355 kernel: audit: type=1302 audit(1707506218.240:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:09.214364 kernel: audit: type=1327 audit(1707506218.240:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:17:09.214375 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:17:09.214384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:17:09.214394 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:17:09.214404 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:17:09.214413 kernel: audit: type=1334 audit(1707506228.563:90): prog-id=12 op=LOAD
Feb 9 19:17:09.214421 kernel: audit: type=1334 audit(1707506228.563:91): prog-id=3 op=UNLOAD
Feb 9 19:17:09.214430 kernel: audit: type=1334 audit(1707506228.569:92): prog-id=13 op=LOAD
Feb 9 19:17:09.214439 kernel: audit: type=1334 audit(1707506228.575:93): prog-id=14 op=LOAD
Feb 9 19:17:09.214450 kernel: audit: type=1334 audit(1707506228.575:94): prog-id=4 op=UNLOAD
Feb 9 19:17:09.214458 kernel: audit: type=1334 audit(1707506228.575:95): prog-id=5 op=UNLOAD
Feb 9 19:17:09.214470 kernel: audit: type=1334 audit(1707506228.582:96): prog-id=15 op=LOAD
Feb 9 19:17:09.214479 kernel: audit: type=1334 audit(1707506228.582:97): prog-id=12 op=UNLOAD
Feb 9 19:17:09.214487 kernel: audit: type=1334 audit(1707506228.588:98): prog-id=16 op=LOAD
Feb 9 19:17:09.214496 kernel: audit: type=1334 audit(1707506228.611:99): prog-id=17 op=LOAD
Feb 9 19:17:09.214505 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:17:09.214514 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:17:09.214526 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:17:09.214536 systemd[1]: Stopped iscsid.service.
Feb 9 19:17:09.214545 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:17:09.214555 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:17:09.214564 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:17:09.214574 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:17:09.214583 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:17:09.214592 systemd[1]: Created slice system-getty.slice.
Feb 9 19:17:09.214601 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:17:09.214613 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:17:09.214622 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:17:09.214632 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:17:09.214642 systemd[1]: Created slice user.slice.
Feb 9 19:17:09.222914 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:17:09.222942 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:17:09.222955 systemd[1]: Set up automount boot.automount.
Feb 9 19:17:09.222965 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:17:09.222981 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:17:09.222991 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:17:09.223001 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:17:09.223011 systemd[1]: Reached target integritysetup.target.
Feb 9 19:17:09.223020 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:17:09.223030 systemd[1]: Reached target remote-fs.target.
Feb 9 19:17:09.223039 systemd[1]: Reached target slices.target.
Feb 9 19:17:09.223049 systemd[1]: Reached target swap.target.
Feb 9 19:17:09.223060 systemd[1]: Reached target torcx.target.
Feb 9 19:17:09.223071 systemd[1]: Reached target veritysetup.target.
Feb 9 19:17:09.223081 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:17:09.223090 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:17:09.223100 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:17:09.223112 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:17:09.223121 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:17:09.223131 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:17:09.223141 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:17:09.223151 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:17:09.223161 systemd[1]: Mounting media.mount...
Feb 9 19:17:09.223171 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:17:09.223180 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:17:09.223190 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:17:09.223202 systemd[1]: Mounting tmp.mount...
Feb 9 19:17:09.223212 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:17:09.223222 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:17:09.223231 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:17:09.223241 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:17:09.223250 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:17:09.223260 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:17:09.223269 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:17:09.223279 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:17:09.223290 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:17:09.223300 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:17:09.223310 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:17:09.223320 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:17:09.223329 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:17:09.223339 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:17:09.223349 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:17:09.223358 systemd[1]: Starting systemd-journald.service...
Feb 9 19:17:09.223370 kernel: loop: module loaded
Feb 9 19:17:09.223380 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:17:09.223389 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:17:09.223399 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:17:09.223409 kernel: fuse: init (API version 7.34)
Feb 9 19:17:09.223418 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:17:09.223427 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:17:09.223437 systemd[1]: Stopped verity-setup.service.
Feb 9 19:17:09.223446 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:17:09.223458 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:17:09.223468 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:17:09.223477 systemd[1]: Mounted media.mount.
Feb 9 19:17:09.223487 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:17:09.223496 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:17:09.223506 systemd[1]: Mounted tmp.mount.
Feb 9 19:17:09.223522 systemd-journald[1119]: Journal started
Feb 9 19:17:09.223580 systemd-journald[1119]: Runtime Journal (/run/log/journal/2a91fb2051544598b1c08b2a9b96eff0) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:16:55.768000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:16:56.567000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:16:56.582000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:16:56.582000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:16:56.582000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:16:56.582000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:16:56.582000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:16:56.582000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:16:58.230000 audit[1026]: AVC avc: denied { associate } for pid=1026 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:16:58.230000 audit[1026]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1009 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:58.230000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:16:58.240000 audit[1026]: AVC avc: denied { associate } for pid=1026 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:16:58.240000 audit[1026]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1009 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:58.240000 audit: CWD cwd="/"
Feb 9 19:16:58.240000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:16:58.240000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:16:58.240000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:17:08.563000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:17:08.563000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:17:08.569000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:17:08.575000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:17:08.575000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:17:08.575000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:17:08.582000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:17:08.582000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:17:08.588000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:17:08.611000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:17:08.612000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:17:08.612000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:17:08.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:08.636000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 19:17:08.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:08.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:08.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:08.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.082000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:17:09.083000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:17:09.083000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:17:09.083000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 19:17:09.083000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 19:17:09.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.205000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:17:09.205000 audit[1119]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd04b88860 a2=4000 a3=7ffd04b888fc items=0 ppid=1 pid=1119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:09.205000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:16:58.214574 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:17:08.561944 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:16:58.215071 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:17:08.617247 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:16:58.215093 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:16:58.215132 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:16:58.215142 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:16:58.215183 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:16:58.215196 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:16:58.215399 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:16:58.215445 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:16:58.215458 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:16:58.215849 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:16:58.215882 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:16:58.215901 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:16:58.215914 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:17:09.232781 systemd[1]: Started systemd-journald.service.
Feb 9 19:16:58.215930 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:16:58.215943 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:16:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:17:07.338858 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:17:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:17:07.339112 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:17:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:17:07.339239 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:17:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:17:07.339406 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:17:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:17:07.339455 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:17:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:17:07.339509 /usr/lib/systemd/system-generators/torcx-generator[1026]: time="2024-02-09T19:17:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:17:09.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.237933 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:17:09.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.240728 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:17:09.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.243915 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:17:09.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.244069 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:17:09.246994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:17:09.247156 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:17:09.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.250488 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:17:09.250765 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:17:09.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.255227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:17:09.255468 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:17:09.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.258635 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:17:09.258962 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:17:09.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.261964 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:17:09.262103 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:17:09.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.265017 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:17:09.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.268170 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:17:09.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.272028 systemd[1]: Reached target network-pre.target.
Feb 9 19:17:09.275680 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:17:09.282912 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:17:09.285144 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:17:09.300805 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:17:09.305967 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:17:09.313318 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:17:09.314609 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:17:09.319770 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:17:09.321604 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:17:09.329098 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:17:09.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.333555 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:17:09.336559 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:17:09.342549 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:17:09.373047 systemd-journald[1119]: Time spent on flushing to /var/log/journal/2a91fb2051544598b1c08b2a9b96eff0 is 51.878ms for 1217 entries.
Feb 9 19:17:09.373047 systemd-journald[1119]: System Journal (/var/log/journal/2a91fb2051544598b1c08b2a9b96eff0) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:17:09.493867 systemd-journald[1119]: Received client request to flush runtime journal.
Feb 9 19:17:09.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.397927 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:17:09.495100 udevadm[1151]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:17:09.401127 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:17:09.442570 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:17:09.447464 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:17:09.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:09.454103 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:17:09.495097 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:17:10.210882 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:17:10.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:10.218589 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:17:10.612217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:17:10.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:11.035976 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:17:11.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:11.042000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:17:11.042000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:17:11.042000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:17:11.042000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:17:11.043642 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:17:11.064694 systemd-udevd[1155]: Using default interface naming scheme 'v252'.
Feb 9 19:17:11.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:11.414000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:17:11.409271 systemd[1]: Started systemd-udevd.service.
Feb 9 19:17:11.417950 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:17:11.450900 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:17:11.538807 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:17:11.538942 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:17:11.563541 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:17:11.563728 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:17:11.563772 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:17:11.556000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:17:11.562000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:17:11.563000 audit: BPF prog-id=26 op=LOAD
Feb 9 19:17:11.555668 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:17:11.610277 systemd-journald[1119]: Time jumped backwards, rotating.
Feb 9 19:17:11.610384 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:17:11.610406 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:17:11.610425 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:17:11.610444 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:17:11.526000 audit[1157]: AVC avc: denied { confidentiality } for pid=1157 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:17:11.622576 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:17:11.621971 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:17:11.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:11.630462 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:17:11.630542 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:17:11.638339 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:17:11.526000 audit[1157]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ccda51d840 a1=f884 a2=7fecaf46abc5 a3=5 items=12 ppid=1155 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:11.526000 audit: CWD cwd="/"
Feb 9 19:17:11.526000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=1 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=2 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=3 name=(null) inode=15362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=4 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=5 name=(null) inode=15363 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=6 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=7 name=(null) inode=15364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=8 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=9 name=(null) inode=15365 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=10 name=(null) inode=15361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PATH item=11 name=(null) inode=15366 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:17:11.526000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:17:11.751256 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1185)
Feb 9 19:17:11.809463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:17:11.928259 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:17:11.990782 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:17:11.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:11.996184 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:17:12.067404 systemd-networkd[1162]: lo: Link UP
Feb 9 19:17:12.067414 systemd-networkd[1162]: lo: Gained carrier
Feb 9 19:17:12.068040 systemd-networkd[1162]: Enumeration completed
Feb 9 19:17:12.068178 systemd[1]: Started systemd-networkd.service.
Feb 9 19:17:12.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:12.073490 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:17:12.108714 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:17:12.164264 kernel: mlx5_core 6dac:00:02.0 enP28076s1: Link up
Feb 9 19:17:12.203269 kernel: hv_netvsc 002248a2-8454-0022-48a2-8454002248a2 eth0: Data path switched to VF: enP28076s1
Feb 9 19:17:12.204254 systemd-networkd[1162]: enP28076s1: Link UP
Feb 9 19:17:12.204408 systemd-networkd[1162]: eth0: Link UP
Feb 9 19:17:12.204413 systemd-networkd[1162]: eth0: Gained carrier
Feb 9 19:17:12.208892 systemd-networkd[1162]: enP28076s1: Gained carrier
Feb 9 19:17:12.234399 systemd-networkd[1162]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:17:12.381203 lvm[1232]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:17:12.410464 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:17:12.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:12.413510 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:17:12.417772 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:17:12.422789 lvm[1234]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:17:12.466793 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:17:12.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:12.470397 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:17:12.473087 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:17:12.473125 systemd[1]: Reached target local-fs.target.
Feb 9 19:17:12.475993 systemd[1]: Reached target machines.target.
Feb 9 19:17:12.480118 systemd[1]: Starting ldconfig.service...
Feb 9 19:17:12.482755 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:17:12.482868 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:17:12.484166 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:17:12.488290 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:17:12.493487 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:17:12.496900 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:17:12.497275 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:17:12.498639 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:17:12.531147 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:17:12.546747 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:17:12.814200 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1236 (bootctl)
Feb 9 19:17:12.815888 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:17:13.066654 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:17:13.166483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:17:13.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:14.221510 systemd-networkd[1162]: eth0: Gained IPv6LL
Feb 9 19:17:14.227162 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:17:14.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:14.233442 kernel: kauditd_printk_skb: 77 callbacks suppressed
Feb 9 19:17:14.233503 kernel: audit: type=1130 audit(1707506234.229:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.017091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:17:15.017871 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:17:15.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.035302 kernel: audit: type=1130 audit(1707506235.018:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.561268 systemd-fsck[1244]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:17:15.561268 systemd-fsck[1244]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:17:15.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.572730 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:17:15.580529 systemd[1]: Mounting boot.mount...
Feb 9 19:17:15.597357 kernel: audit: type=1130 audit(1707506235.575:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.605053 systemd[1]: Mounted boot.mount.
Feb 9 19:17:15.621833 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:17:15.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.639336 kernel: audit: type=1130 audit(1707506235.623:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.798733 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:17:15.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.804317 systemd[1]: Starting audit-rules.service...
Feb 9 19:17:15.820572 kernel: audit: type=1130 audit(1707506235.802:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.823436 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:17:15.828310 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:17:15.844909 kernel: audit: type=1334 audit(1707506235.831:165): prog-id=27 op=LOAD
Feb 9 19:17:15.831000 audit: BPF prog-id=27 op=LOAD
Feb 9 19:17:15.841860 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:17:15.855812 kernel: audit: type=1334 audit(1707506235.844:166): prog-id=28 op=LOAD
Feb 9 19:17:15.844000 audit: BPF prog-id=28 op=LOAD
Feb 9 19:17:15.847628 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:17:15.860659 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:17:15.887000 audit[1262]: SYSTEM_BOOT pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.894289 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:17:15.920941 kernel: audit: type=1127 audit(1707506235.887:167): pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.946960 kernel: audit: type=1130 audit(1707506235.919:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.925995 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:17:15.952641 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:17:15.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:15.980270 kernel: audit: type=1130 audit(1707506235.951:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:16.012085 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:17:16.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:16.015004 systemd[1]: Reached target time-set.target.
Feb 9 19:17:16.020724 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:17:16.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:17:16.125049 systemd-resolved[1255]: Positive Trust Anchors:
Feb 9 19:17:16.125571 systemd-resolved[1255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:17:16.125704 systemd-resolved[1255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:17:16.244000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:17:16.244000 audit[1272]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb2bae710 a2=420 a3=0 items=0 ppid=1251 pid=1272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:17:16.244000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:17:16.245851 augenrules[1272]: No rules
Feb 9 19:17:16.246529 systemd[1]: Finished audit-rules.service.
Feb 9 19:17:16.271125 systemd-resolved[1255]: Using system hostname 'ci-3510.3.2-a-19528a6d7a'.
Feb 9 19:17:16.273365 systemd[1]: Started systemd-resolved.service.
Feb 9 19:17:16.276273 systemd[1]: Reached target network.target.
Feb 9 19:17:16.278825 systemd[1]: Reached target network-online.target.
Feb 9 19:17:16.281537 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:17:16.378336 systemd-timesyncd[1256]: Contacted time server 85.91.1.164:123 (0.flatcar.pool.ntp.org).
Feb 9 19:17:16.378428 systemd-timesyncd[1256]: Initial clock synchronization to Fri 2024-02-09 19:17:16.379248 UTC.
Feb 9 19:17:23.154731 ldconfig[1235]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:17:23.166448 systemd[1]: Finished ldconfig.service.
Feb 9 19:17:23.170958 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:17:23.180545 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:17:23.183861 systemd[1]: Reached target sysinit.target.
Feb 9 19:17:23.186574 systemd[1]: Started motdgen.path.
Feb 9 19:17:23.188723 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:17:23.192658 systemd[1]: Started logrotate.timer.
Feb 9 19:17:23.195143 systemd[1]: Started mdadm.timer.
Feb 9 19:17:23.197444 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:17:23.200211 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:17:23.200291 systemd[1]: Reached target paths.target.
Feb 9 19:17:23.202542 systemd[1]: Reached target timers.target.
Feb 9 19:17:23.205579 systemd[1]: Listening on dbus.socket.
Feb 9 19:17:23.209091 systemd[1]: Starting docker.socket...
Feb 9 19:17:23.228655 systemd[1]: Listening on sshd.socket.
Feb 9 19:17:23.231537 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:17:23.232101 systemd[1]: Listening on docker.socket.
Feb 9 19:17:23.234698 systemd[1]: Reached target sockets.target.
Feb 9 19:17:23.237336 systemd[1]: Reached target basic.target.
Feb 9 19:17:23.239892 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:17:23.239930 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:17:23.241213 systemd[1]: Starting containerd.service...
Feb 9 19:17:23.245186 systemd[1]: Starting dbus.service...
Feb 9 19:17:23.249389 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:17:23.253385 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:17:23.256079 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:17:23.257648 systemd[1]: Starting motdgen.service...
Feb 9 19:17:23.264410 systemd[1]: Started nvidia.service.
Feb 9 19:17:23.268849 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:17:23.276513 systemd[1]: Starting prepare-critools.service...
Feb 9 19:17:23.280265 systemd[1]: Starting prepare-helm.service...
Feb 9 19:17:23.283772 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:17:23.288128 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:17:23.293069 systemd[1]: Starting systemd-logind.service...
Feb 9 19:17:23.297897 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:17:23.297983 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:17:23.298518 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 19:17:23.299684 systemd[1]: Starting update-engine.service...
Feb 9 19:17:23.303901 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:17:23.321968 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:17:23.322202 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:17:23.381655 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:17:23.381870 systemd[1]: Finished motdgen.service.
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda1
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda2
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda3
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found usr
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda4
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda6
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda7
Feb 9 19:17:23.396065 extend-filesystems[1283]: Found sda9
Feb 9 19:17:23.396065 extend-filesystems[1283]: Checking size of /dev/sda9
Feb 9 19:17:23.433359 jq[1282]: false
Feb 9 19:17:23.433585 jq[1295]: true
Feb 9 19:17:23.419266 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:17:23.419481 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:17:23.441173 jq[1313]: true
Feb 9 19:17:23.474150 env[1308]: time="2024-02-09T19:17:23.474090031Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:17:23.487499 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:17:23.488448 systemd-logind[1293]: New seat seat0.
Feb 9 19:17:23.500212 extend-filesystems[1283]: Old size kept for /dev/sda9
Feb 9 19:17:23.503323 extend-filesystems[1283]: Found sr0
Feb 9 19:17:23.513900 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:17:23.514572 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:17:23.547390 env[1308]: time="2024-02-09T19:17:23.547296456Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:17:23.547559 env[1308]: time="2024-02-09T19:17:23.547532367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:17:23.555485 tar[1297]: ./
Feb 9 19:17:23.555485 tar[1297]: ./loopback
Feb 9 19:17:23.557212 tar[1298]: crictl
Feb 9 19:17:23.562100 tar[1299]: linux-amd64/helm
Feb 9 19:17:23.566206 env[1308]: time="2024-02-09T19:17:23.566152313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:17:23.566206 env[1308]: time="2024-02-09T19:17:23.566207015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:17:23.566566 env[1308]: time="2024-02-09T19:17:23.566535830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:17:23.566643 env[1308]: time="2024-02-09T19:17:23.566568232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:17:23.566643 env[1308]: time="2024-02-09T19:17:23.566586633Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:17:23.566643 env[1308]: time="2024-02-09T19:17:23.566601933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:17:23.566927 env[1308]: time="2024-02-09T19:17:23.566901047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:17:23.567271 env[1308]: time="2024-02-09T19:17:23.567244062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:17:23.567491 env[1308]: time="2024-02-09T19:17:23.567461972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:17:23.567553 env[1308]: time="2024-02-09T19:17:23.567493374Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:17:23.567595 env[1308]: time="2024-02-09T19:17:23.567563577Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:17:23.567595 env[1308]: time="2024-02-09T19:17:23.567581178Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.581926929Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.581969431Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.581991032Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582042735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582062035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582081836Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582145439Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582166640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582185441Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582204542Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582223343Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582250244Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582382750Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:17:23.582664 env[1308]: time="2024-02-09T19:17:23.582476254Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:17:23.583173 env[1308]: time="2024-02-09T19:17:23.582915974Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:17:23.583173 env[1308]: time="2024-02-09T19:17:23.582959576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583173 env[1308]: time="2024-02-09T19:17:23.582990778Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:17:23.583173 env[1308]: time="2024-02-09T19:17:23.583064181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583173 env[1308]: time="2024-02-09T19:17:23.583084482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583173 env[1308]: time="2024-02-09T19:17:23.583102383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583182886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583201887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583242489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583263190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583280991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583303192Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583455799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583478300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583495401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583512801Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583532202Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583548103Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583573404Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:17:23.583987 env[1308]: time="2024-02-09T19:17:23.583625407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:17:23.584472 env[1308]: time="2024-02-09T19:17:23.583993423Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:17:23.584472 env[1308]: time="2024-02-09T19:17:23.584073927Z" level=info msg="Connect containerd service"
Feb 9 19:17:23.584472 env[1308]: time="2024-02-09T19:17:23.584121229Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.584931866Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585063572Z" level=info msg="Start subscribing containerd event"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585128375Z" level=info msg="Start recovering state"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585253880Z" level=info msg="Start event monitor"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585268181Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585271081Z" level=info msg="Start snapshots syncer"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585301883Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585329384Z" level=info msg="Start streaming server"
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585331484Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:17:23.629482 env[1308]: time="2024-02-09T19:17:23.585389187Z" level=info msg="containerd successfully booted in 0.112191s"
Feb 9 19:17:23.629818 bash[1333]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:17:23.585489 systemd[1]: Started containerd.service.
Feb 9 19:17:23.612819 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:17:23.653751 dbus-daemon[1281]: [system] SELinux support is enabled
Feb 9 19:17:23.653959 systemd[1]: Started dbus.service.
Feb 9 19:17:23.658959 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:17:23.658987 systemd[1]: Reached target system-config.target.
Feb 9 19:17:23.663440 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:17:23.663465 systemd[1]: Reached target user-config.target.
Feb 9 19:17:23.666945 systemd[1]: Started systemd-logind.service.
Feb 9 19:17:23.670082 dbus-daemon[1281]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 19:17:23.683402 tar[1297]: ./bandwidth
Feb 9 19:17:23.793634 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 19:17:23.812215 tar[1297]: ./ptp
Feb 9 19:17:23.902650 tar[1297]: ./vlan
Feb 9 19:17:23.960943 tar[1297]: ./host-device
Feb 9 19:17:24.009843 tar[1297]: ./tuning
Feb 9 19:17:24.053084 tar[1297]: ./vrf
Feb 9 19:17:24.127346 tar[1297]: ./sbr
Feb 9 19:17:24.205398 tar[1297]: ./tap
Feb 9 19:17:24.299471 tar[1297]: ./dhcp
Feb 9 19:17:24.331699 update_engine[1294]: I0209 19:17:24.313853 1294 main.cc:92] Flatcar Update Engine starting
Feb 9 19:17:24.383291 systemd[1]: Started update-engine.service.
Feb 9 19:17:24.394124 update_engine[1294]: I0209 19:17:24.383363 1294 update_check_scheduler.cc:74] Next update check in 4m47s
Feb 9 19:17:24.388813 systemd[1]: Started locksmithd.service.
Feb 9 19:17:24.538839 tar[1297]: ./static
Feb 9 19:17:24.582066 tar[1297]: ./firewall
Feb 9 19:17:24.676202 tar[1297]: ./macvlan
Feb 9 19:17:24.765406 tar[1297]: ./dummy
Feb 9 19:17:24.858154 tar[1297]: ./bridge
Feb 9 19:17:24.885809 systemd[1]: Finished prepare-critools.service.
Feb 9 19:17:24.900168 tar[1299]: linux-amd64/LICENSE
Feb 9 19:17:24.900758 tar[1299]: linux-amd64/README.md
Feb 9 19:17:24.909611 systemd[1]: Finished prepare-helm.service.
Feb 9 19:17:24.931596 tar[1297]: ./ipvlan
Feb 9 19:17:24.979561 tar[1297]: ./portmap
Feb 9 19:17:25.024224 tar[1297]: ./host-local
Feb 9 19:17:25.120645 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:17:26.310794 sshd_keygen[1305]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:17:26.331026 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:17:26.335501 systemd[1]: Starting issuegen.service...
Feb 9 19:17:26.339552 systemd[1]: Started waagent.service.
Feb 9 19:17:26.342606 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:17:26.342830 systemd[1]: Finished issuegen.service.
Feb 9 19:17:26.346394 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:17:26.354038 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:17:26.357857 systemd[1]: Started getty@tty1.service.
Feb 9 19:17:26.361577 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:17:26.364009 systemd[1]: Reached target getty.target.
Feb 9 19:17:26.366018 systemd[1]: Reached target multi-user.target.
Feb 9 19:17:26.369472 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:17:26.379670 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:17:26.379797 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:17:26.382504 systemd[1]: Startup finished in 1.156s (firmware) + 29.590s (loader) + 1.185s (kernel) + 18.375s (initrd) + 31.188s (userspace) = 1min 21.495s.
Feb 9 19:17:26.778053 login[1406]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:17:26.779825 login[1407]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:17:26.822760 systemd[1]: Created slice user-500.slice.
Feb 9 19:17:26.824276 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:17:26.827209 systemd-logind[1293]: New session 2 of user core.
Feb 9 19:17:26.830283 systemd-logind[1293]: New session 1 of user core.
Feb 9 19:17:26.865205 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:17:26.867543 systemd[1]: Starting user@500.service...
Feb 9 19:17:26.920677 locksmithd[1384]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:17:26.921901 (systemd)[1410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:17:27.039442 systemd[1410]: Queued start job for default target default.target.
Feb 9 19:17:27.040090 systemd[1410]: Reached target paths.target.
Feb 9 19:17:27.040119 systemd[1410]: Reached target sockets.target.
Feb 9 19:17:27.040135 systemd[1410]: Reached target timers.target.
Feb 9 19:17:27.040150 systemd[1410]: Reached target basic.target.
Feb 9 19:17:27.040295 systemd[1]: Started user@500.service.
Feb 9 19:17:27.041610 systemd[1]: Started session-1.scope.
Feb 9 19:17:27.042498 systemd[1]: Started session-2.scope.
Feb 9 19:17:27.043483 systemd[1410]: Reached target default.target.
Feb 9 19:17:27.043670 systemd[1410]: Startup finished in 115ms.
Feb 9 19:17:33.933341 waagent[1402]: 2024-02-09T19:17:33.933169Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 19:17:33.939871 waagent[1402]: 2024-02-09T19:17:33.939778Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 19:17:33.942771 waagent[1402]: 2024-02-09T19:17:33.942700Z INFO Daemon Daemon Python: 3.9.16
Feb 9 19:17:33.945380 waagent[1402]: 2024-02-09T19:17:33.945308Z INFO Daemon Daemon Run daemon
Feb 9 19:17:33.947786 waagent[1402]: 2024-02-09T19:17:33.947724Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 19:17:33.961148 waagent[1402]: 2024-02-09T19:17:33.961020Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:17:33.968697 waagent[1402]: 2024-02-09T19:17:33.968566Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.969108Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.970200Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.971675Z INFO Daemon Daemon Activate resource disk
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.972962Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.980741Z INFO Daemon Daemon Found device: None
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.981998Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 19:17:33.984687 waagent[1402]: 2024-02-09T19:17:33.982879Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 19:17:34.013392 waagent[1402]: 2024-02-09T19:17:33.984823Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:17:34.013392 waagent[1402]: 2024-02-09T19:17:33.985191Z INFO Daemon Daemon Running default provisioning handler
Feb 9 19:17:34.013392 waagent[1402]: 2024-02-09T19:17:33.995453Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:17:34.013392 waagent[1402]: 2024-02-09T19:17:33.998287Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:17:34.013392 waagent[1402]: 2024-02-09T19:17:33.999425Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:17:34.013392 waagent[1402]: 2024-02-09T19:17:34.000270Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 19:17:34.020909 waagent[1402]: 2024-02-09T19:17:34.020729Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 19:17:34.162307 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 19:17:34.183089 waagent[1402]: 2024-02-09T19:17:34.182947Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 19:17:34.198171 waagent[1402]: 2024-02-09T19:17:34.183564Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:17:34.198171 waagent[1402]: 2024-02-09T19:17:34.184951Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 19:17:34.198171 waagent[1402]: 2024-02-09T19:17:34.185919Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 19:17:34.198171 waagent[1402]: 2024-02-09T19:17:34.187127Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 19:17:34.198171 waagent[1402]: 2024-02-09T19:17:34.187788Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 19:17:34.303328 waagent[1402]: 2024-02-09T19:17:34.303226Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 19:17:34.307812 waagent[1402]: 2024-02-09T19:17:34.307755Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 19:17:34.310812 waagent[1402]: 2024-02-09T19:17:34.310741Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 19:17:35.033920 waagent[1402]: 2024-02-09T19:17:35.033762Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 19:17:35.046855 waagent[1402]: 2024-02-09T19:17:35.046766Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 19:17:35.049923 waagent[1402]: 2024-02-09T19:17:35.049844Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 19:17:35.164711 waagent[1402]: 2024-02-09T19:17:35.164561Z INFO Daemon Daemon Found private key matching thumbprint C76FFEC65B655EFC851FC9FE8AF1829EBBFB089F
Feb 9 19:17:35.177926 waagent[1402]: 2024-02-09T19:17:35.165172Z INFO Daemon Daemon Certificate with thumbprint 4EAAF19B68E80807EA88A63C98DFE6B7E9C2E9AF has no matching private key.
Feb 9 19:17:35.177926 waagent[1402]: 2024-02-09T19:17:35.166436Z INFO Daemon Daemon Fetch goal state completed
Feb 9 19:17:35.218030 waagent[1402]: 2024-02-09T19:17:35.217930Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: f3e1d68c-084d-4e0c-bb64-79ccc2c26202 New eTag: 14481982387543041998]
Feb 9 19:17:35.231171 waagent[1402]: 2024-02-09T19:17:35.221458Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:17:35.236363 waagent[1402]: 2024-02-09T19:17:35.236282Z INFO Daemon Daemon Starting provisioning
Feb 9 19:17:35.239551 waagent[1402]: 2024-02-09T19:17:35.237915Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 19:17:35.250400 waagent[1402]: 2024-02-09T19:17:35.240469Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-19528a6d7a]
Feb 9 19:17:35.274174 waagent[1402]: 2024-02-09T19:17:35.274004Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-19528a6d7a]
Feb 9 19:17:35.281919 waagent[1402]: 2024-02-09T19:17:35.276209Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 19:17:35.282203 waagent[1402]: 2024-02-09T19:17:35.282119Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 19:17:35.298491 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 19:17:35.298755 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 19:17:35.298836 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 19:17:35.302150 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:17:35.307287 systemd-networkd[1162]: eth0: DHCPv6 lease lost
Feb 9 19:17:35.308889 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:17:35.309095 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:17:35.311718 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:17:35.345132 systemd-networkd[1452]: enP28076s1: Link UP
Feb 9 19:17:35.345146 systemd-networkd[1452]: enP28076s1: Gained carrier
Feb 9 19:17:35.346732 systemd-networkd[1452]: eth0: Link UP
Feb 9 19:17:35.346741 systemd-networkd[1452]: eth0: Gained carrier
Feb 9 19:17:35.347174 systemd-networkd[1452]: lo: Link UP
Feb 9 19:17:35.347183 systemd-networkd[1452]: lo: Gained carrier
Feb 9 19:17:35.347564 systemd-networkd[1452]: eth0: Gained IPv6LL
Feb 9 19:17:35.347848 systemd-networkd[1452]: Enumeration completed
Feb 9 19:17:35.347985 systemd[1]: Started systemd-networkd.service.
Feb 9 19:17:35.350794 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:17:35.353676 waagent[1402]: 2024-02-09T19:17:35.351503Z INFO Daemon Daemon Create user account if not exists
Feb 9 19:17:35.356503 waagent[1402]: 2024-02-09T19:17:35.353877Z INFO Daemon Daemon User core already exists, skip useradd
Feb 9 19:17:35.359196 waagent[1402]: 2024-02-09T19:17:35.359092Z INFO Daemon Daemon Configure sudoer
Feb 9 19:17:35.362222 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:17:35.362887 waagent[1402]: 2024-02-09T19:17:35.362822Z INFO Daemon Daemon Configure sshd
Feb 9 19:17:35.367100 waagent[1402]: 2024-02-09T19:17:35.365742Z INFO Daemon Daemon Deploy ssh public key.
Feb 9 19:17:35.396442 waagent[1402]: 2024-02-09T19:17:35.396316Z INFO Daemon Daemon Decode custom data
Feb 9 19:17:35.399498 waagent[1402]: 2024-02-09T19:17:35.399418Z INFO Daemon Daemon Save custom data
Feb 9 19:17:35.405350 systemd-networkd[1452]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:17:35.409363 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:17:36.625225 waagent[1402]: 2024-02-09T19:17:36.625114Z INFO Daemon Daemon Provisioning complete
Feb 9 19:17:36.643122 waagent[1402]: 2024-02-09T19:17:36.643023Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 9 19:17:36.646748 waagent[1402]: 2024-02-09T19:17:36.646671Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 9 19:17:36.652668 waagent[1402]: 2024-02-09T19:17:36.652588Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 9 19:17:36.937610 waagent[1461]: 2024-02-09T19:17:36.937415Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 9 19:17:36.938357 waagent[1461]: 2024-02-09T19:17:36.938290Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:17:36.938516 waagent[1461]: 2024-02-09T19:17:36.938461Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:17:36.950127 waagent[1461]: 2024-02-09T19:17:36.950051Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 9 19:17:36.950302 waagent[1461]: 2024-02-09T19:17:36.950245Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 9 19:17:37.014940 waagent[1461]: 2024-02-09T19:17:37.014788Z INFO ExtHandler ExtHandler Found private key matching thumbprint C76FFEC65B655EFC851FC9FE8AF1829EBBFB089F
Feb 9 19:17:37.015205 waagent[1461]: 2024-02-09T19:17:37.015134Z INFO ExtHandler ExtHandler Certificate with thumbprint 4EAAF19B68E80807EA88A63C98DFE6B7E9C2E9AF has no matching private key.
Feb 9 19:17:37.015480 waagent[1461]: 2024-02-09T19:17:37.015426Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 9 19:17:37.029389 waagent[1461]: 2024-02-09T19:17:37.029329Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 4b6e2430-bc59-4103-be86-6c5d62d8b71e New eTag: 14481982387543041998]
Feb 9 19:17:37.030041 waagent[1461]: 2024-02-09T19:17:37.029974Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:17:37.111820 waagent[1461]: 2024-02-09T19:17:37.111636Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:17:37.138088 waagent[1461]: 2024-02-09T19:17:37.137974Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1461
Feb 9 19:17:37.141677 waagent[1461]: 2024-02-09T19:17:37.141600Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:17:37.142973 waagent[1461]: 2024-02-09T19:17:37.142913Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:17:37.228132 waagent[1461]: 2024-02-09T19:17:37.228002Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:17:37.228677 waagent[1461]: 2024-02-09T19:17:37.228584Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:17:37.238739 waagent[1461]: 2024-02-09T19:17:37.238679Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:17:37.239245 waagent[1461]: 2024-02-09T19:17:37.239175Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:17:37.240356 waagent[1461]: 2024-02-09T19:17:37.240291Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 9 19:17:37.241666 waagent[1461]: 2024-02-09T19:17:37.241606Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:17:37.242301 waagent[1461]: 2024-02-09T19:17:37.242213Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 19:17:37.242497 waagent[1461]: 2024-02-09T19:17:37.242444Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:17:37.243111 waagent[1461]: 2024-02-09T19:17:37.243051Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:17:37.243252 waagent[1461]: 2024-02-09T19:17:37.243166Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:17:37.243481 waagent[1461]: 2024-02-09T19:17:37.243431Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:17:37.243649 waagent[1461]: 2024-02-09T19:17:37.243600Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:17:37.244143 waagent[1461]: 2024-02-09T19:17:37.244089Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:17:37.245663 waagent[1461]: 2024-02-09T19:17:37.245602Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:17:37.246021 waagent[1461]: 2024-02-09T19:17:37.245962Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:17:37.246522 waagent[1461]: 2024-02-09T19:17:37.246465Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:17:37.246737 waagent[1461]: 2024-02-09T19:17:37.246669Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 19:17:37.247158 waagent[1461]: 2024-02-09T19:17:37.247105Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:17:37.247387 waagent[1461]: 2024-02-09T19:17:37.247334Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:17:37.249583 waagent[1461]: 2024-02-09T19:17:37.249531Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:17:37.257407 waagent[1461]: 2024-02-09T19:17:37.257323Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:17:37.257407 waagent[1461]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:17:37.257407 waagent[1461]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:17:37.257407 waagent[1461]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:17:37.257407 waagent[1461]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:17:37.257407 waagent[1461]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:17:37.257407 waagent[1461]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:17:37.275151 waagent[1461]: 2024-02-09T19:17:37.274887Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 9 19:17:37.278430 waagent[1461]: 2024-02-09T19:17:37.277861Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:17:37.280227 waagent[1461]: 2024-02-09T19:17:37.280165Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 9 19:17:37.297255 waagent[1461]: 2024-02-09T19:17:37.297155Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1452'
Feb 9 19:17:37.319249 waagent[1461]: 2024-02-09T19:17:37.319147Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 9 19:17:37.564668 waagent[1461]: 2024-02-09T19:17:37.564472Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:17:37.564668 waagent[1461]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:17:37.564668 waagent[1461]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:17:37.564668 waagent[1461]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a2:84:54 brd ff:ff:ff:ff:ff:ff
Feb 9 19:17:37.564668 waagent[1461]: 3: enP28076s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a2:84:54 brd ff:ff:ff:ff:ff:ff\ altname enP28076p0s2
Feb 9 19:17:37.564668 waagent[1461]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:17:37.564668 waagent[1461]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:17:37.564668 waagent[1461]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:17:37.564668 waagent[1461]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:17:37.564668 waagent[1461]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:17:37.564668 waagent[1461]: 2: eth0 inet6 fe80::222:48ff:fea2:8454/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 19:17:37.629061 waagent[1461]: 2024-02-09T19:17:37.628984Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 9 19:17:38.657557 waagent[1402]: 2024-02-09T19:17:38.657387Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 9 19:17:38.662680 waagent[1402]: 2024-02-09T19:17:38.662615Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 9 19:17:39.771609 waagent[1500]: 2024-02-09T19:17:39.771487Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 9 19:17:39.772393 waagent[1500]: 2024-02-09T19:17:39.772321Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 9 19:17:39.772546 waagent[1500]: 2024-02-09T19:17:39.772490Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 9 19:17:39.782581 waagent[1500]: 2024-02-09T19:17:39.782476Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:17:39.782978 waagent[1500]: 2024-02-09T19:17:39.782920Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:17:39.783144 waagent[1500]: 2024-02-09T19:17:39.783093Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:17:39.795089 waagent[1500]: 2024-02-09T19:17:39.795003Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 9 19:17:39.804947 waagent[1500]: 2024-02-09T19:17:39.804878Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 9 19:17:39.805947 waagent[1500]: 2024-02-09T19:17:39.805878Z INFO ExtHandler
Feb 9 19:17:39.806104 waagent[1500]: 2024-02-09T19:17:39.806047Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2992f369-5d5d-4198-ac26-986a22aca79f eTag: 14481982387543041998 source: Fabric]
Feb 9 19:17:39.806819 waagent[1500]: 2024-02-09T19:17:39.806760Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 9 19:17:39.807916 waagent[1500]: 2024-02-09T19:17:39.807853Z INFO ExtHandler Feb 9 19:17:39.808050 waagent[1500]: 2024-02-09T19:17:39.807997Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:17:39.814202 waagent[1500]: 2024-02-09T19:17:39.814140Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:17:39.814670 waagent[1500]: 2024-02-09T19:17:39.814620Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:17:39.833781 waagent[1500]: 2024-02-09T19:17:39.833693Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:17:39.899261 waagent[1500]: 2024-02-09T19:17:39.899082Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C76FFEC65B655EFC851FC9FE8AF1829EBBFB089F', 'hasPrivateKey': True} Feb 9 19:17:39.900272 waagent[1500]: 2024-02-09T19:17:39.900189Z INFO ExtHandler Downloaded certificate {'thumbprint': '4EAAF19B68E80807EA88A63C98DFE6B7E9C2E9AF', 'hasPrivateKey': False} Feb 9 19:17:39.901301 waagent[1500]: 2024-02-09T19:17:39.901220Z INFO ExtHandler Fetch goal state completed Feb 9 19:17:39.922551 waagent[1500]: 2024-02-09T19:17:39.922453Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1500 Feb 9 19:17:39.926580 waagent[1500]: 2024-02-09T19:17:39.926490Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:17:39.928079 waagent[1500]: 2024-02-09T19:17:39.928014Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:17:39.933711 waagent[1500]: 2024-02-09T19:17:39.933648Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:17:39.934119 waagent[1500]: 2024-02-09T19:17:39.934056Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:17:39.943320 
waagent[1500]: 2024-02-09T19:17:39.943259Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:17:39.943827 waagent[1500]: 2024-02-09T19:17:39.943767Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:17:39.950462 waagent[1500]: 2024-02-09T19:17:39.950355Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 19:17:39.955249 waagent[1500]: 2024-02-09T19:17:39.955171Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:17:39.956783 waagent[1500]: 2024-02-09T19:17:39.956719Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:17:39.958860 waagent[1500]: 2024-02-09T19:17:39.958797Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:17:39.959032 waagent[1500]: 2024-02-09T19:17:39.958982Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:17:39.959618 waagent[1500]: 2024-02-09T19:17:39.959559Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 19:17:39.959916 waagent[1500]: 2024-02-09T19:17:39.959858Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:17:39.959916 waagent[1500]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:17:39.959916 waagent[1500]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:17:39.959916 waagent[1500]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:17:39.959916 waagent[1500]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:17:39.959916 waagent[1500]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:17:39.959916 waagent[1500]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:17:39.963632 waagent[1500]: 2024-02-09T19:17:39.963395Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:17:39.963774 waagent[1500]: 2024-02-09T19:17:39.963705Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:17:39.964034 waagent[1500]: 2024-02-09T19:17:39.963981Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:17:39.965151 waagent[1500]: 2024-02-09T19:17:39.965087Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:17:39.965324 waagent[1500]: 2024-02-09T19:17:39.965272Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:17:39.965464 waagent[1500]: 2024-02-09T19:17:39.965416Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:17:39.972252 waagent[1500]: 2024-02-09T19:17:39.972149Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:17:39.973301 waagent[1500]: 2024-02-09T19:17:39.971765Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:17:39.980512 waagent[1500]: 2024-02-09T19:17:39.980400Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 19:17:39.980721 waagent[1500]: 2024-02-09T19:17:39.979926Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:17:39.985980 waagent[1500]: 2024-02-09T19:17:39.985865Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:17:39.985980 waagent[1500]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:17:39.985980 waagent[1500]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:17:39.985980 waagent[1500]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a2:84:54 brd ff:ff:ff:ff:ff:ff Feb 9 19:17:39.985980 waagent[1500]: 3: enP28076s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a2:84:54 brd ff:ff:ff:ff:ff:ff\ altname enP28076p0s2 Feb 9 19:17:39.985980 waagent[1500]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:17:39.985980 waagent[1500]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:17:39.985980 waagent[1500]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:17:39.985980 waagent[1500]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:17:39.985980 waagent[1500]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:17:39.985980 waagent[1500]: 2: eth0 inet6 fe80::222:48ff:fea2:8454/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:17:39.993756 waagent[1500]: 2024-02-09T19:17:39.993527Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:17:40.009862 waagent[1500]: 2024-02-09T19:17:40.009786Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:17:40.010882 waagent[1500]: 2024-02-09T19:17:40.010802Z INFO ExtHandler ExtHandler 
Downloading manifest Feb 9 19:17:40.066892 waagent[1500]: 2024-02-09T19:17:40.066766Z INFO ExtHandler ExtHandler Feb 9 19:17:40.067389 waagent[1500]: 2024-02-09T19:17:40.067270Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ac0e9ddf-6a3e-4fdd-96e2-dba1074f5ab9 correlation 9a922ca7-a05f-4dde-a608-3caa6b18c961 created: 2024-02-09T19:15:54.904846Z] Feb 9 19:17:40.070365 waagent[1500]: 2024-02-09T19:17:40.070288Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 19:17:40.082329 waagent[1500]: 2024-02-09T19:17:40.082219Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 15 ms] Feb 9 19:17:40.113194 waagent[1500]: 2024-02-09T19:17:40.112837Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:17:40.128459 waagent[1500]: 2024-02-09T19:17:40.128365Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 9 19:17:40.128459 waagent[1500]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:17:40.128459 waagent[1500]: pkts bytes target prot opt in out source destination Feb 9 19:17:40.128459 waagent[1500]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:17:40.128459 waagent[1500]: pkts bytes target prot opt in out source destination Feb 9 19:17:40.128459 waagent[1500]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:17:40.128459 waagent[1500]: pkts bytes target prot opt in out source destination Feb 9 19:17:40.128459 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:17:40.128459 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:17:40.128459 waagent[1500]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:17:40.128926 waagent[1500]: 2024-02-09T19:17:40.128594Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state 
agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D3185141-8162-4D8C-A4E3-67C9ED7305B9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:17:40.136152 waagent[1500]: 2024-02-09T19:17:40.136042Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:17:40.136152 waagent[1500]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:17:40.136152 waagent[1500]: pkts bytes target prot opt in out source destination Feb 9 19:17:40.136152 waagent[1500]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:17:40.136152 waagent[1500]: pkts bytes target prot opt in out source destination Feb 9 19:17:40.136152 waagent[1500]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:17:40.136152 waagent[1500]: pkts bytes target prot opt in out source destination Feb 9 19:17:40.136152 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:17:40.136152 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:17:40.136152 waagent[1500]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:17:40.136791 waagent[1500]: 2024-02-09T19:17:40.136732Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:17:59.770779 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:18:01.775008 systemd[1]: Created slice system-sshd.slice. Feb 9 19:18:01.777130 systemd[1]: Started sshd@0-10.200.8.14:22-10.200.12.6:34392.service. Feb 9 19:18:02.996744 sshd[1545]: Accepted publickey for core from 10.200.12.6 port 34392 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:18:02.998322 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:03.002644 systemd-logind[1293]: New session 3 of user core. Feb 9 19:18:03.005403 systemd[1]: Started session-3.scope. Feb 9 19:18:03.543699 systemd[1]: Started sshd@1-10.200.8.14:22-10.200.12.6:34400.service. 
Feb 9 19:18:04.164824 sshd[1550]: Accepted publickey for core from 10.200.12.6 port 34400 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:18:04.166685 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:04.172328 systemd-logind[1293]: New session 4 of user core. Feb 9 19:18:04.172970 systemd[1]: Started session-4.scope. Feb 9 19:18:04.606014 sshd[1550]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:04.609452 systemd[1]: sshd@1-10.200.8.14:22-10.200.12.6:34400.service: Deactivated successfully. Feb 9 19:18:04.610533 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:18:04.611160 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:18:04.611985 systemd-logind[1293]: Removed session 4. Feb 9 19:18:04.709787 systemd[1]: Started sshd@2-10.200.8.14:22-10.200.12.6:34412.service. Feb 9 19:18:05.361539 sshd[1559]: Accepted publickey for core from 10.200.12.6 port 34412 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:18:05.363076 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:05.368187 systemd[1]: Started session-5.scope. Feb 9 19:18:05.368664 systemd-logind[1293]: New session 5 of user core. Feb 9 19:18:05.793676 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:05.797357 systemd[1]: sshd@2-10.200.8.14:22-10.200.12.6:34412.service: Deactivated successfully. Feb 9 19:18:05.798323 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:18:05.798939 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:18:05.799709 systemd-logind[1293]: Removed session 5. Feb 9 19:18:05.899125 systemd[1]: Started sshd@3-10.200.8.14:22-10.200.12.6:34426.service. 
Feb 9 19:18:06.521588 sshd[1565]: Accepted publickey for core from 10.200.12.6 port 34426 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:18:06.523186 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:06.528101 systemd[1]: Started session-6.scope. Feb 9 19:18:06.528820 systemd-logind[1293]: New session 6 of user core. Feb 9 19:18:06.962124 sshd[1565]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:06.965546 systemd[1]: sshd@3-10.200.8.14:22-10.200.12.6:34426.service: Deactivated successfully. Feb 9 19:18:06.966557 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:18:06.967394 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:18:06.968333 systemd-logind[1293]: Removed session 6. Feb 9 19:18:07.070388 systemd[1]: Started sshd@4-10.200.8.14:22-10.200.12.6:34722.service. Feb 9 19:18:07.696471 sshd[1571]: Accepted publickey for core from 10.200.12.6 port 34722 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:18:07.698340 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:07.704330 systemd-logind[1293]: New session 7 of user core. Feb 9 19:18:07.704853 systemd[1]: Started session-7.scope. Feb 9 19:18:08.348177 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:18:08.348462 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:18:09.464330 systemd[1]: Starting docker.service... 
Feb 9 19:18:09.519343 env[1589]: time="2024-02-09T19:18:09.519274253Z" level=info msg="Starting up" Feb 9 19:18:09.520678 env[1589]: time="2024-02-09T19:18:09.520647357Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:18:09.520816 env[1589]: time="2024-02-09T19:18:09.520804157Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:18:09.520880 env[1589]: time="2024-02-09T19:18:09.520868157Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:18:09.520927 env[1589]: time="2024-02-09T19:18:09.520918357Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:18:09.522820 env[1589]: time="2024-02-09T19:18:09.522800162Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:18:09.522919 env[1589]: time="2024-02-09T19:18:09.522909262Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:18:09.522970 env[1589]: time="2024-02-09T19:18:09.522960662Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:18:09.523012 env[1589]: time="2024-02-09T19:18:09.523004662Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:18:09.620758 env[1589]: time="2024-02-09T19:18:09.620705790Z" level=info msg="Loading containers: start." Feb 9 19:18:09.633547 update_engine[1294]: I0209 19:18:09.633502 1294 update_attempter.cc:509] Updating boot flags... Feb 9 19:18:09.830267 kernel: Initializing XFRM netlink socket Feb 9 19:18:09.868548 env[1589]: time="2024-02-09T19:18:09.868501268Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 19:18:09.971606 systemd-networkd[1452]: docker0: Link UP Feb 9 19:18:09.993315 env[1589]: time="2024-02-09T19:18:09.993276259Z" level=info msg="Loading containers: done." Feb 9 19:18:10.005323 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1827564439-merged.mount: Deactivated successfully. Feb 9 19:18:10.017672 env[1589]: time="2024-02-09T19:18:10.017623314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:18:10.017888 env[1589]: time="2024-02-09T19:18:10.017862914Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:18:10.018002 env[1589]: time="2024-02-09T19:18:10.017982815Z" level=info msg="Daemon has completed initialization" Feb 9 19:18:10.082462 systemd[1]: Started docker.service. Feb 9 19:18:10.092364 env[1589]: time="2024-02-09T19:18:10.092313877Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:18:10.109515 systemd[1]: Reloading. Feb 9 19:18:10.183946 /usr/lib/systemd/system-generators/torcx-generator[1756]: time="2024-02-09T19:18:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:18:10.197570 /usr/lib/systemd/system-generators/torcx-generator[1756]: time="2024-02-09T19:18:10Z" level=info msg="torcx already run" Feb 9 19:18:10.279978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:18:10.279999 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 9 19:18:10.298384 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:18:10.383998 systemd[1]: Started kubelet.service. Feb 9 19:18:10.459891 kubelet[1818]: E0209 19:18:10.459816 1818 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:18:10.461816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:18:10.461984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:18:15.673844 env[1308]: time="2024-02-09T19:18:15.673774300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 19:18:16.391328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074251146.mount: Deactivated successfully. 
Feb 9 19:18:18.589582 env[1308]: time="2024-02-09T19:18:18.589516165Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:18.599416 env[1308]: time="2024-02-09T19:18:18.599346578Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:18.603303 env[1308]: time="2024-02-09T19:18:18.603259183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:18.608047 env[1308]: time="2024-02-09T19:18:18.607992689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:18.608758 env[1308]: time="2024-02-09T19:18:18.608717690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 9 19:18:18.619650 env[1308]: time="2024-02-09T19:18:18.619620304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 19:18:20.568674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:18:20.568958 systemd[1]: Stopped kubelet.service. Feb 9 19:18:20.571082 systemd[1]: Started kubelet.service. 
Feb 9 19:18:20.653101 kubelet[1844]: E0209 19:18:20.653045 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:18:20.657741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:18:20.657903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:18:20.935896 env[1308]: time="2024-02-09T19:18:20.935743927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:20.942167 env[1308]: time="2024-02-09T19:18:20.942095644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:20.947882 env[1308]: time="2024-02-09T19:18:20.947843459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:20.951921 env[1308]: time="2024-02-09T19:18:20.951884469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:20.952533 env[1308]: time="2024-02-09T19:18:20.952496771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 9 19:18:20.963089 env[1308]: time="2024-02-09T19:18:20.963053398Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 19:18:21.380155 env[1308]: time="2024-02-09T19:18:21.380061338Z" level=error msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-scheduler:v1.28.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Feb 9 19:18:21.421926 env[1308]: time="2024-02-09T19:18:21.421887383Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 19:18:22.821817 env[1308]: time="2024-02-09T19:18:22.821754245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:22.828541 env[1308]: time="2024-02-09T19:18:22.828489130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:22.834990 env[1308]: time="2024-02-09T19:18:22.834956412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:22.839121 env[1308]: time="2024-02-09T19:18:22.839078364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:22.839806 env[1308]: time="2024-02-09T19:18:22.839768173Z" level=info msg="PullImage 
\"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 9 19:18:22.854383 env[1308]: time="2024-02-09T19:18:22.854351058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 19:18:23.964226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424616157.mount: Deactivated successfully. Feb 9 19:18:24.561396 env[1308]: time="2024-02-09T19:18:24.561333966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:24.567420 env[1308]: time="2024-02-09T19:18:24.567372838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:24.571194 env[1308]: time="2024-02-09T19:18:24.571141683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:24.575083 env[1308]: time="2024-02-09T19:18:24.575048730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:24.575440 env[1308]: time="2024-02-09T19:18:24.575407534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 19:18:24.585283 env[1308]: time="2024-02-09T19:18:24.585254552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:18:25.065041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994243391.mount: Deactivated successfully. 
Feb 9 19:18:25.086140 env[1308]: time="2024-02-09T19:18:25.086085830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:25.092475 env[1308]: time="2024-02-09T19:18:25.092425604Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:25.096263 env[1308]: time="2024-02-09T19:18:25.096211548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:25.100322 env[1308]: time="2024-02-09T19:18:25.100288196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:25.100797 env[1308]: time="2024-02-09T19:18:25.100766901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:18:25.111508 env[1308]: time="2024-02-09T19:18:25.111478626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 19:18:25.739571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838041304.mount: Deactivated successfully. 
Feb 9 19:18:30.416726 env[1308]: time="2024-02-09T19:18:30.416654170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:30.421336 env[1308]: time="2024-02-09T19:18:30.421293117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:30.424333 env[1308]: time="2024-02-09T19:18:30.424299447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:30.429443 env[1308]: time="2024-02-09T19:18:30.429411999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:30.430123 env[1308]: time="2024-02-09T19:18:30.430088706Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 19:18:30.441847 env[1308]: time="2024-02-09T19:18:30.441812325Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 19:18:30.818804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:18:30.819222 systemd[1]: Stopped kubelet.service. Feb 9 19:18:30.821710 systemd[1]: Started kubelet.service. 
Feb 9 19:18:30.876635 kubelet[1882]: E0209 19:18:30.876580 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:18:30.879349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:18:30.879538 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:18:31.015156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654386203.mount: Deactivated successfully. Feb 9 19:18:31.692057 env[1308]: time="2024-02-09T19:18:31.691995240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:31.700430 env[1308]: time="2024-02-09T19:18:31.700389423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:31.704398 env[1308]: time="2024-02-09T19:18:31.704360162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:31.708393 env[1308]: time="2024-02-09T19:18:31.708355902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:31.708901 env[1308]: time="2024-02-09T19:18:31.708871107Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference 
\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 19:18:34.451300 systemd[1]: Stopped kubelet.service. Feb 9 19:18:34.468282 systemd[1]: Reloading. Feb 9 19:18:34.553152 /usr/lib/systemd/system-generators/torcx-generator[1969]: time="2024-02-09T19:18:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:18:34.553194 /usr/lib/systemd/system-generators/torcx-generator[1969]: time="2024-02-09T19:18:34Z" level=info msg="torcx already run" Feb 9 19:18:34.651919 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:18:34.651948 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:18:34.670718 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:18:34.768726 systemd[1]: Started kubelet.service. Feb 9 19:18:34.826453 kubelet[2032]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:18:34.826453 kubelet[2032]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:18:34.826453 kubelet[2032]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:18:34.826993 kubelet[2032]: I0209 19:18:34.826513 2032 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:18:35.178141 kubelet[2032]: I0209 19:18:35.178097 2032 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:18:35.178141 kubelet[2032]: I0209 19:18:35.178128 2032 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:18:35.178442 kubelet[2032]: I0209 19:18:35.178421 2032 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:18:35.182958 kubelet[2032]: E0209 19:18:35.182931 2032 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.183175 kubelet[2032]: I0209 19:18:35.183154 2032 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:18:35.190874 kubelet[2032]: I0209 19:18:35.190841 2032 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:18:35.191136 kubelet[2032]: I0209 19:18:35.191117 2032 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:18:35.191357 kubelet[2032]: I0209 19:18:35.191335 2032 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:18:35.191495 kubelet[2032]: I0209 19:18:35.191372 2032 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:18:35.191495 kubelet[2032]: I0209 19:18:35.191384 2032 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:18:35.191585 kubelet[2032]: I0209 
19:18:35.191510 2032 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:18:35.191625 kubelet[2032]: I0209 19:18:35.191607 2032 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:18:35.191625 kubelet[2032]: I0209 19:18:35.191624 2032 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:18:35.191697 kubelet[2032]: I0209 19:18:35.191655 2032 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:18:35.191697 kubelet[2032]: I0209 19:18:35.191674 2032 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:18:35.192574 kubelet[2032]: W0209 19:18:35.192520 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-19528a6d7a&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.192669 kubelet[2032]: E0209 19:18:35.192586 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-19528a6d7a&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.192720 kubelet[2032]: I0209 19:18:35.192686 2032 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:18:35.192990 kubelet[2032]: W0209 19:18:35.192971 2032 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 19:18:35.193576 kubelet[2032]: I0209 19:18:35.193554 2032 server.go:1232] "Started kubelet" Feb 9 19:18:35.198045 kubelet[2032]: W0209 19:18:35.198003 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.198170 kubelet[2032]: E0209 19:18:35.198159 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.198515 kubelet[2032]: E0209 19:18:35.198436 2032 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-19528a6d7a.17b247f268d8b003", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-19528a6d7a", UID:"ci-3510.3.2-a-19528a6d7a", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-19528a6d7a"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 193528323, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 193528323, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-19528a6d7a"}': 'Post "https://10.200.8.14:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.14:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:18:35.198758 kubelet[2032]: I0209 19:18:35.198746 2032 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:18:35.199079 kubelet[2032]: I0209 19:18:35.199067 2032 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:18:35.199194 kubelet[2032]: I0209 19:18:35.199186 2032 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:18:35.199889 kubelet[2032]: I0209 19:18:35.199874 2032 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:18:35.201603 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:18:35.202164 kubelet[2032]: I0209 19:18:35.202142 2032 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:18:35.202460 kubelet[2032]: E0209 19:18:35.202446 2032 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:18:35.202578 kubelet[2032]: E0209 19:18:35.202569 2032 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:18:35.205094 kubelet[2032]: E0209 19:18:35.205078 2032 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-19528a6d7a\" not found" Feb 9 19:18:35.205336 kubelet[2032]: I0209 19:18:35.205323 2032 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:18:35.205538 kubelet[2032]: I0209 19:18:35.205526 2032 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:18:35.205686 kubelet[2032]: I0209 19:18:35.205675 2032 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:18:35.206221 kubelet[2032]: W0209 19:18:35.206176 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.206374 kubelet[2032]: E0209 19:18:35.206361 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.207051 kubelet[2032]: E0209 19:18:35.207036 2032 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-19528a6d7a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="200ms" Feb 9 19:18:35.306394 kubelet[2032]: I0209 19:18:35.306359 2032 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:18:35.308095 kubelet[2032]: I0209 19:18:35.308075 2032 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 19:18:35.308334 kubelet[2032]: I0209 19:18:35.308317 2032 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:18:35.308852 kubelet[2032]: I0209 19:18:35.308835 2032 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:18:35.309036 kubelet[2032]: E0209 19:18:35.309022 2032 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:18:35.310464 kubelet[2032]: W0209 19:18:35.310439 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.310592 kubelet[2032]: E0209 19:18:35.310581 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:35.312145 kubelet[2032]: I0209 19:18:35.312124 2032 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.313068 kubelet[2032]: E0209 19:18:35.313053 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.313572 kubelet[2032]: I0209 19:18:35.313558 2032 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:18:35.313755 kubelet[2032]: I0209 19:18:35.313743 2032 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:18:35.314032 kubelet[2032]: I0209 19:18:35.314018 2032 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:18:35.319195 kubelet[2032]: I0209 
19:18:35.319180 2032 policy_none.go:49] "None policy: Start" Feb 9 19:18:35.319920 kubelet[2032]: I0209 19:18:35.319899 2032 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:18:35.319920 kubelet[2032]: I0209 19:18:35.319923 2032 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:18:35.328174 systemd[1]: Created slice kubepods.slice. Feb 9 19:18:35.334492 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:18:35.338945 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:18:35.343838 kubelet[2032]: I0209 19:18:35.343819 2032 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:18:35.344177 kubelet[2032]: I0209 19:18:35.344165 2032 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:18:35.346582 kubelet[2032]: E0209 19:18:35.346557 2032 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-19528a6d7a\" not found" Feb 9 19:18:35.408100 kubelet[2032]: E0209 19:18:35.408052 2032 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-19528a6d7a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="400ms" Feb 9 19:18:35.409224 kubelet[2032]: I0209 19:18:35.409195 2032 topology_manager.go:215] "Topology Admit Handler" podUID="592aee8233a588cf619c65c72a53b8ee" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.411212 kubelet[2032]: I0209 19:18:35.411171 2032 topology_manager.go:215] "Topology Admit Handler" podUID="4f7937a15d4841f6a047867e6f6977fd" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.413147 kubelet[2032]: I0209 19:18:35.413123 2032 topology_manager.go:215] "Topology Admit Handler" 
podUID="8fcbca9b3ced454188a2cbd25ae328d5" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.420345 systemd[1]: Created slice kubepods-burstable-pod592aee8233a588cf619c65c72a53b8ee.slice. Feb 9 19:18:35.435183 systemd[1]: Created slice kubepods-burstable-pod4f7937a15d4841f6a047867e6f6977fd.slice. Feb 9 19:18:35.443453 systemd[1]: Created slice kubepods-burstable-pod8fcbca9b3ced454188a2cbd25ae328d5.slice. Feb 9 19:18:35.507546 kubelet[2032]: I0209 19:18:35.507498 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fcbca9b3ced454188a2cbd25ae328d5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-19528a6d7a\" (UID: \"8fcbca9b3ced454188a2cbd25ae328d5\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507546 kubelet[2032]: I0209 19:18:35.507553 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/592aee8233a588cf619c65c72a53b8ee-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" (UID: \"592aee8233a588cf619c65c72a53b8ee\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507789 kubelet[2032]: I0209 19:18:35.507584 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507789 kubelet[2032]: I0209 19:18:35.507609 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-k8s-certs\") 
pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507789 kubelet[2032]: I0209 19:18:35.507633 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507789 kubelet[2032]: I0209 19:18:35.507658 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507789 kubelet[2032]: I0209 19:18:35.507681 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/592aee8233a588cf619c65c72a53b8ee-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" (UID: \"592aee8233a588cf619c65c72a53b8ee\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507938 kubelet[2032]: I0209 19:18:35.507705 2032 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/592aee8233a588cf619c65c72a53b8ee-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" (UID: \"592aee8233a588cf619c65c72a53b8ee\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.507938 kubelet[2032]: I0209 19:18:35.507729 2032 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.515247 kubelet[2032]: I0209 19:18:35.515199 2032 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.515596 kubelet[2032]: E0209 19:18:35.515576 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:35.735547 env[1308]: time="2024-02-09T19:18:35.734910832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-19528a6d7a,Uid:592aee8233a588cf619c65c72a53b8ee,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:35.742642 env[1308]: time="2024-02-09T19:18:35.742589700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-19528a6d7a,Uid:4f7937a15d4841f6a047867e6f6977fd,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:35.747632 env[1308]: time="2024-02-09T19:18:35.747564344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-19528a6d7a,Uid:8fcbca9b3ced454188a2cbd25ae328d5,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:35.809604 kubelet[2032]: E0209 19:18:35.809558 2032 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-19528a6d7a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="800ms" Feb 9 19:18:35.917593 kubelet[2032]: I0209 19:18:35.917555 2032 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-19528a6d7a" Feb 9 
19:18:35.918075 kubelet[2032]: E0209 19:18:35.918012 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:36.291803 kubelet[2032]: W0209 19:18:36.291722 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-19528a6d7a&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:36.291803 kubelet[2032]: E0209 19:18:36.291796 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-19528a6d7a&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:36.417419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354599890.mount: Deactivated successfully. 
Feb 9 19:18:36.448527 kubelet[2032]: W0209 19:18:36.448457 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:36.448527 kubelet[2032]: E0209 19:18:36.448531 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:36.610892 kubelet[2032]: E0209 19:18:36.610848 2032 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-19528a6d7a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="1.6s" Feb 9 19:18:36.662651 kubelet[2032]: W0209 19:18:36.662576 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:36.662651 kubelet[2032]: E0209 19:18:36.662652 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Feb 9 19:18:36.720713 kubelet[2032]: I0209 19:18:36.720669 2032 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-19528a6d7a" Feb 9 19:18:36.721175 kubelet[2032]: E0209 19:18:36.721084 2032 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection 
refused" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:36.791195 kubelet[2032]: W0209 19:18:36.791125 2032 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused
Feb 9 19:18:36.791195 kubelet[2032]: E0209 19:18:36.791197 2032 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused
Feb 9 19:18:36.923087 env[1308]: time="2024-02-09T19:18:36.922945570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.926854 env[1308]: time="2024-02-09T19:18:36.926807603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.942085 env[1308]: time="2024-02-09T19:18:36.942035535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.944760 env[1308]: time="2024-02-09T19:18:36.944719658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.949786 env[1308]: time="2024-02-09T19:18:36.949739302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.952618 env[1308]: time="2024-02-09T19:18:36.952584426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.959619 env[1308]: time="2024-02-09T19:18:36.959578187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.969219 env[1308]: time="2024-02-09T19:18:36.969176170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.973684 env[1308]: time="2024-02-09T19:18:36.973646308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:36.999740 env[1308]: time="2024-02-09T19:18:36.999630233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:37.004662 env[1308]: time="2024-02-09T19:18:37.004419574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:37.021869 env[1308]: time="2024-02-09T19:18:37.021507018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:37.080712 env[1308]: time="2024-02-09T19:18:37.080593616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:18:37.080712 env[1308]: time="2024-02-09T19:18:37.080638516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:18:37.080712 env[1308]: time="2024-02-09T19:18:37.080653116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:18:37.081007 env[1308]: time="2024-02-09T19:18:37.080791817Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e523a8028165e8f0dcc3da0c972d0f5340124651b3e03e12796f62fda505cd2 pid=2080 runtime=io.containerd.runc.v2
Feb 9 19:18:37.089066 env[1308]: time="2024-02-09T19:18:37.082570832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:18:37.089066 env[1308]: time="2024-02-09T19:18:37.082610833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:18:37.089066 env[1308]: time="2024-02-09T19:18:37.082650633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:18:37.089066 env[1308]: time="2024-02-09T19:18:37.082827434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77e015d3f72981847ee3d72c6a09bd611624b03a47113f3135f256deece6f3c6 pid=2071 runtime=io.containerd.runc.v2
Feb 9 19:18:37.105811 systemd[1]: Started cri-containerd-77e015d3f72981847ee3d72c6a09bd611624b03a47113f3135f256deece6f3c6.scope.
Feb 9 19:18:37.118731 systemd[1]: Started cri-containerd-7e523a8028165e8f0dcc3da0c972d0f5340124651b3e03e12796f62fda505cd2.scope.
Feb 9 19:18:37.142473 env[1308]: time="2024-02-09T19:18:37.142381836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:18:37.142763 env[1308]: time="2024-02-09T19:18:37.142714439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:18:37.142915 env[1308]: time="2024-02-09T19:18:37.142875840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:18:37.144472 env[1308]: time="2024-02-09T19:18:37.144424953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1895afa10db7c4bc90cb69f44df011bd1857e18f2579f947f09c6a62f1a7d8a0 pid=2139 runtime=io.containerd.runc.v2
Feb 9 19:18:37.172828 systemd[1]: Started cri-containerd-1895afa10db7c4bc90cb69f44df011bd1857e18f2579f947f09c6a62f1a7d8a0.scope.
Feb 9 19:18:37.203593 env[1308]: time="2024-02-09T19:18:37.203531351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-19528a6d7a,Uid:592aee8233a588cf619c65c72a53b8ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e523a8028165e8f0dcc3da0c972d0f5340124651b3e03e12796f62fda505cd2\""
Feb 9 19:18:37.215777 env[1308]: time="2024-02-09T19:18:37.214309842Z" level=info msg="CreateContainer within sandbox \"7e523a8028165e8f0dcc3da0c972d0f5340124651b3e03e12796f62fda505cd2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:18:37.217855 env[1308]: time="2024-02-09T19:18:37.217809272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-19528a6d7a,Uid:4f7937a15d4841f6a047867e6f6977fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"77e015d3f72981847ee3d72c6a09bd611624b03a47113f3135f256deece6f3c6\""
Feb 9 19:18:37.222145 env[1308]: time="2024-02-09T19:18:37.222104108Z" level=info msg="CreateContainer within sandbox \"77e015d3f72981847ee3d72c6a09bd611624b03a47113f3135f256deece6f3c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 19:18:37.253744 env[1308]: time="2024-02-09T19:18:37.253693074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-19528a6d7a,Uid:8fcbca9b3ced454188a2cbd25ae328d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1895afa10db7c4bc90cb69f44df011bd1857e18f2579f947f09c6a62f1a7d8a0\""
Feb 9 19:18:37.256749 env[1308]: time="2024-02-09T19:18:37.256687299Z" level=info msg="CreateContainer within sandbox \"1895afa10db7c4bc90cb69f44df011bd1857e18f2579f947f09c6a62f1a7d8a0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 19:18:37.260723 env[1308]: time="2024-02-09T19:18:37.260686733Z" level=info msg="CreateContainer within sandbox \"7e523a8028165e8f0dcc3da0c972d0f5340124651b3e03e12796f62fda505cd2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7898030aec5256a549350b6f57b1576a3a14421af626bbe4ee22cb704d112bcb\""
Feb 9 19:18:37.261261 env[1308]: time="2024-02-09T19:18:37.261218337Z" level=info msg="StartContainer for \"7898030aec5256a549350b6f57b1576a3a14421af626bbe4ee22cb704d112bcb\""
Feb 9 19:18:37.283503 systemd[1]: Started cri-containerd-7898030aec5256a549350b6f57b1576a3a14421af626bbe4ee22cb704d112bcb.scope.
Feb 9 19:18:37.289059 env[1308]: time="2024-02-09T19:18:37.289000571Z" level=info msg="CreateContainer within sandbox \"77e015d3f72981847ee3d72c6a09bd611624b03a47113f3135f256deece6f3c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"edf3ecd1667aba0ee601c1292f12d5bb84b16449ac0c07333884ee72feba5004\""
Feb 9 19:18:37.290565 env[1308]: time="2024-02-09T19:18:37.290524384Z" level=info msg="StartContainer for \"edf3ecd1667aba0ee601c1292f12d5bb84b16449ac0c07333884ee72feba5004\""
Feb 9 19:18:37.331396 env[1308]: time="2024-02-09T19:18:37.331344228Z" level=info msg="CreateContainer within sandbox \"1895afa10db7c4bc90cb69f44df011bd1857e18f2579f947f09c6a62f1a7d8a0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"85b6fd9760cb00135c897156ab05015e11093ed7ffb4044fae4fa7617f77a9b3\""
Feb 9 19:18:37.334080 systemd[1]: Started cri-containerd-edf3ecd1667aba0ee601c1292f12d5bb84b16449ac0c07333884ee72feba5004.scope.
Feb 9 19:18:37.341344 env[1308]: time="2024-02-09T19:18:37.340956009Z" level=info msg="StartContainer for \"85b6fd9760cb00135c897156ab05015e11093ed7ffb4044fae4fa7617f77a9b3\""
Feb 9 19:18:37.368068 kubelet[2032]: E0209 19:18:37.368015 2032 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.14:6443: connect: connection refused
Feb 9 19:18:37.391372 env[1308]: time="2024-02-09T19:18:37.388489609Z" level=info msg="StartContainer for \"7898030aec5256a549350b6f57b1576a3a14421af626bbe4ee22cb704d112bcb\" returns successfully"
Feb 9 19:18:37.389376 systemd[1]: Started cri-containerd-85b6fd9760cb00135c897156ab05015e11093ed7ffb4044fae4fa7617f77a9b3.scope.
Feb 9 19:18:37.463515 env[1308]: time="2024-02-09T19:18:37.463387240Z" level=info msg="StartContainer for \"edf3ecd1667aba0ee601c1292f12d5bb84b16449ac0c07333884ee72feba5004\" returns successfully"
Feb 9 19:18:37.500063 env[1308]: time="2024-02-09T19:18:37.500012049Z" level=info msg="StartContainer for \"85b6fd9760cb00135c897156ab05015e11093ed7ffb4044fae4fa7617f77a9b3\" returns successfully"
Feb 9 19:18:38.322892 kubelet[2032]: I0209 19:18:38.322864 2032 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:39.735499 kubelet[2032]: E0209 19:18:39.735376 2032 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-19528a6d7a\" not found" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:39.744781 kubelet[2032]: I0209 19:18:39.744739 2032 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:39.833774 kubelet[2032]: E0209 19:18:39.833600 2032 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-19528a6d7a.17b247f268d8b003", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-19528a6d7a", UID:"ci-3510.3.2-a-19528a6d7a", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-19528a6d7a"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 193528323, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 193528323, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-19528a6d7a"}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:18:39.928718 kubelet[2032]: E0209 19:18:39.928306 2032 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-19528a6d7a.17b247f269626ed3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-19528a6d7a", UID:"ci-3510.3.2-a-19528a6d7a", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-19528a6d7a"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 202555603, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 202555603, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-19528a6d7a"}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:18:40.022648 kubelet[2032]: E0209 19:18:40.022146 2032 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-19528a6d7a.17b247f26fe7f8ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-19528a6d7a", UID:"ci-3510.3.2-a-19528a6d7a", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-19528a6d7a status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-19528a6d7a"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 311970475, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 311970475, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-19528a6d7a"}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:18:40.085809 kubelet[2032]: E0209 19:18:40.085669 2032 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-19528a6d7a.17b247f26fe81723", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-19528a6d7a", UID:"ci-3510.3.2-a-19528a6d7a", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-19528a6d7a status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-19528a6d7a"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 311978275, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 18, 35, 311978275, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-19528a6d7a"}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:18:40.196998 kubelet[2032]: I0209 19:18:40.196950 2032 apiserver.go:52] "Watching apiserver"
Feb 9 19:18:40.205899 kubelet[2032]: I0209 19:18:40.205861 2032 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:18:40.387140 kubelet[2032]: E0209 19:18:40.387096 2032 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:43.226732 kubelet[2032]: W0209 19:18:43.226692 2032 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:44.732214 systemd[1]: Reloading.
Feb 9 19:18:44.777844 kubelet[2032]: W0209 19:18:44.777808 2032 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:44.881199 /usr/lib/systemd/system-generators/torcx-generator[2327]: time="2024-02-09T19:18:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:18:44.881257 /usr/lib/systemd/system-generators/torcx-generator[2327]: time="2024-02-09T19:18:44Z" level=info msg="torcx already run"
Feb 9 19:18:44.989952 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:18:44.989972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:18:45.012735 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:18:45.140539 systemd[1]: Stopping kubelet.service...
Feb 9 19:18:45.154819 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:18:45.155071 systemd[1]: Stopped kubelet.service.
Feb 9 19:18:45.157513 systemd[1]: Started kubelet.service.
Feb 9 19:18:45.232551 kubelet[2390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:18:45.232551 kubelet[2390]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:18:45.232551 kubelet[2390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:18:45.236618 kubelet[2390]: I0209 19:18:45.232620 2390 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:18:45.248864 kubelet[2390]: I0209 19:18:45.248769 2390 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 19:18:45.249046 kubelet[2390]: I0209 19:18:45.249032 2390 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:18:45.249447 kubelet[2390]: I0209 19:18:45.249426 2390 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 19:18:45.251198 kubelet[2390]: I0209 19:18:45.251179 2390 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:18:45.252485 kubelet[2390]: I0209 19:18:45.252441 2390 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:18:45.259530 kubelet[2390]: I0209 19:18:45.259512 2390 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:18:45.259872 kubelet[2390]: I0209 19:18:45.259858 2390 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:18:45.260087 kubelet[2390]: I0209 19:18:45.260073 2390 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 19:18:45.260213 kubelet[2390]: I0209 19:18:45.260205 2390 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 19:18:45.260279 kubelet[2390]: I0209 19:18:45.260273 2390 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 19:18:45.260375 kubelet[2390]: I0209 19:18:45.260366 2390 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:18:45.260533 kubelet[2390]: I0209 19:18:45.260522 2390 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 19:18:45.260603 kubelet[2390]: I0209 19:18:45.260594 2390 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:18:45.260724 kubelet[2390]: I0209 19:18:45.260711 2390 kubelet.go:309] "Adding apiserver pod source"
Feb 9 19:18:45.261607 kubelet[2390]: I0209 19:18:45.261586 2390 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:18:45.274445 kubelet[2390]: I0209 19:18:45.274420 2390 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:18:45.275111 kubelet[2390]: I0209 19:18:45.275086 2390 server.go:1232] "Started kubelet"
Feb 9 19:18:45.285253 kubelet[2390]: I0209 19:18:45.280578 2390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:18:45.292855 kubelet[2390]: I0209 19:18:45.289871 2390 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:18:45.292855 kubelet[2390]: I0209 19:18:45.290696 2390 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 19:18:45.292855 kubelet[2390]: I0209 19:18:45.292331 2390 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 19:18:45.292855 kubelet[2390]: I0209 19:18:45.292527 2390 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 19:18:45.295943 kubelet[2390]: E0209 19:18:45.295923 2390 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:18:45.296081 kubelet[2390]: E0209 19:18:45.296070 2390 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:18:45.296248 kubelet[2390]: I0209 19:18:45.296217 2390 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 9 19:18:45.301259 kubelet[2390]: I0209 19:18:45.298615 2390 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:18:45.301259 kubelet[2390]: I0209 19:18:45.298760 2390 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 9 19:18:45.301917 kubelet[2390]: I0209 19:18:45.301794 2390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 19:18:45.302977 kubelet[2390]: I0209 19:18:45.302952 2390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 19:18:45.303224 kubelet[2390]: I0209 19:18:45.302996 2390 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 19:18:45.303224 kubelet[2390]: I0209 19:18:45.303017 2390 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 19:18:45.303224 kubelet[2390]: E0209 19:18:45.303073 2390 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:18:45.379147 kubelet[2390]: I0209 19:18:45.379102 2390 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:18:45.379147 kubelet[2390]: I0209 19:18:45.379131 2390 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:18:45.379147 kubelet[2390]: I0209 19:18:45.379152 2390 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:18:45.379468 kubelet[2390]: I0209 19:18:45.379375 2390 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:18:45.379468 kubelet[2390]: I0209 19:18:45.379401 2390 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 9 19:18:45.379468 kubelet[2390]: I0209 19:18:45.379410 2390 policy_none.go:49] "None policy: Start"
Feb 9 19:18:45.380784 kubelet[2390]: I0209 19:18:45.380768 2390 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:18:45.380913 kubelet[2390]: I0209 19:18:45.380903 2390 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:18:45.381169 kubelet[2390]: I0209 19:18:45.381156 2390 state_mem.go:75] "Updated machine memory state"
Feb 9 19:18:45.385762 kubelet[2390]: I0209 19:18:45.385748 2390 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:18:45.386047 kubelet[2390]: I0209 19:18:45.386037 2390 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:18:45.400670 kubelet[2390]: I0209 19:18:45.400643 2390 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.403347 kubelet[2390]: I0209 19:18:45.403266 2390 topology_manager.go:215] "Topology Admit Handler" podUID="592aee8233a588cf619c65c72a53b8ee" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.403501 kubelet[2390]: I0209 19:18:45.403482 2390 topology_manager.go:215] "Topology Admit Handler" podUID="4f7937a15d4841f6a047867e6f6977fd" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.405899 kubelet[2390]: I0209 19:18:45.405871 2390 topology_manager.go:215] "Topology Admit Handler" podUID="8fcbca9b3ced454188a2cbd25ae328d5" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.437459 kubelet[2390]: W0209 19:18:45.437425 2390 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:45.483452 kubelet[2390]: W0209 19:18:45.483407 2390 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:45.483714 kubelet[2390]: W0209 19:18:45.483693 2390 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:45.483875 kubelet[2390]: E0209 19:18:45.483752 2390 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.484002 kubelet[2390]: E0209 19:18:45.483986 2390 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-19528a6d7a\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.486264 kubelet[2390]: I0209 19:18:45.486238 2390 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.486350 kubelet[2390]: I0209 19:18:45.486336 2390 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.500412 kubelet[2390]: I0209 19:18:45.500327 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.500658 kubelet[2390]: I0209 19:18:45.500636 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.500814 kubelet[2390]: I0209 19:18:45.500802 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.500936 kubelet[2390]: I0209 19:18:45.500926 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.501051 kubelet[2390]: I0209 19:18:45.501041 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7937a15d4841f6a047867e6f6977fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" (UID: \"4f7937a15d4841f6a047867e6f6977fd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.501157 kubelet[2390]: I0209 19:18:45.501148 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fcbca9b3ced454188a2cbd25ae328d5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-19528a6d7a\" (UID: \"8fcbca9b3ced454188a2cbd25ae328d5\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.501282 kubelet[2390]: I0209 19:18:45.501271 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/592aee8233a588cf619c65c72a53b8ee-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" (UID: \"592aee8233a588cf619c65c72a53b8ee\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.501434 kubelet[2390]: I0209 19:18:45.501423 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/592aee8233a588cf619c65c72a53b8ee-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" (UID: \"592aee8233a588cf619c65c72a53b8ee\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:45.501604 kubelet[2390]: I0209 19:18:45.501592 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/592aee8233a588cf619c65c72a53b8ee-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" (UID: \"592aee8233a588cf619c65c72a53b8ee\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:46.263490 kubelet[2390]: I0209 19:18:46.263441 2390 apiserver.go:52] "Watching apiserver"
Feb 9 19:18:46.299643 kubelet[2390]: I0209 19:18:46.299601 2390 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:18:46.366442 sudo[2418]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 9 19:18:46.366728 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 9 19:18:46.368945 kubelet[2390]: W0209 19:18:46.368923 2390 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:46.369193 kubelet[2390]: E0209 19:18:46.369178 2390 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-19528a6d7a\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:46.371481 kubelet[2390]: W0209 19:18:46.371457 2390 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:18:46.371697 kubelet[2390]: E0209 19:18:46.371683 2390 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-19528a6d7a\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a"
Feb 9 19:18:46.392026 kubelet[2390]: I0209 19:18:46.391996 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-19528a6d7a" podStartSLOduration=1.391945183 podCreationTimestamp="2024-02-09 19:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:46.391219178 +0000 UTC m=+1.227218554" watchObservedRunningTime="2024-02-09 19:18:46.391945183 +0000 UTC m=+1.227944659"
Feb 9 19:18:46.419954 kubelet[2390]: I0209 19:18:46.419913 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-19528a6d7a" podStartSLOduration=3.419840769 podCreationTimestamp="2024-02-09 19:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:46.409898303 +0000 UTC m=+1.245897679" watchObservedRunningTime="2024-02-09 19:18:46.419840769 +0000 UTC m=+1.255840145"
Feb 9 19:18:46.935610 sudo[2418]: pam_unix(sudo:session): session closed for user root
Feb 9 19:18:48.017896 sudo[1574]: pam_unix(sudo:session): session closed for user root
Feb 9 19:18:48.117463 sshd[1571]: pam_unix(sshd:session): session closed for user core
Feb 9 19:18:48.121384 systemd[1]: sshd@4-10.200.8.14:22-10.200.12.6:34722.service: Deactivated successfully.
Feb 9 19:18:48.122479 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:18:48.122693 systemd[1]: session-7.scope: Consumed 3.878s CPU time.
Feb 9 19:18:48.123345 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:18:48.124325 systemd-logind[1293]: Removed session 7.
Feb 9 19:18:48.169979 kubelet[2390]: I0209 19:18:48.169924 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-19528a6d7a" podStartSLOduration=4.169889361 podCreationTimestamp="2024-02-09 19:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:46.420838776 +0000 UTC m=+1.256838152" watchObservedRunningTime="2024-02-09 19:18:48.169889361 +0000 UTC m=+3.005888737"
Feb 9 19:18:56.105560 kubelet[2390]: I0209 19:18:56.105519 2390 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 19:18:56.106163 env[1308]: time="2024-02-09T19:18:56.105972241Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:18:56.106593 kubelet[2390]: I0209 19:18:56.106347 2390 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:18:56.610968 kubelet[2390]: I0209 19:18:56.610257 2390 topology_manager.go:215] "Topology Admit Handler" podUID="c71d0007-8e5b-47b5-bc78-b67b86ef9daa" podNamespace="kube-system" podName="kube-proxy-q56q6"
Feb 9 19:18:56.617387 systemd[1]: Created slice kubepods-besteffort-podc71d0007_8e5b_47b5_bc78_b67b86ef9daa.slice.
Feb 9 19:18:56.629584 kubelet[2390]: I0209 19:18:56.629550 2390 topology_manager.go:215] "Topology Admit Handler" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" podNamespace="kube-system" podName="cilium-7jr42"
Feb 9 19:18:56.635628 systemd[1]: Created slice kubepods-burstable-pod2898c470_7495_4f3a_9daf_fecbbd553b97.slice.
Feb 9 19:18:56.675569 kubelet[2390]: I0209 19:18:56.675524 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6r6h\" (UniqueName: \"kubernetes.io/projected/c71d0007-8e5b-47b5-bc78-b67b86ef9daa-kube-api-access-t6r6h\") pod \"kube-proxy-q56q6\" (UID: \"c71d0007-8e5b-47b5-bc78-b67b86ef9daa\") " pod="kube-system/kube-proxy-q56q6" Feb 9 19:18:56.675943 kubelet[2390]: I0209 19:18:56.675919 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cni-path\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.676179 kubelet[2390]: I0209 19:18:56.676162 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-lib-modules\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.676365 kubelet[2390]: I0209 19:18:56.676349 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-config-path\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.676525 kubelet[2390]: I0209 19:18:56.676511 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-kernel\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.676730 kubelet[2390]: I0209 19:18:56.676676 2390 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-bpf-maps\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677027 kubelet[2390]: I0209 19:18:56.676991 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-etc-cni-netd\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677121 kubelet[2390]: I0209 19:18:56.677047 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-run\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677121 kubelet[2390]: I0209 19:18:56.677088 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-hostproc\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677209 kubelet[2390]: I0209 19:18:56.677123 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-net\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677209 kubelet[2390]: I0209 19:18:56.677151 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-hubble-tls\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677209 kubelet[2390]: I0209 19:18:56.677182 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c71d0007-8e5b-47b5-bc78-b67b86ef9daa-kube-proxy\") pod \"kube-proxy-q56q6\" (UID: \"c71d0007-8e5b-47b5-bc78-b67b86ef9daa\") " pod="kube-system/kube-proxy-q56q6" Feb 9 19:18:56.677356 kubelet[2390]: I0209 19:18:56.677212 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c71d0007-8e5b-47b5-bc78-b67b86ef9daa-xtables-lock\") pod \"kube-proxy-q56q6\" (UID: \"c71d0007-8e5b-47b5-bc78-b67b86ef9daa\") " pod="kube-system/kube-proxy-q56q6" Feb 9 19:18:56.677356 kubelet[2390]: I0209 19:18:56.677252 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c71d0007-8e5b-47b5-bc78-b67b86ef9daa-lib-modules\") pod \"kube-proxy-q56q6\" (UID: \"c71d0007-8e5b-47b5-bc78-b67b86ef9daa\") " pod="kube-system/kube-proxy-q56q6" Feb 9 19:18:56.677356 kubelet[2390]: I0209 19:18:56.677283 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-cgroup\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677356 kubelet[2390]: I0209 19:18:56.677311 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-xtables-lock\") pod \"cilium-7jr42\" (UID: 
\"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677356 kubelet[2390]: I0209 19:18:56.677341 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgp5f\" (UniqueName: \"kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-kube-api-access-bgp5f\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.677546 kubelet[2390]: I0209 19:18:56.677371 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2898c470-7495-4f3a-9daf-fecbbd553b97-clustermesh-secrets\") pod \"cilium-7jr42\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") " pod="kube-system/cilium-7jr42" Feb 9 19:18:56.805108 kubelet[2390]: E0209 19:18:56.805076 2390 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 19:18:56.805108 kubelet[2390]: E0209 19:18:56.805110 2390 projected.go:198] Error preparing data for projected volume kube-api-access-bgp5f for pod kube-system/cilium-7jr42: configmap "kube-root-ca.crt" not found Feb 9 19:18:56.805340 kubelet[2390]: E0209 19:18:56.805178 2390 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-kube-api-access-bgp5f podName:2898c470-7495-4f3a-9daf-fecbbd553b97 nodeName:}" failed. No retries permitted until 2024-02-09 19:18:57.305156022 +0000 UTC m=+12.141155398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bgp5f" (UniqueName: "kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-kube-api-access-bgp5f") pod "cilium-7jr42" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97") : configmap "kube-root-ca.crt" not found Feb 9 19:18:56.816209 kubelet[2390]: E0209 19:18:56.816176 2390 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 19:18:56.816209 kubelet[2390]: E0209 19:18:56.816208 2390 projected.go:198] Error preparing data for projected volume kube-api-access-t6r6h for pod kube-system/kube-proxy-q56q6: configmap "kube-root-ca.crt" not found Feb 9 19:18:56.816412 kubelet[2390]: E0209 19:18:56.816264 2390 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c71d0007-8e5b-47b5-bc78-b67b86ef9daa-kube-api-access-t6r6h podName:c71d0007-8e5b-47b5-bc78-b67b86ef9daa nodeName:}" failed. No retries permitted until 2024-02-09 19:18:57.31624658 +0000 UTC m=+12.152245956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t6r6h" (UniqueName: "kubernetes.io/projected/c71d0007-8e5b-47b5-bc78-b67b86ef9daa-kube-api-access-t6r6h") pod "kube-proxy-q56q6" (UID: "c71d0007-8e5b-47b5-bc78-b67b86ef9daa") : configmap "kube-root-ca.crt" not found Feb 9 19:18:57.126646 kubelet[2390]: I0209 19:18:57.126593 2390 topology_manager.go:215] "Topology Admit Handler" podUID="6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-5bm4d" Feb 9 19:18:57.134486 systemd[1]: Created slice kubepods-besteffort-pod6dc4c187_4f9a_4f9e_90b6_3be4e750b3ec.slice. 
Feb 9 19:18:57.181572 kubelet[2390]: I0209 19:18:57.181526 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qwk\" (UniqueName: \"kubernetes.io/projected/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-kube-api-access-n8qwk\") pod \"cilium-operator-6bc8ccdb58-5bm4d\" (UID: \"6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec\") " pod="kube-system/cilium-operator-6bc8ccdb58-5bm4d" Feb 9 19:18:57.181765 kubelet[2390]: I0209 19:18:57.181588 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-5bm4d\" (UID: \"6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec\") " pod="kube-system/cilium-operator-6bc8ccdb58-5bm4d" Feb 9 19:18:57.444487 env[1308]: time="2024-02-09T19:18:57.444339434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-5bm4d,Uid:6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:57.484463 env[1308]: time="2024-02-09T19:18:57.484375140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:57.484984 env[1308]: time="2024-02-09T19:18:57.484420341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:57.484984 env[1308]: time="2024-02-09T19:18:57.484434441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:57.484984 env[1308]: time="2024-02-09T19:18:57.484568141Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756 pid=2473 runtime=io.containerd.runc.v2 Feb 9 19:18:57.499541 systemd[1]: Started cri-containerd-dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756.scope. Feb 9 19:18:57.527271 env[1308]: time="2024-02-09T19:18:57.527207761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q56q6,Uid:c71d0007-8e5b-47b5-bc78-b67b86ef9daa,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:57.540171 env[1308]: time="2024-02-09T19:18:57.540132627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jr42,Uid:2898c470-7495-4f3a-9daf-fecbbd553b97,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:57.547418 env[1308]: time="2024-02-09T19:18:57.547382565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-5bm4d,Uid:6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756\"" Feb 9 19:18:57.549579 env[1308]: time="2024-02-09T19:18:57.549548376Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:18:57.567346 env[1308]: time="2024-02-09T19:18:57.567043066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:57.567346 env[1308]: time="2024-02-09T19:18:57.567094166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:57.567346 env[1308]: time="2024-02-09T19:18:57.567103966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:57.567758 env[1308]: time="2024-02-09T19:18:57.567689069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79131a414c3d76b58a4723f4f7859f853f4c408b718f2efa98381071c25d1f40 pid=2513 runtime=io.containerd.runc.v2 Feb 9 19:18:57.584429 systemd[1]: Started cri-containerd-79131a414c3d76b58a4723f4f7859f853f4c408b718f2efa98381071c25d1f40.scope. Feb 9 19:18:57.591227 env[1308]: time="2024-02-09T19:18:57.590600387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:57.591227 env[1308]: time="2024-02-09T19:18:57.590644087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:57.591227 env[1308]: time="2024-02-09T19:18:57.590660087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:57.591227 env[1308]: time="2024-02-09T19:18:57.590901689Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd pid=2542 runtime=io.containerd.runc.v2 Feb 9 19:18:57.614960 systemd[1]: Started cri-containerd-5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd.scope. 
Feb 9 19:18:57.639297 env[1308]: time="2024-02-09T19:18:57.639242537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q56q6,Uid:c71d0007-8e5b-47b5-bc78-b67b86ef9daa,Namespace:kube-system,Attempt:0,} returns sandbox id \"79131a414c3d76b58a4723f4f7859f853f4c408b718f2efa98381071c25d1f40\"" Feb 9 19:18:57.646175 env[1308]: time="2024-02-09T19:18:57.646130773Z" level=info msg="CreateContainer within sandbox \"79131a414c3d76b58a4723f4f7859f853f4c408b718f2efa98381071c25d1f40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:18:57.661821 env[1308]: time="2024-02-09T19:18:57.661770453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jr42,Uid:2898c470-7495-4f3a-9daf-fecbbd553b97,Namespace:kube-system,Attempt:0,} returns sandbox id \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\"" Feb 9 19:18:57.687746 env[1308]: time="2024-02-09T19:18:57.687695187Z" level=info msg="CreateContainer within sandbox \"79131a414c3d76b58a4723f4f7859f853f4c408b718f2efa98381071c25d1f40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"799c861cd38e701d6f3152848910aedfa491995255cc515f9014179be89a614a\"" Feb 9 19:18:57.688457 env[1308]: time="2024-02-09T19:18:57.688423890Z" level=info msg="StartContainer for \"799c861cd38e701d6f3152848910aedfa491995255cc515f9014179be89a614a\"" Feb 9 19:18:57.712982 systemd[1]: Started cri-containerd-799c861cd38e701d6f3152848910aedfa491995255cc515f9014179be89a614a.scope. Feb 9 19:18:57.754772 env[1308]: time="2024-02-09T19:18:57.754715632Z" level=info msg="StartContainer for \"799c861cd38e701d6f3152848910aedfa491995255cc515f9014179be89a614a\" returns successfully" Feb 9 19:18:59.100320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477271454.mount: Deactivated successfully. 
Feb 9 19:18:59.986852 env[1308]: time="2024-02-09T19:18:59.986799178Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:59.993546 env[1308]: time="2024-02-09T19:18:59.993501011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:59.997387 env[1308]: time="2024-02-09T19:18:59.997352530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:59.997821 env[1308]: time="2024-02-09T19:18:59.997785632Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:18:59.999606 env[1308]: time="2024-02-09T19:18:59.999569941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:19:00.000919 env[1308]: time="2024-02-09T19:19:00.000887147Z" level=info msg="CreateContainer within sandbox \"dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:19:00.052436 env[1308]: time="2024-02-09T19:19:00.052380095Z" level=info msg="CreateContainer within sandbox \"dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\"" Feb 9 19:19:00.055431 env[1308]: time="2024-02-09T19:19:00.053372800Z" level=info msg="StartContainer for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\"" Feb 9 19:19:00.077906 systemd[1]: Started cri-containerd-f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31.scope. Feb 9 19:19:00.117368 env[1308]: time="2024-02-09T19:19:00.117316807Z" level=info msg="StartContainer for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" returns successfully" Feb 9 19:19:00.461393 kubelet[2390]: I0209 19:19:00.461352 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q56q6" podStartSLOduration=4.461209961 podCreationTimestamp="2024-02-09 19:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:58.392064766 +0000 UTC m=+13.228064142" watchObservedRunningTime="2024-02-09 19:19:00.461209961 +0000 UTC m=+15.297209337" Feb 9 19:19:05.318897 kubelet[2390]: I0209 19:19:05.318858 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-5bm4d" podStartSLOduration=5.869103689 podCreationTimestamp="2024-02-09 19:18:57 +0000 UTC" firstStartedPulling="2024-02-09 19:18:57.548721072 +0000 UTC m=+12.384720548" lastFinishedPulling="2024-02-09 19:18:59.998431035 +0000 UTC m=+14.834430411" observedRunningTime="2024-02-09 19:19:00.462386067 +0000 UTC m=+15.298385543" watchObservedRunningTime="2024-02-09 19:19:05.318813552 +0000 UTC m=+20.154812928" Feb 9 19:19:05.862037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974652108.mount: Deactivated successfully. 
Feb 9 19:19:08.602353 env[1308]: time="2024-02-09T19:19:08.602289103Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:19:08.610104 env[1308]: time="2024-02-09T19:19:08.610017334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:19:08.617169 env[1308]: time="2024-02-09T19:19:08.617116763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:19:08.617962 env[1308]: time="2024-02-09T19:19:08.617919667Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:19:08.622562 env[1308]: time="2024-02-09T19:19:08.622522185Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:19:08.650848 env[1308]: time="2024-02-09T19:19:08.650785200Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\"" Feb 9 19:19:08.653584 env[1308]: time="2024-02-09T19:19:08.653495111Z" level=info msg="StartContainer for \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\"" Feb 9 19:19:08.677504 systemd[1]: Started 
cri-containerd-83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9.scope. Feb 9 19:19:08.685443 systemd[1]: run-containerd-runc-k8s.io-83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9-runc.EWAyOO.mount: Deactivated successfully. Feb 9 19:19:08.728131 systemd[1]: cri-containerd-83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9.scope: Deactivated successfully. Feb 9 19:19:08.729649 env[1308]: time="2024-02-09T19:19:08.729601120Z" level=info msg="StartContainer for \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\" returns successfully" Feb 9 19:19:09.640798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9-rootfs.mount: Deactivated successfully. Feb 9 19:19:12.872650 env[1308]: time="2024-02-09T19:19:12.872581475Z" level=info msg="shim disconnected" id=83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9 Feb 9 19:19:12.872650 env[1308]: time="2024-02-09T19:19:12.872647675Z" level=warning msg="cleaning up after shim disconnected" id=83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9 namespace=k8s.io Feb 9 19:19:12.872650 env[1308]: time="2024-02-09T19:19:12.872661475Z" level=info msg="cleaning up dead shim" Feb 9 19:19:12.881718 env[1308]: time="2024-02-09T19:19:12.881673109Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2832 runtime=io.containerd.runc.v2\n" Feb 9 19:19:13.433280 env[1308]: time="2024-02-09T19:19:13.432454241Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:19:13.471132 env[1308]: time="2024-02-09T19:19:13.471075282Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\"" Feb 9 19:19:13.471729 env[1308]: time="2024-02-09T19:19:13.471689585Z" level=info msg="StartContainer for \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\"" Feb 9 19:19:13.498911 systemd[1]: run-containerd-runc-k8s.io-3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e-runc.gQA25Z.mount: Deactivated successfully. Feb 9 19:19:13.502615 systemd[1]: Started cri-containerd-3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e.scope. Feb 9 19:19:13.543263 env[1308]: time="2024-02-09T19:19:13.543034547Z" level=info msg="StartContainer for \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\" returns successfully" Feb 9 19:19:13.550974 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:19:13.551703 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:19:13.551962 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:19:13.555753 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:19:13.563112 systemd[1]: cri-containerd-3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e.scope: Deactivated successfully. Feb 9 19:19:13.568964 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:19:13.596474 env[1308]: time="2024-02-09T19:19:13.596386543Z" level=info msg="shim disconnected" id=3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e Feb 9 19:19:13.596474 env[1308]: time="2024-02-09T19:19:13.596447243Z" level=warning msg="cleaning up after shim disconnected" id=3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e namespace=k8s.io Feb 9 19:19:13.596474 env[1308]: time="2024-02-09T19:19:13.596460743Z" level=info msg="cleaning up dead shim" Feb 9 19:19:13.605301 env[1308]: time="2024-02-09T19:19:13.605258675Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2898 runtime=io.containerd.runc.v2\n" Feb 9 19:19:14.434365 env[1308]: time="2024-02-09T19:19:14.434298391Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:19:14.455286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e-rootfs.mount: Deactivated successfully. Feb 9 19:19:14.482685 env[1308]: time="2024-02-09T19:19:14.482631065Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\"" Feb 9 19:19:14.485076 env[1308]: time="2024-02-09T19:19:14.483407068Z" level=info msg="StartContainer for \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\"" Feb 9 19:19:14.508422 systemd[1]: Started cri-containerd-eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed.scope. Feb 9 19:19:14.554177 systemd[1]: cri-containerd-eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed.scope: Deactivated successfully. 
Feb 9 19:19:14.558221 env[1308]: time="2024-02-09T19:19:14.558169437Z" level=info msg="StartContainer for \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\" returns successfully" Feb 9 19:19:14.592320 env[1308]: time="2024-02-09T19:19:14.592264760Z" level=info msg="shim disconnected" id=eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed Feb 9 19:19:14.592320 env[1308]: time="2024-02-09T19:19:14.592318960Z" level=warning msg="cleaning up after shim disconnected" id=eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed namespace=k8s.io Feb 9 19:19:14.592320 env[1308]: time="2024-02-09T19:19:14.592333460Z" level=info msg="cleaning up dead shim" Feb 9 19:19:14.603364 env[1308]: time="2024-02-09T19:19:14.603316900Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2956 runtime=io.containerd.runc.v2\n" Feb 9 19:19:15.444222 env[1308]: time="2024-02-09T19:19:15.443132897Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:19:15.455253 systemd[1]: run-containerd-runc-k8s.io-eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed-runc.wQvcQd.mount: Deactivated successfully. Feb 9 19:19:15.455380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed-rootfs.mount: Deactivated successfully. 
Feb 9 19:19:15.485814 env[1308]: time="2024-02-09T19:19:15.485757648Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\""
Feb 9 19:19:15.486612 env[1308]: time="2024-02-09T19:19:15.486573350Z" level=info msg="StartContainer for \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\""
Feb 9 19:19:15.517606 systemd[1]: Started cri-containerd-beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86.scope.
Feb 9 19:19:15.557016 systemd[1]: cri-containerd-beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86.scope: Deactivated successfully.
Feb 9 19:19:15.563340 env[1308]: time="2024-02-09T19:19:15.563286022Z" level=info msg="StartContainer for \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\" returns successfully"
Feb 9 19:19:15.595110 env[1308]: time="2024-02-09T19:19:15.594930634Z" level=info msg="shim disconnected" id=beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86
Feb 9 19:19:15.595110 env[1308]: time="2024-02-09T19:19:15.595102034Z" level=warning msg="cleaning up after shim disconnected" id=beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86 namespace=k8s.io
Feb 9 19:19:15.595395 env[1308]: time="2024-02-09T19:19:15.595124634Z" level=info msg="cleaning up dead shim"
Feb 9 19:19:15.604545 env[1308]: time="2024-02-09T19:19:15.604505068Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3012 runtime=io.containerd.runc.v2\n"
Feb 9 19:19:16.449273 env[1308]: time="2024-02-09T19:19:16.447954221Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:19:16.455228 systemd[1]: run-containerd-runc-k8s.io-beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86-runc.cu5p1X.mount: Deactivated successfully.
Feb 9 19:19:16.455389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86-rootfs.mount: Deactivated successfully.
Feb 9 19:19:16.489867 env[1308]: time="2024-02-09T19:19:16.489813167Z" level=info msg="CreateContainer within sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\""
Feb 9 19:19:16.490659 env[1308]: time="2024-02-09T19:19:16.490621769Z" level=info msg="StartContainer for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\""
Feb 9 19:19:16.514642 systemd[1]: Started cri-containerd-d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7.scope.
Feb 9 19:19:16.567955 env[1308]: time="2024-02-09T19:19:16.567890638Z" level=info msg="StartContainer for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" returns successfully"
Feb 9 19:19:16.742006 kubelet[2390]: I0209 19:19:16.741162 2390 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:19:16.773473 kubelet[2390]: I0209 19:19:16.773419 2390 topology_manager.go:215] "Topology Admit Handler" podUID="0669b516-d8ae-44e7-9353-af5e89c90e89" podNamespace="kube-system" podName="coredns-5dd5756b68-xcx7p"
Feb 9 19:19:16.781984 systemd[1]: Created slice kubepods-burstable-pod0669b516_d8ae_44e7_9353_af5e89c90e89.slice.
Feb 9 19:19:16.793138 kubelet[2390]: I0209 19:19:16.793105 2390 topology_manager.go:215] "Topology Admit Handler" podUID="022b63e9-96c5-4339-a197-bbf4f4408afa" podNamespace="kube-system" podName="coredns-5dd5756b68-tb4gz"
Feb 9 19:19:16.801111 systemd[1]: Created slice kubepods-burstable-pod022b63e9_96c5_4339_a197_bbf4f4408afa.slice.
Feb 9 19:19:16.839125 kubelet[2390]: I0209 19:19:16.839077 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0669b516-d8ae-44e7-9353-af5e89c90e89-config-volume\") pod \"coredns-5dd5756b68-xcx7p\" (UID: \"0669b516-d8ae-44e7-9353-af5e89c90e89\") " pod="kube-system/coredns-5dd5756b68-xcx7p"
Feb 9 19:19:16.839499 kubelet[2390]: I0209 19:19:16.839474 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7z86\" (UniqueName: \"kubernetes.io/projected/0669b516-d8ae-44e7-9353-af5e89c90e89-kube-api-access-g7z86\") pod \"coredns-5dd5756b68-xcx7p\" (UID: \"0669b516-d8ae-44e7-9353-af5e89c90e89\") " pod="kube-system/coredns-5dd5756b68-xcx7p"
Feb 9 19:19:16.839665 kubelet[2390]: I0209 19:19:16.839650 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/022b63e9-96c5-4339-a197-bbf4f4408afa-config-volume\") pod \"coredns-5dd5756b68-tb4gz\" (UID: \"022b63e9-96c5-4339-a197-bbf4f4408afa\") " pod="kube-system/coredns-5dd5756b68-tb4gz"
Feb 9 19:19:16.839827 kubelet[2390]: I0209 19:19:16.839806 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfpk\" (UniqueName: \"kubernetes.io/projected/022b63e9-96c5-4339-a197-bbf4f4408afa-kube-api-access-hrfpk\") pod \"coredns-5dd5756b68-tb4gz\" (UID: \"022b63e9-96c5-4339-a197-bbf4f4408afa\") " pod="kube-system/coredns-5dd5756b68-tb4gz"
Feb 9 19:19:17.086274 env[1308]: time="2024-02-09T19:19:17.086189632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xcx7p,Uid:0669b516-d8ae-44e7-9353-af5e89c90e89,Namespace:kube-system,Attempt:0,}"
Feb 9 19:19:17.106305 env[1308]: time="2024-02-09T19:19:17.105794498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tb4gz,Uid:022b63e9-96c5-4339-a197-bbf4f4408afa,Namespace:kube-system,Attempt:0,}"
Feb 9 19:19:18.903623 systemd-networkd[1452]: cilium_host: Link UP
Feb 9 19:19:18.906007 systemd-networkd[1452]: cilium_net: Link UP
Feb 9 19:19:18.917504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 19:19:18.917574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:19:18.924609 systemd-networkd[1452]: cilium_net: Gained carrier
Feb 9 19:19:18.924881 systemd-networkd[1452]: cilium_host: Gained carrier
Feb 9 19:19:19.077356 systemd-networkd[1452]: cilium_host: Gained IPv6LL
Feb 9 19:19:19.185955 systemd-networkd[1452]: cilium_vxlan: Link UP
Feb 9 19:19:19.185969 systemd-networkd[1452]: cilium_vxlan: Gained carrier
Feb 9 19:19:19.491333 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:19:19.725444 systemd-networkd[1452]: cilium_net: Gained IPv6LL
Feb 9 19:19:20.232457 systemd-networkd[1452]: lxc_health: Link UP
Feb 9 19:19:20.250294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:19:20.251026 systemd-networkd[1452]: lxc_health: Gained carrier
Feb 9 19:19:20.642906 systemd-networkd[1452]: lxc275f9dbf4e62: Link UP
Feb 9 19:19:20.654272 kernel: eth0: renamed from tmp796e3
Feb 9 19:19:20.667390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc275f9dbf4e62: link becomes ready
Feb 9 19:19:20.671370 systemd-networkd[1452]: lxc275f9dbf4e62: Gained carrier
Feb 9 19:19:20.689661 systemd-networkd[1452]: lxc02f204e7df26: Link UP
Feb 9 19:19:20.696321 kernel: eth0: renamed from tmp03208
Feb 9 19:19:20.720522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc02f204e7df26: link becomes ready
Feb 9 19:19:20.721197 systemd-networkd[1452]: lxc02f204e7df26: Gained carrier
Feb 9 19:19:20.813544 systemd-networkd[1452]: cilium_vxlan: Gained IPv6LL
Feb 9 19:19:21.517529 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Feb 9 19:19:21.565355 kubelet[2390]: I0209 19:19:21.565319 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7jr42" podStartSLOduration=14.609925598 podCreationTimestamp="2024-02-09 19:18:56 +0000 UTC" firstStartedPulling="2024-02-09 19:18:57.66297336 +0000 UTC m=+12.498972736" lastFinishedPulling="2024-02-09 19:19:08.618318968 +0000 UTC m=+23.454318444" observedRunningTime="2024-02-09 19:19:17.479612672 +0000 UTC m=+32.315612148" watchObservedRunningTime="2024-02-09 19:19:21.565271306 +0000 UTC m=+36.401270682"
Feb 9 19:19:22.029533 systemd-networkd[1452]: lxc275f9dbf4e62: Gained IPv6LL
Feb 9 19:19:22.733524 systemd-networkd[1452]: lxc02f204e7df26: Gained IPv6LL
Feb 9 19:19:24.837962 env[1308]: time="2024-02-09T19:19:24.837891204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:19:24.838493 env[1308]: time="2024-02-09T19:19:24.838459005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:19:24.838636 env[1308]: time="2024-02-09T19:19:24.838610806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:19:24.838989 env[1308]: time="2024-02-09T19:19:24.838950707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/796e34c605215f3fdeedef3cd3170fecfb9afe60911966ad4069419f727010a9 pid=3568 runtime=io.containerd.runc.v2
Feb 9 19:19:24.853532 env[1308]: time="2024-02-09T19:19:24.853448951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:19:24.853782 env[1308]: time="2024-02-09T19:19:24.853742652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:19:24.853936 env[1308]: time="2024-02-09T19:19:24.853910352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:19:24.854286 env[1308]: time="2024-02-09T19:19:24.854226453Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03208c6e2bd2d2bf940eb6a97e5bf7d0b40595a73f66620dce02c0e3a0913a72 pid=3585 runtime=io.containerd.runc.v2
Feb 9 19:19:24.882841 systemd[1]: Started cri-containerd-03208c6e2bd2d2bf940eb6a97e5bf7d0b40595a73f66620dce02c0e3a0913a72.scope.
Feb 9 19:19:24.884955 systemd[1]: run-containerd-runc-k8s.io-03208c6e2bd2d2bf940eb6a97e5bf7d0b40595a73f66620dce02c0e3a0913a72-runc.CQj1zv.mount: Deactivated successfully.
Feb 9 19:19:24.907348 systemd[1]: Started cri-containerd-796e34c605215f3fdeedef3cd3170fecfb9afe60911966ad4069419f727010a9.scope.
Feb 9 19:19:24.964201 env[1308]: time="2024-02-09T19:19:24.964147785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tb4gz,Uid:022b63e9-96c5-4339-a197-bbf4f4408afa,Namespace:kube-system,Attempt:0,} returns sandbox id \"03208c6e2bd2d2bf940eb6a97e5bf7d0b40595a73f66620dce02c0e3a0913a72\""
Feb 9 19:19:24.989463 env[1308]: time="2024-02-09T19:19:24.989410761Z" level=info msg="CreateContainer within sandbox \"03208c6e2bd2d2bf940eb6a97e5bf7d0b40595a73f66620dce02c0e3a0913a72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:19:25.024060 env[1308]: time="2024-02-09T19:19:25.023676263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xcx7p,Uid:0669b516-d8ae-44e7-9353-af5e89c90e89,Namespace:kube-system,Attempt:0,} returns sandbox id \"796e34c605215f3fdeedef3cd3170fecfb9afe60911966ad4069419f727010a9\""
Feb 9 19:19:25.024220 env[1308]: time="2024-02-09T19:19:25.024135865Z" level=info msg="CreateContainer within sandbox \"03208c6e2bd2d2bf940eb6a97e5bf7d0b40595a73f66620dce02c0e3a0913a72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52fdc01b6c8e6961f9c72dda0fa68dcace7278c89b98c3a0c4b91a428aa1a0a1\""
Feb 9 19:19:25.027156 env[1308]: time="2024-02-09T19:19:25.027102973Z" level=info msg="CreateContainer within sandbox \"796e34c605215f3fdeedef3cd3170fecfb9afe60911966ad4069419f727010a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:19:25.029479 env[1308]: time="2024-02-09T19:19:25.029446480Z" level=info msg="StartContainer for \"52fdc01b6c8e6961f9c72dda0fa68dcace7278c89b98c3a0c4b91a428aa1a0a1\""
Feb 9 19:19:25.060358 systemd[1]: Started cri-containerd-52fdc01b6c8e6961f9c72dda0fa68dcace7278c89b98c3a0c4b91a428aa1a0a1.scope.
Feb 9 19:19:25.071103 env[1308]: time="2024-02-09T19:19:25.071031104Z" level=info msg="CreateContainer within sandbox \"796e34c605215f3fdeedef3cd3170fecfb9afe60911966ad4069419f727010a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bf08b56289a82cb9d3e096ef1475b5f6f179323b169da2cbef6d8bf1601eaaa\""
Feb 9 19:19:25.071849 env[1308]: time="2024-02-09T19:19:25.071811506Z" level=info msg="StartContainer for \"8bf08b56289a82cb9d3e096ef1475b5f6f179323b169da2cbef6d8bf1601eaaa\""
Feb 9 19:19:25.107159 systemd[1]: Started cri-containerd-8bf08b56289a82cb9d3e096ef1475b5f6f179323b169da2cbef6d8bf1601eaaa.scope.
Feb 9 19:19:25.147927 env[1308]: time="2024-02-09T19:19:25.147867732Z" level=info msg="StartContainer for \"52fdc01b6c8e6961f9c72dda0fa68dcace7278c89b98c3a0c4b91a428aa1a0a1\" returns successfully"
Feb 9 19:19:25.195102 env[1308]: time="2024-02-09T19:19:25.195046472Z" level=info msg="StartContainer for \"8bf08b56289a82cb9d3e096ef1475b5f6f179323b169da2cbef6d8bf1601eaaa\" returns successfully"
Feb 9 19:19:25.502732 kubelet[2390]: I0209 19:19:25.502615 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xcx7p" podStartSLOduration=28.502567685 podCreationTimestamp="2024-02-09 19:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:19:25.501600182 +0000 UTC m=+40.337599558" watchObservedRunningTime="2024-02-09 19:19:25.502567685 +0000 UTC m=+40.338567061"
Feb 9 19:19:25.502732 kubelet[2390]: I0209 19:19:25.502712 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tb4gz" podStartSLOduration=28.502690485 podCreationTimestamp="2024-02-09 19:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:19:25.487015639 +0000 UTC m=+40.323015115" watchObservedRunningTime="2024-02-09 19:19:25.502690485 +0000 UTC m=+40.338689861"
Feb 9 19:22:11.728160 update_engine[1294]: I0209 19:22:11.728094 1294 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 9 19:22:11.728160 update_engine[1294]: I0209 19:22:11.728149 1294 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.728363 1294 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729182 1294 omaha_request_params.cc:62] Current group set to lts
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729382 1294 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729391 1294 update_attempter.cc:643] Scheduling an action processor start.
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729414 1294 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729451 1294 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729533 1294 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729542 1294 omaha_request_action.cc:271] Request:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]:
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.729548 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.731292 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:22:11.732562 update_engine[1294]: I0209 19:22:11.731544 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:22:11.733002 locksmithd[1384]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 9 19:22:11.753216 update_engine[1294]: E0209 19:22:11.753168 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:22:11.753444 update_engine[1294]: I0209 19:22:11.753350 1294 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 9 19:22:21.636978 update_engine[1294]: I0209 19:22:21.636914 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:22:21.637550 update_engine[1294]: I0209 19:22:21.637290 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:22:21.637627 update_engine[1294]: I0209 19:22:21.637595 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:22:21.668389 update_engine[1294]: E0209 19:22:21.668335 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:22:21.668599 update_engine[1294]: I0209 19:22:21.668510 1294 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 9 19:22:31.635322 update_engine[1294]: I0209 19:22:31.635219 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:22:31.635790 update_engine[1294]: I0209 19:22:31.635563 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:22:31.635847 update_engine[1294]: I0209 19:22:31.635836 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:22:31.653148 update_engine[1294]: E0209 19:22:31.653095 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:22:31.653367 update_engine[1294]: I0209 19:22:31.653277 1294 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 9 19:22:41.642962 update_engine[1294]: I0209 19:22:41.642899 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:22:41.643528 update_engine[1294]: I0209 19:22:41.643205 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:22:41.643590 update_engine[1294]: I0209 19:22:41.643537 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:22:41.665224 update_engine[1294]: E0209 19:22:41.665164 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:22:41.665484 update_engine[1294]: I0209 19:22:41.665359 1294 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:22:41.665484 update_engine[1294]: I0209 19:22:41.665374 1294 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:22:41.665484 update_engine[1294]: E0209 19:22:41.665482 1294 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 19:22:41.665643 update_engine[1294]: I0209 19:22:41.665501 1294 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 19:22:41.665643 update_engine[1294]: I0209 19:22:41.665508 1294 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:22:41.665643 update_engine[1294]: I0209 19:22:41.665513 1294 update_attempter.cc:306] Processing Done.
Feb 9 19:22:41.665643 update_engine[1294]: E0209 19:22:41.665532 1294 update_attempter.cc:619] Update failed.
Feb 9 19:22:41.665643 update_engine[1294]: I0209 19:22:41.665539 1294 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 19:22:41.665643 update_engine[1294]: I0209 19:22:41.665545 1294 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 19:22:41.665643 update_engine[1294]: I0209 19:22:41.665552 1294 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 19:22:41.666011 update_engine[1294]: I0209 19:22:41.665680 1294 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 19:22:41.666011 update_engine[1294]: I0209 19:22:41.665710 1294 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 19:22:41.666011 update_engine[1294]: I0209 19:22:41.665717 1294 omaha_request_action.cc:271] Request:
Feb 9 19:22:41.666011 update_engine[1294]:
Feb 9 19:22:41.666011 update_engine[1294]:
Feb 9 19:22:41.666011 update_engine[1294]:
Feb 9 19:22:41.666011 update_engine[1294]:
Feb 9 19:22:41.666011 update_engine[1294]:
Feb 9 19:22:41.666011 update_engine[1294]:
Feb 9 19:22:41.666011 update_engine[1294]: I0209 19:22:41.665724 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:22:41.666011 update_engine[1294]: I0209 19:22:41.665942 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:22:41.667147 update_engine[1294]: I0209 19:22:41.666795 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:22:41.667413 locksmithd[1384]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 19:22:41.680474 update_engine[1294]: E0209 19:22:41.680438 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680567 1294 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680581 1294 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680590 1294 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680594 1294 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680598 1294 update_attempter.cc:306] Processing Done.
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680606 1294 update_attempter.cc:310] Error event sent.
Feb 9 19:22:41.680625 update_engine[1294]: I0209 19:22:41.680616 1294 update_check_scheduler.cc:74] Next update check in 43m43s
Feb 9 19:22:41.681052 locksmithd[1384]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 19:27:15.501012 systemd[1]: Started sshd@5-10.200.8.14:22-10.200.12.6:54316.service.
Feb 9 19:27:16.133982 sshd[3781]: Accepted publickey for core from 10.200.12.6 port 54316 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:16.135623 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:16.140217 systemd-logind[1293]: New session 8 of user core.
Feb 9 19:27:16.142537 systemd[1]: Started session-8.scope.
Feb 9 19:27:16.746118 sshd[3781]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:16.749259 systemd[1]: sshd@5-10.200.8.14:22-10.200.12.6:54316.service: Deactivated successfully.
Feb 9 19:27:16.750245 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:27:16.751038 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:27:16.751926 systemd-logind[1293]: Removed session 8.
Feb 9 19:27:21.853120 systemd[1]: Started sshd@6-10.200.8.14:22-10.200.12.6:49642.service.
Feb 9 19:27:22.472964 sshd[3795]: Accepted publickey for core from 10.200.12.6 port 49642 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:22.474743 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:22.484084 systemd[1]: Started session-9.scope.
Feb 9 19:27:22.484797 systemd-logind[1293]: New session 9 of user core.
Feb 9 19:27:22.989188 sshd[3795]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:22.992339 systemd[1]: sshd@6-10.200.8.14:22-10.200.12.6:49642.service: Deactivated successfully.
Feb 9 19:27:22.993446 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 19:27:22.994179 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit.
Feb 9 19:27:22.995100 systemd-logind[1293]: Removed session 9.
Feb 9 19:27:28.094745 systemd[1]: Started sshd@7-10.200.8.14:22-10.200.12.6:56718.service.
Feb 9 19:27:28.714924 sshd[3813]: Accepted publickey for core from 10.200.12.6 port 56718 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:28.716527 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:28.722073 systemd[1]: Started session-10.scope.
Feb 9 19:27:28.722549 systemd-logind[1293]: New session 10 of user core.
Feb 9 19:27:29.221982 sshd[3813]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:29.225192 systemd[1]: sshd@7-10.200.8.14:22-10.200.12.6:56718.service: Deactivated successfully.
Feb 9 19:27:29.226279 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 19:27:29.227040 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit.
Feb 9 19:27:29.228052 systemd-logind[1293]: Removed session 10.
Feb 9 19:27:34.327937 systemd[1]: Started sshd@8-10.200.8.14:22-10.200.12.6:56720.service.
Feb 9 19:27:34.942881 sshd[3826]: Accepted publickey for core from 10.200.12.6 port 56720 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:34.944467 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:34.951004 systemd[1]: Started session-11.scope.
Feb 9 19:27:34.952094 systemd-logind[1293]: New session 11 of user core.
Feb 9 19:27:35.438290 sshd[3826]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:35.441773 systemd[1]: sshd@8-10.200.8.14:22-10.200.12.6:56720.service: Deactivated successfully.
Feb 9 19:27:35.442972 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 19:27:35.443947 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit.
Feb 9 19:27:35.444975 systemd-logind[1293]: Removed session 11.
Feb 9 19:27:35.548370 systemd[1]: Started sshd@9-10.200.8.14:22-10.200.12.6:56722.service.
Feb 9 19:27:36.177361 sshd[3842]: Accepted publickey for core from 10.200.12.6 port 56722 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:36.179147 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:36.184293 systemd-logind[1293]: New session 12 of user core.
Feb 9 19:27:36.185448 systemd[1]: Started session-12.scope.
Feb 9 19:27:37.413653 sshd[3842]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:37.417267 systemd[1]: sshd@9-10.200.8.14:22-10.200.12.6:56722.service: Deactivated successfully.
Feb 9 19:27:37.421444 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 19:27:37.422347 systemd-logind[1293]: Session 12 logged out. Waiting for processes to exit.
Feb 9 19:27:37.423201 systemd-logind[1293]: Removed session 12.
Feb 9 19:27:37.517758 systemd[1]: Started sshd@10-10.200.8.14:22-10.200.12.6:43184.service.
Feb 9 19:27:38.132983 sshd[3852]: Accepted publickey for core from 10.200.12.6 port 43184 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:38.134595 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:38.140295 systemd[1]: Started session-13.scope.
Feb 9 19:27:38.141357 systemd-logind[1293]: New session 13 of user core.
Feb 9 19:27:38.627190 sshd[3852]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:38.630571 systemd[1]: sshd@10-10.200.8.14:22-10.200.12.6:43184.service: Deactivated successfully.
Feb 9 19:27:38.631641 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 19:27:38.632459 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit.
Feb 9 19:27:38.633321 systemd-logind[1293]: Removed session 13.
Feb 9 19:27:43.735395 systemd[1]: Started sshd@11-10.200.8.14:22-10.200.12.6:43188.service.
Feb 9 19:27:44.360149 sshd[3867]: Accepted publickey for core from 10.200.12.6 port 43188 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:44.361912 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:44.365962 systemd-logind[1293]: New session 14 of user core.
Feb 9 19:27:44.367959 systemd[1]: Started session-14.scope.
Feb 9 19:27:44.859455 sshd[3867]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:44.862685 systemd[1]: sshd@11-10.200.8.14:22-10.200.12.6:43188.service: Deactivated successfully.
Feb 9 19:27:44.863739 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 19:27:44.864497 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit.
Feb 9 19:27:44.865353 systemd-logind[1293]: Removed session 14.
Feb 9 19:27:49.966517 systemd[1]: Started sshd@12-10.200.8.14:22-10.200.12.6:33254.service.
Feb 9 19:27:50.591275 sshd[3882]: Accepted publickey for core from 10.200.12.6 port 33254 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:50.592211 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:50.598276 systemd[1]: Started session-15.scope.
Feb 9 19:27:50.598742 systemd-logind[1293]: New session 15 of user core.
Feb 9 19:27:51.090973 sshd[3882]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:51.094804 systemd[1]: sshd@12-10.200.8.14:22-10.200.12.6:33254.service: Deactivated successfully.
Feb 9 19:27:51.096323 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 19:27:51.097439 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit.
Feb 9 19:27:51.098820 systemd-logind[1293]: Removed session 15.
Feb 9 19:27:51.195524 systemd[1]: Started sshd@13-10.200.8.14:22-10.200.12.6:33266.service.
Feb 9 19:27:51.820100 sshd[3895]: Accepted publickey for core from 10.200.12.6 port 33266 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:51.822847 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:51.828305 systemd-logind[1293]: New session 16 of user core.
Feb 9 19:27:51.828347 systemd[1]: Started session-16.scope.
Feb 9 19:27:52.392833 sshd[3895]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:52.396711 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit.
Feb 9 19:27:52.396961 systemd[1]: sshd@13-10.200.8.14:22-10.200.12.6:33266.service: Deactivated successfully.
Feb 9 19:27:52.398033 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 19:27:52.398945 systemd-logind[1293]: Removed session 16.
Feb 9 19:27:52.500251 systemd[1]: Started sshd@14-10.200.8.14:22-10.200.12.6:33278.service.
Feb 9 19:27:53.126031 sshd[3904]: Accepted publickey for core from 10.200.12.6 port 33278 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:53.127588 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:53.132868 systemd-logind[1293]: New session 17 of user core.
Feb 9 19:27:53.133429 systemd[1]: Started session-17.scope.
Feb 9 19:27:54.628307 sshd[3904]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:54.632630 systemd[1]: sshd@14-10.200.8.14:22-10.200.12.6:33278.service: Deactivated successfully.
Feb 9 19:27:54.633864 systemd-logind[1293]: Session 17 logged out. Waiting for processes to exit.
Feb 9 19:27:54.633941 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 19:27:54.635332 systemd-logind[1293]: Removed session 17.
Feb 9 19:27:54.732103 systemd[1]: Started sshd@15-10.200.8.14:22-10.200.12.6:33294.service.
Feb 9 19:27:55.354137 sshd[3921]: Accepted publickey for core from 10.200.12.6 port 33294 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:55.355706 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:55.361078 systemd[1]: Started session-18.scope.
Feb 9 19:27:55.361735 systemd-logind[1293]: New session 18 of user core.
Feb 9 19:27:56.076162 sshd[3921]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:56.079565 systemd[1]: sshd@15-10.200.8.14:22-10.200.12.6:33294.service: Deactivated successfully.
Feb 9 19:27:56.081903 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 19:27:56.081912 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit.
Feb 9 19:27:56.084073 systemd-logind[1293]: Removed session 18.
Feb 9 19:27:56.180992 systemd[1]: Started sshd@16-10.200.8.14:22-10.200.12.6:33310.service.
Feb 9 19:27:56.801717 sshd[3931]: Accepted publickey for core from 10.200.12.6 port 33310 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:27:56.803467 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:56.813648 systemd[1]: Started session-19.scope.
Feb 9 19:27:56.814555 systemd-logind[1293]: New session 19 of user core.
Feb 9 19:27:57.297502 sshd[3931]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:57.300765 systemd[1]: sshd@16-10.200.8.14:22-10.200.12.6:33310.service: Deactivated successfully.
Feb 9 19:27:57.301743 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 19:27:57.302571 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit.
Feb 9 19:27:57.303531 systemd-logind[1293]: Removed session 19.
Feb 9 19:28:02.402954 systemd[1]: Started sshd@17-10.200.8.14:22-10.200.12.6:59182.service.
Feb 9 19:28:03.018508 sshd[3947]: Accepted publickey for core from 10.200.12.6 port 59182 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:03.020227 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:03.025745 systemd[1]: Started session-20.scope.
Feb 9 19:28:03.026217 systemd-logind[1293]: New session 20 of user core.
Feb 9 19:28:03.516271 sshd[3947]: pam_unix(sshd:session): session closed for user core
Feb 9 19:28:03.519923 systemd[1]: sshd@17-10.200.8.14:22-10.200.12.6:59182.service: Deactivated successfully.
Feb 9 19:28:03.521134 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 19:28:03.522396 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit.
Feb 9 19:28:03.523442 systemd-logind[1293]: Removed session 20.
Feb 9 19:28:08.623949 systemd[1]: Started sshd@18-10.200.8.14:22-10.200.12.6:48592.service.
Feb 9 19:28:09.256699 sshd[3959]: Accepted publickey for core from 10.200.12.6 port 48592 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:09.258209 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:09.267600 systemd[1]: Started session-21.scope.
Feb 9 19:28:09.268211 systemd-logind[1293]: New session 21 of user core.
Feb 9 19:28:09.759147 sshd[3959]: pam_unix(sshd:session): session closed for user core
Feb 9 19:28:09.762340 systemd[1]: sshd@18-10.200.8.14:22-10.200.12.6:48592.service: Deactivated successfully.
Feb 9 19:28:09.763429 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 19:28:09.764123 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit.
Feb 9 19:28:09.764982 systemd-logind[1293]: Removed session 21.
Feb 9 19:28:14.867171 systemd[1]: Started sshd@19-10.200.8.14:22-10.200.12.6:48602.service.
Feb 9 19:28:15.492091 sshd[3970]: Accepted publickey for core from 10.200.12.6 port 48602 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:15.493719 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:15.498911 systemd[1]: Started session-22.scope.
Feb 9 19:28:15.499534 systemd-logind[1293]: New session 22 of user core.
Feb 9 19:28:15.986941 sshd[3970]: pam_unix(sshd:session): session closed for user core
Feb 9 19:28:15.990177 systemd[1]: sshd@19-10.200.8.14:22-10.200.12.6:48602.service: Deactivated successfully.
Feb 9 19:28:15.991783 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 19:28:15.991827 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit.
Feb 9 19:28:15.993078 systemd-logind[1293]: Removed session 22.
Feb 9 19:28:16.091248 systemd[1]: Started sshd@20-10.200.8.14:22-10.200.12.6:48616.service.
Feb 9 19:28:16.710818 sshd[3982]: Accepted publickey for core from 10.200.12.6 port 48616 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:16.712880 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:16.718342 systemd[1]: Started session-23.scope.
Feb 9 19:28:16.719063 systemd-logind[1293]: New session 23 of user core.
Feb 9 19:28:18.682787 env[1308]: time="2024-02-09T19:28:18.680384928Z" level=info msg="StopContainer for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" with timeout 30 (s)"
Feb 9 19:28:18.681990 systemd[1]: run-containerd-runc-k8s.io-d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7-runc.XjmZNz.mount: Deactivated successfully.
Feb 9 19:28:18.683541 env[1308]: time="2024-02-09T19:28:18.683450657Z" level=info msg="Stop container \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" with signal terminated"
Feb 9 19:28:18.719756 systemd[1]: cri-containerd-f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31.scope: Deactivated successfully.
Feb 9 19:28:18.724116 env[1308]: time="2024-02-09T19:28:18.724043343Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:28:18.735921 env[1308]: time="2024-02-09T19:28:18.735872955Z" level=info msg="StopContainer for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" with timeout 2 (s)"
Feb 9 19:28:18.736344 env[1308]: time="2024-02-09T19:28:18.736307159Z" level=info msg="Stop container \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" with signal terminated"
Feb 9 19:28:18.748664 systemd-networkd[1452]: lxc_health: Link DOWN
Feb 9 19:28:18.749353 systemd-networkd[1452]: lxc_health: Lost carrier
Feb 9 19:28:18.755577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31-rootfs.mount: Deactivated successfully.
Feb 9 19:28:18.773647 systemd[1]: cri-containerd-d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7.scope: Deactivated successfully.
Feb 9 19:28:18.773958 systemd[1]: cri-containerd-d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7.scope: Consumed 8.961s CPU time.
Feb 9 19:28:18.798251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7-rootfs.mount: Deactivated successfully.
Feb 9 19:28:18.810769 env[1308]: time="2024-02-09T19:28:18.810712465Z" level=info msg="shim disconnected" id=f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31
Feb 9 19:28:18.810769 env[1308]: time="2024-02-09T19:28:18.810769466Z" level=warning msg="cleaning up after shim disconnected" id=f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31 namespace=k8s.io
Feb 9 19:28:18.810769 env[1308]: time="2024-02-09T19:28:18.810783166Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:18.823104 env[1308]: time="2024-02-09T19:28:18.821402367Z" level=info msg="shim disconnected" id=d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7
Feb 9 19:28:18.823104 env[1308]: time="2024-02-09T19:28:18.821566168Z" level=warning msg="cleaning up after shim disconnected" id=d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7 namespace=k8s.io
Feb 9 19:28:18.823104 env[1308]: time="2024-02-09T19:28:18.821585268Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:18.825108 env[1308]: time="2024-02-09T19:28:18.824318494Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4058 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:18.829851 env[1308]: time="2024-02-09T19:28:18.829802546Z" level=info msg="StopContainer for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" returns successfully"
Feb 9 19:28:18.830706 env[1308]: time="2024-02-09T19:28:18.830669855Z" level=info msg="StopPodSandbox for \"dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756\""
Feb 9 19:28:18.830804 env[1308]: time="2024-02-09T19:28:18.830753855Z" level=info msg="Container to stop \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:18.836811 env[1308]: time="2024-02-09T19:28:18.836778013Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4071 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:18.840974 systemd[1]: cri-containerd-dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756.scope: Deactivated successfully.
Feb 9 19:28:18.842436 env[1308]: time="2024-02-09T19:28:18.842403666Z" level=info msg="StopContainer for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" returns successfully"
Feb 9 19:28:18.843293 env[1308]: time="2024-02-09T19:28:18.843264474Z" level=info msg="StopPodSandbox for \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\""
Feb 9 19:28:18.843395 env[1308]: time="2024-02-09T19:28:18.843337475Z" level=info msg="Container to stop \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:18.843395 env[1308]: time="2024-02-09T19:28:18.843360275Z" level=info msg="Container to stop \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:18.843395 env[1308]: time="2024-02-09T19:28:18.843376175Z" level=info msg="Container to stop \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:18.843523 env[1308]: time="2024-02-09T19:28:18.843391475Z" level=info msg="Container to stop \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:18.843523 env[1308]: time="2024-02-09T19:28:18.843406776Z" level=info msg="Container to stop \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:18.855357 systemd[1]: cri-containerd-5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd.scope: Deactivated successfully.
Feb 9 19:28:18.890076 env[1308]: time="2024-02-09T19:28:18.890019918Z" level=info msg="shim disconnected" id=dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756
Feb 9 19:28:18.890343 env[1308]: time="2024-02-09T19:28:18.890085319Z" level=warning msg="cleaning up after shim disconnected" id=dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756 namespace=k8s.io
Feb 9 19:28:18.890343 env[1308]: time="2024-02-09T19:28:18.890098319Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:18.890507 env[1308]: time="2024-02-09T19:28:18.889806216Z" level=info msg="shim disconnected" id=5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd
Feb 9 19:28:18.890624 env[1308]: time="2024-02-09T19:28:18.890601723Z" level=warning msg="cleaning up after shim disconnected" id=5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd namespace=k8s.io
Feb 9 19:28:18.890726 env[1308]: time="2024-02-09T19:28:18.890709224Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:18.902392 env[1308]: time="2024-02-09T19:28:18.902341835Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:18.902997 env[1308]: time="2024-02-09T19:28:18.902957041Z" level=info msg="TearDown network for sandbox \"dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756\" successfully"
Feb 9 19:28:18.903140 env[1308]: time="2024-02-09T19:28:18.903115542Z" level=info msg="StopPodSandbox for \"dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756\" returns successfully"
Feb 9 19:28:18.906847 env[1308]: time="2024-02-09T19:28:18.906809677Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:18.907118 env[1308]: time="2024-02-09T19:28:18.907088680Z" level=info msg="TearDown network for sandbox \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" successfully"
Feb 9 19:28:18.907118 env[1308]: time="2024-02-09T19:28:18.907113180Z" level=info msg="StopPodSandbox for \"5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd\" returns successfully"
Feb 9 19:28:18.933394 kubelet[2390]: I0209 19:28:18.933273 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8qwk\" (UniqueName: \"kubernetes.io/projected/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-kube-api-access-n8qwk\") pod \"6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec\" (UID: \"6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec\") "
Feb 9 19:28:18.933951 kubelet[2390]: I0209 19:28:18.933935 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-cilium-config-path\") pod \"6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec\" (UID: \"6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec\") "
Feb 9 19:28:18.936838 kubelet[2390]: I0209 19:28:18.936806 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec" (UID: "6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:28:18.939207 kubelet[2390]: I0209 19:28:18.939174 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-kube-api-access-n8qwk" (OuterVolumeSpecName: "kube-api-access-n8qwk") pod "6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec" (UID: "6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec"). InnerVolumeSpecName "kube-api-access-n8qwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:19.034654 kubelet[2390]: I0209 19:28:19.034608 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-config-path\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034654 kubelet[2390]: I0209 19:28:19.034660 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-bpf-maps\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034922 kubelet[2390]: I0209 19:28:19.034685 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-net\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034922 kubelet[2390]: I0209 19:28:19.034705 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-xtables-lock\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034922 kubelet[2390]: I0209 19:28:19.034728 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-lib-modules\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034922 kubelet[2390]: I0209 19:28:19.034748 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-hostproc\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034922 kubelet[2390]: I0209 19:28:19.034777 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-run\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.034922 kubelet[2390]: I0209 19:28:19.034802 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-kernel\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035163 kubelet[2390]: I0209 19:28:19.034832 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2898c470-7495-4f3a-9daf-fecbbd553b97-clustermesh-secrets\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035163 kubelet[2390]: I0209 19:28:19.034855 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-cgroup\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035163 kubelet[2390]: I0209 19:28:19.034884 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgp5f\" (UniqueName: \"kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-kube-api-access-bgp5f\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035163 kubelet[2390]: I0209 19:28:19.034911 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cni-path\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035163 kubelet[2390]: I0209 19:28:19.034939 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-etc-cni-netd\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035163 kubelet[2390]: I0209 19:28:19.034967 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-hubble-tls\") pod \"2898c470-7495-4f3a-9daf-fecbbd553b97\" (UID: \"2898c470-7495-4f3a-9daf-fecbbd553b97\") "
Feb 9 19:28:19.035434 kubelet[2390]: I0209 19:28:19.035017 2390 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n8qwk\" (UniqueName: \"kubernetes.io/projected/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-kube-api-access-n8qwk\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.035434 kubelet[2390]: I0209 19:28:19.035034 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec-cilium-config-path\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.035593 kubelet[2390]: I0209 19:28:19.035568 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.038497 kubelet[2390]: I0209 19:28:19.038459 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:28:19.039121 kubelet[2390]: I0209 19:28:19.038723 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039121 kubelet[2390]: I0209 19:28:19.039021 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039121 kubelet[2390]: I0209 19:28:19.039038 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039121 kubelet[2390]: I0209 19:28:19.039050 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039405 kubelet[2390]: I0209 19:28:19.039060 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039405 kubelet[2390]: I0209 19:28:19.039069 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-hostproc" (OuterVolumeSpecName: "hostproc") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039405 kubelet[2390]: I0209 19:28:19.039202 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:19.039552 kubelet[2390]: I0209 19:28:19.039437 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039552 kubelet[2390]: I0209 19:28:19.039467 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cni-path" (OuterVolumeSpecName: "cni-path") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.039552 kubelet[2390]: I0209 19:28:19.039492 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:19.043074 kubelet[2390]: I0209 19:28:19.043037 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2898c470-7495-4f3a-9daf-fecbbd553b97-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:28:19.043564 kubelet[2390]: I0209 19:28:19.043536 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-kube-api-access-bgp5f" (OuterVolumeSpecName: "kube-api-access-bgp5f") pod "2898c470-7495-4f3a-9daf-fecbbd553b97" (UID: "2898c470-7495-4f3a-9daf-fecbbd553b97"). InnerVolumeSpecName "kube-api-access-bgp5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:19.136039 kubelet[2390]: I0209 19:28:19.135991 2390 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bgp5f\" (UniqueName: \"kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-kube-api-access-bgp5f\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136039 kubelet[2390]: I0209 19:28:19.136036 2390 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cni-path\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136059 2390 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-etc-cni-netd\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136077 2390 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2898c470-7495-4f3a-9daf-fecbbd553b97-hubble-tls\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136100 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-config-path\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136115 2390 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-bpf-maps\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136131 2390 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-net\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136146 2390 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-xtables-lock\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136204 2390 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-lib-modules\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136433 kubelet[2390]: I0209 19:28:19.136222 2390 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-hostproc\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136759 kubelet[2390]: I0209 19:28:19.136253 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-run\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136759 kubelet[2390]: I0209 19:28:19.136275 2390 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136759 kubelet[2390]: I0209 19:28:19.136292 2390 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2898c470-7495-4f3a-9daf-fecbbd553b97-clustermesh-secrets\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.136759 kubelet[2390]: I0209 19:28:19.136309 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2898c470-7495-4f3a-9daf-fecbbd553b97-cilium-cgroup\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:19.311618 systemd[1]: Removed slice kubepods-besteffort-pod6dc4c187_4f9a_4f9e_90b6_3be4e750b3ec.slice.
Feb 9 19:28:19.314321 systemd[1]: Removed slice kubepods-burstable-pod2898c470_7495_4f3a_9daf_fecbbd553b97.slice.
Feb 9 19:28:19.314427 systemd[1]: kubepods-burstable-pod2898c470_7495_4f3a_9daf_fecbbd553b97.slice: Consumed 9.089s CPU time.
Feb 9 19:28:19.606011 kubelet[2390]: I0209 19:28:19.605979 2390 scope.go:117] "RemoveContainer" containerID="d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7"
Feb 9 19:28:19.610273 env[1308]: time="2024-02-09T19:28:19.609404631Z" level=info msg="RemoveContainer for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\""
Feb 9 19:28:19.625334 env[1308]: time="2024-02-09T19:28:19.625274981Z" level=info msg="RemoveContainer for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" returns successfully"
Feb 9 19:28:19.625644 kubelet[2390]: I0209 19:28:19.625571 2390 scope.go:117] "RemoveContainer" containerID="beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86"
Feb 9 19:28:19.628220 env[1308]: time="2024-02-09T19:28:19.627834205Z" level=info msg="RemoveContainer for \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\""
Feb 9 19:28:19.635746 env[1308]: time="2024-02-09T19:28:19.635702680Z" level=info msg="RemoveContainer for \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\" returns successfully"
Feb 9 19:28:19.636134 kubelet[2390]: I0209 19:28:19.636078 2390 scope.go:117] "RemoveContainer" containerID="eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed"
Feb 9 19:28:19.638034 env[1308]: time="2024-02-09T19:28:19.637674098Z" level=info msg="RemoveContainer for \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\""
Feb 9 19:28:19.645068 env[1308]: time="2024-02-09T19:28:19.645029168Z" level=info msg="RemoveContainer for \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\" returns successfully"
Feb 9 19:28:19.645278 kubelet[2390]: I0209 19:28:19.645254 2390 scope.go:117] "RemoveContainer" containerID="3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e"
Feb 9 19:28:19.646347 env[1308]: time="2024-02-09T19:28:19.646308780Z" level=info msg="RemoveContainer for \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\""
Feb 9 19:28:19.653874 env[1308]: time="2024-02-09T19:28:19.653835651Z" level=info msg="RemoveContainer for \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\" returns successfully"
Feb 9 19:28:19.654036 kubelet[2390]: I0209 19:28:19.654014 2390 scope.go:117] "RemoveContainer" containerID="83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9"
Feb 9 19:28:19.655008 env[1308]: time="2024-02-09T19:28:19.654981962Z" level=info msg="RemoveContainer for \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\""
Feb 9 19:28:19.663381 env[1308]: time="2024-02-09T19:28:19.663347241Z" level=info msg="RemoveContainer for \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\" returns successfully"
Feb 9 19:28:19.663651 kubelet[2390]: I0209 19:28:19.663626 2390 scope.go:117] "RemoveContainer" containerID="d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7"
Feb 9 19:28:19.663894 env[1308]: time="2024-02-09T19:28:19.663819946Z" level=error msg="ContainerStatus for \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\": not found"
Feb 9 19:28:19.664046 kubelet[2390]: E0209 19:28:19.664027 2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\": not found" containerID="d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7"
Feb 9 19:28:19.664153 kubelet[2390]: I0209 19:28:19.664138 2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7"} err="failed to get container status \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0a0bdf2faa8c9479f77e9e2008636a31da68b9347ddccb14df0b307202668e7\": not found"
Feb 9 19:28:19.664221 kubelet[2390]: I0209 19:28:19.664160 2390 scope.go:117] "RemoveContainer" containerID="beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86"
Feb 9 19:28:19.664413 env[1308]: time="2024-02-09T19:28:19.664360751Z" level=error msg="ContainerStatus for \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\": not found"
Feb 9 19:28:19.664551 kubelet[2390]: E0209 19:28:19.664533 2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\": not found" containerID="beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86"
Feb 9 19:28:19.664648 kubelet[2390]: I0209 19:28:19.664567 2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86"} err="failed to get container status \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\": rpc error: code = NotFound desc = an error occurred when try to find container \"beb13c7bdb098df48cade7204ea58f1cddc71838cda8c030da9502d7c8084f86\": not found"
Feb 9 19:28:19.664648 kubelet[2390]: I0209 19:28:19.664581 2390 scope.go:117] "RemoveContainer" containerID="eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed"
Feb 9 19:28:19.664800 env[1308]: time="2024-02-09T19:28:19.664750555Z" level=error msg="ContainerStatus for \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\": not found"
Feb 9 19:28:19.664915 kubelet[2390]: E0209 19:28:19.664898 2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\": not found" containerID="eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed"
Feb 9 19:28:19.664985 kubelet[2390]: I0209 19:28:19.664929 2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed"} err="failed to get container status \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"eac6a0e737fcb2e85594dcddf6896e6eb69b906439be1d171a802e8a5b0ec1ed\": not found"
Feb 9 19:28:19.664985 kubelet[2390]: I0209 19:28:19.664944 2390 scope.go:117] "RemoveContainer" containerID="3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e"
Feb 9
19:28:19.665154 env[1308]: time="2024-02-09T19:28:19.665105258Z" level=error msg="ContainerStatus for \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\": not found" Feb 9 19:28:19.665316 kubelet[2390]: E0209 19:28:19.665296 2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\": not found" containerID="3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e" Feb 9 19:28:19.665403 kubelet[2390]: I0209 19:28:19.665330 2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e"} err="failed to get container status \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3266f35c710135c781de52fcc1efd1fca926eba761ab2045a6c31bd7cb0c8a8e\": not found" Feb 9 19:28:19.665403 kubelet[2390]: I0209 19:28:19.665343 2390 scope.go:117] "RemoveContainer" containerID="83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9" Feb 9 19:28:19.665565 env[1308]: time="2024-02-09T19:28:19.665505962Z" level=error msg="ContainerStatus for \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\": not found" Feb 9 19:28:19.665697 kubelet[2390]: E0209 19:28:19.665680 2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\": not found" containerID="83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9" Feb 9 19:28:19.665772 kubelet[2390]: I0209 19:28:19.665710 2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9"} err="failed to get container status \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"83e23eda1a0ba0d17ec219b34f3c9ebb2f7dcfb554f5ae7c93cc837f5b312bb9\": not found" Feb 9 19:28:19.665772 kubelet[2390]: I0209 19:28:19.665723 2390 scope.go:117] "RemoveContainer" containerID="f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31" Feb 9 19:28:19.666758 env[1308]: time="2024-02-09T19:28:19.666731773Z" level=info msg="RemoveContainer for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\"" Feb 9 19:28:19.672976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd-rootfs.mount: Deactivated successfully. Feb 9 19:28:19.673105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5caf9468f9ea1eaa4a37afd55dea82c716f1bd1e84392c6b8cd2f50a5b6519dd-shm.mount: Deactivated successfully. Feb 9 19:28:19.673189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756-rootfs.mount: Deactivated successfully. Feb 9 19:28:19.673286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc7807ffba7b5328789700297131d6f6572007f54720332189b69ff758e98756-shm.mount: Deactivated successfully. Feb 9 19:28:19.673365 systemd[1]: var-lib-kubelet-pods-2898c470\x2d7495\x2d4f3a\x2d9daf\x2dfecbbd553b97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbgp5f.mount: Deactivated successfully. 
Feb 9 19:28:19.673453 systemd[1]: var-lib-kubelet-pods-6dc4c187\x2d4f9a\x2d4f9e\x2d90b6\x2d3be4e750b3ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8qwk.mount: Deactivated successfully.
Feb 9 19:28:19.673536 systemd[1]: var-lib-kubelet-pods-2898c470\x2d7495\x2d4f3a\x2d9daf\x2dfecbbd553b97-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:28:19.673616 systemd[1]: var-lib-kubelet-pods-2898c470\x2d7495\x2d4f3a\x2d9daf\x2dfecbbd553b97-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:28:19.677651 env[1308]: time="2024-02-09T19:28:19.677617476Z" level=info msg="RemoveContainer for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" returns successfully"
Feb 9 19:28:19.677880 kubelet[2390]: I0209 19:28:19.677860 2390 scope.go:117] "RemoveContainer" containerID="f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31"
Feb 9 19:28:19.678129 env[1308]: time="2024-02-09T19:28:19.678078581Z" level=error msg="ContainerStatus for \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\": not found"
Feb 9 19:28:19.678362 kubelet[2390]: E0209 19:28:19.678342 2390 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\": not found" containerID="f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31"
Feb 9 19:28:19.678440 kubelet[2390]: I0209 19:28:19.678382 2390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31"} err="failed to get container status \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\": rpc error: code = NotFound desc = an error occurred when try to find container \"f41b1e5187899bdde5236a2a1e0c56af7a2edb39ae6f0fcb1171a02849300c31\": not found"
Feb 9 19:28:20.529673 kubelet[2390]: E0209 19:28:20.529610 2390 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:28:20.721019 sshd[3982]: pam_unix(sshd:session): session closed for user core
Feb 9 19:28:20.725306 systemd[1]: sshd@20-10.200.8.14:22-10.200.12.6:48616.service: Deactivated successfully.
Feb 9 19:28:20.726453 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 19:28:20.726713 systemd[1]: session-23.scope: Consumed 1.021s CPU time.
Feb 9 19:28:20.727392 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit.
Feb 9 19:28:20.728270 systemd-logind[1293]: Removed session 23.
Feb 9 19:28:20.829346 systemd[1]: Started sshd@21-10.200.8.14:22-10.200.12.6:37254.service.
Feb 9 19:28:21.306370 kubelet[2390]: I0209 19:28:21.305893 2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" path="/var/lib/kubelet/pods/2898c470-7495-4f3a-9daf-fecbbd553b97/volumes"
Feb 9 19:28:21.306721 kubelet[2390]: I0209 19:28:21.306692 2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec" path="/var/lib/kubelet/pods/6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec/volumes"
Feb 9 19:28:21.467094 sshd[4156]: Accepted publickey for core from 10.200.12.6 port 37254 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:21.468887 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:21.474401 systemd[1]: Started session-24.scope.
Feb 9 19:28:21.474968 systemd-logind[1293]: New session 24 of user core.
Feb 9 19:28:22.601060 kubelet[2390]: I0209 19:28:22.600917 2390 topology_manager.go:215] "Topology Admit Handler" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" podNamespace="kube-system" podName="cilium-blvgh"
Feb 9 19:28:22.601571 kubelet[2390]: E0209 19:28:22.601159 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" containerName="mount-cgroup"
Feb 9 19:28:22.601571 kubelet[2390]: E0209 19:28:22.601198 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" containerName="apply-sysctl-overwrites"
Feb 9 19:28:22.601571 kubelet[2390]: E0209 19:28:22.601210 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" containerName="clean-cilium-state"
Feb 9 19:28:22.601571 kubelet[2390]: E0209 19:28:22.601223 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" containerName="cilium-agent"
Feb 9 19:28:22.601571 kubelet[2390]: E0209 19:28:22.601258 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec" containerName="cilium-operator"
Feb 9 19:28:22.601571 kubelet[2390]: E0209 19:28:22.601276 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" containerName="mount-bpf-fs"
Feb 9 19:28:22.601571 kubelet[2390]: I0209 19:28:22.601332 2390 memory_manager.go:346] "RemoveStaleState removing state" podUID="6dc4c187-4f9a-4f9e-90b6-3be4e750b3ec" containerName="cilium-operator"
Feb 9 19:28:22.601571 kubelet[2390]: I0209 19:28:22.601346 2390 memory_manager.go:346] "RemoveStaleState removing state" podUID="2898c470-7495-4f3a-9daf-fecbbd553b97" containerName="cilium-agent"
Feb 9 19:28:22.617562 systemd[1]: Created slice kubepods-burstable-podddc3840e_fbed_4ded_96be_eb1bdf25ebc0.slice.
Feb 9 19:28:22.656747 kubelet[2390]: I0209 19:28:22.656704 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-etc-cni-netd\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657051 kubelet[2390]: I0209 19:28:22.657031 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hostproc\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657202 kubelet[2390]: I0209 19:28:22.657187 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hubble-tls\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657364 kubelet[2390]: I0209 19:28:22.657348 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-config-path\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657473 kubelet[2390]: I0209 19:28:22.657462 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cni-path\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657572 kubelet[2390]: I0209 19:28:22.657562 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-clustermesh-secrets\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657684 kubelet[2390]: I0209 19:28:22.657672 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7xs\" (UniqueName: \"kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-kube-api-access-5x7xs\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657795 kubelet[2390]: I0209 19:28:22.657784 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-run\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.657904 kubelet[2390]: I0209 19:28:22.657894 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-cgroup\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.658001 kubelet[2390]: I0209 19:28:22.657988 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-xtables-lock\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.658120 kubelet[2390]: I0209 19:28:22.658108 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-ipsec-secrets\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.658240 kubelet[2390]: I0209 19:28:22.658219 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-kernel\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.658348 kubelet[2390]: I0209 19:28:22.658336 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-bpf-maps\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.658454 kubelet[2390]: I0209 19:28:22.658443 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-lib-modules\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.658560 kubelet[2390]: I0209 19:28:22.658550 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-net\") pod \"cilium-blvgh\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") " pod="kube-system/cilium-blvgh"
Feb 9 19:28:22.687569 sshd[4156]: pam_unix(sshd:session): session closed for user core
Feb 9 19:28:22.691987 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit.
Feb 9 19:28:22.692329 systemd[1]: sshd@21-10.200.8.14:22-10.200.12.6:37254.service: Deactivated successfully.
Feb 9 19:28:22.693406 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 19:28:22.694513 systemd-logind[1293]: Removed session 24.
Feb 9 19:28:22.795449 systemd[1]: Started sshd@22-10.200.8.14:22-10.200.12.6:37258.service.
Feb 9 19:28:22.924306 env[1308]: time="2024-02-09T19:28:22.922995261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-blvgh,Uid:ddc3840e-fbed-4ded-96be-eb1bdf25ebc0,Namespace:kube-system,Attempt:0,}"
Feb 9 19:28:22.973453 env[1308]: time="2024-02-09T19:28:22.973361734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:22.973453 env[1308]: time="2024-02-09T19:28:22.973398835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:22.973453 env[1308]: time="2024-02-09T19:28:22.973414935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:22.973923 env[1308]: time="2024-02-09T19:28:22.973776138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f pid=4180 runtime=io.containerd.runc.v2
Feb 9 19:28:22.988489 systemd[1]: Started cri-containerd-e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f.scope.
Feb 9 19:28:23.017296 env[1308]: time="2024-02-09T19:28:23.017225346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-blvgh,Uid:ddc3840e-fbed-4ded-96be-eb1bdf25ebc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\""
Feb 9 19:28:23.024071 env[1308]: time="2024-02-09T19:28:23.024024310Z" level=info msg="CreateContainer within sandbox \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:28:23.051344 env[1308]: time="2024-02-09T19:28:23.051295165Z" level=info msg="CreateContainer within sandbox \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\""
Feb 9 19:28:23.053487 env[1308]: time="2024-02-09T19:28:23.053354685Z" level=info msg="StartContainer for \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\""
Feb 9 19:28:23.072151 systemd[1]: Started cri-containerd-e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9.scope.
Feb 9 19:28:23.085260 systemd[1]: cri-containerd-e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9.scope: Deactivated successfully.
Feb 9 19:28:23.135707 env[1308]: time="2024-02-09T19:28:23.135637156Z" level=info msg="shim disconnected" id=e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9
Feb 9 19:28:23.135707 env[1308]: time="2024-02-09T19:28:23.135702356Z" level=warning msg="cleaning up after shim disconnected" id=e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9 namespace=k8s.io
Feb 9 19:28:23.135707 env[1308]: time="2024-02-09T19:28:23.135715057Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:23.145240 env[1308]: time="2024-02-09T19:28:23.145178445Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4237 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:28:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 19:28:23.145594 env[1308]: time="2024-02-09T19:28:23.145481148Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed"
Feb 9 19:28:23.146146 env[1308]: time="2024-02-09T19:28:23.145760351Z" level=error msg="Failed to pipe stderr of container \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\"" error="reading from a closed fifo"
Feb 9 19:28:23.146300 env[1308]: time="2024-02-09T19:28:23.146089554Z" level=error msg="Failed to pipe stdout of container \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\"" error="reading from a closed fifo"
Feb 9 19:28:23.149736 env[1308]: time="2024-02-09T19:28:23.149691788Z" level=error msg="StartContainer for \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 19:28:23.150010 kubelet[2390]: E0209 19:28:23.149978 2390 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9"
Feb 9 19:28:23.151838 kubelet[2390]: E0209 19:28:23.150495 2390 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 19:28:23.151838 kubelet[2390]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 19:28:23.151838 kubelet[2390]: rm /hostbin/cilium-mount
Feb 9 19:28:23.151965 kubelet[2390]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5x7xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-blvgh_kube-system(ddc3840e-fbed-4ded-96be-eb1bdf25ebc0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 19:28:23.151965 kubelet[2390]: E0209 19:28:23.150558 2390 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-blvgh" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"
Feb 9 19:28:23.428794 sshd[4170]: Accepted publickey for core from 10.200.12.6 port 37258 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:23.430392 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:23.435572 systemd[1]: Started session-25.scope.
Feb 9 19:28:23.436022 systemd-logind[1293]: New session 25 of user core.
Feb 9 19:28:23.632545 env[1308]: time="2024-02-09T19:28:23.632486512Z" level=info msg="CreateContainer within sandbox \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 9 19:28:23.671256 env[1308]: time="2024-02-09T19:28:23.671187475Z" level=info msg="CreateContainer within sandbox \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\""
Feb 9 19:28:23.671851 env[1308]: time="2024-02-09T19:28:23.671821081Z" level=info msg="StartContainer for \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\""
Feb 9 19:28:23.691871 systemd[1]: Started cri-containerd-e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f.scope.
Feb 9 19:28:23.705063 systemd[1]: cri-containerd-e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f.scope: Deactivated successfully.
Feb 9 19:28:23.734511 env[1308]: time="2024-02-09T19:28:23.734451168Z" level=info msg="shim disconnected" id=e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f
Feb 9 19:28:23.734712 env[1308]: time="2024-02-09T19:28:23.734514168Z" level=warning msg="cleaning up after shim disconnected" id=e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f namespace=k8s.io
Feb 9 19:28:23.734712 env[1308]: time="2024-02-09T19:28:23.734527669Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:23.743691 env[1308]: time="2024-02-09T19:28:23.743648154Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4275 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:28:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 19:28:23.743969 env[1308]: time="2024-02-09T19:28:23.743907856Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed"
Feb 9 19:28:23.744176 env[1308]: time="2024-02-09T19:28:23.744129459Z" level=error msg="Failed to pipe stdout of container \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\"" error="reading from a closed fifo"
Feb 9 19:28:23.747430 env[1308]: time="2024-02-09T19:28:23.747354289Z" level=error msg="Failed to pipe stderr of container \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\"" error="reading from a closed fifo"
Feb 9 19:28:23.750969 env[1308]: time="2024-02-09T19:28:23.750926522Z" level=error msg="StartContainer for \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 19:28:23.751214 kubelet[2390]: E0209 19:28:23.751190 2390 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f"
Feb 9 19:28:23.752845 kubelet[2390]: E0209 19:28:23.751801 2390 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 19:28:23.752845 kubelet[2390]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 19:28:23.752845 kubelet[2390]: rm /hostbin/cilium-mount
Feb 9 19:28:23.752845 kubelet[2390]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5x7xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-blvgh_kube-system(ddc3840e-fbed-4ded-96be-eb1bdf25ebc0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 19:28:23.753218 kubelet[2390]: E0209 19:28:23.753112 2390 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-blvgh" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"
Feb 9 19:28:23.957929 sshd[4170]: pam_unix(sshd:session): session closed for user core
Feb 9 19:28:23.962739 systemd[1]: sshd@22-10.200.8.14:22-10.200.12.6:37258.service: Deactivated successfully.
Feb 9 19:28:23.963945 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 19:28:23.965136 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit.
Feb 9 19:28:23.966292 systemd-logind[1293]: Removed session 25.
Feb 9 19:28:24.064780 systemd[1]: Started sshd@23-10.200.8.14:22-10.200.12.6:37262.service.
Feb 9 19:28:24.629493 kubelet[2390]: I0209 19:28:24.629455 2390 scope.go:117] "RemoveContainer" containerID="e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9"
Feb 9 19:28:24.632184 env[1308]: time="2024-02-09T19:28:24.630822354Z" level=info msg="RemoveContainer for \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\""
Feb 9 19:28:24.632184 env[1308]: time="2024-02-09T19:28:24.631460760Z" level=info msg="StopPodSandbox for \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\""
Feb 9 19:28:24.632184 env[1308]: time="2024-02-09T19:28:24.631539161Z" level=info msg="Container to stop \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:24.632184 env[1308]: time="2024-02-09T19:28:24.631563261Z" level=info msg="Container to stop \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:24.637143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f-shm.mount: Deactivated successfully.
Feb 9 19:28:24.643247 env[1308]: time="2024-02-09T19:28:24.643198470Z" level=info msg="RemoveContainer for \"e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9\" returns successfully"
Feb 9 19:28:24.649105 systemd[1]: cri-containerd-e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f.scope: Deactivated successfully.
Feb 9 19:28:24.675763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f-rootfs.mount: Deactivated successfully.
Feb 9 19:28:24.687289 env[1308]: time="2024-02-09T19:28:24.687222081Z" level=info msg="shim disconnected" id=e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f
Feb 9 19:28:24.687717 env[1308]: time="2024-02-09T19:28:24.687451083Z" level=warning msg="cleaning up after shim disconnected" id=e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f namespace=k8s.io
Feb 9 19:28:24.687846 env[1308]: time="2024-02-09T19:28:24.687820787Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:24.692393 sshd[4298]: Accepted publickey for core from 10.200.12.6 port 37262 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4
Feb 9 19:28:24.693529 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:28:24.700705 systemd[1]: Started session-26.scope.
Feb 9 19:28:24.701285 systemd-logind[1293]: New session 26 of user core.
Feb 9 19:28:24.709884 env[1308]: time="2024-02-09T19:28:24.709851593Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4318 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:24.710334 env[1308]: time="2024-02-09T19:28:24.710212996Z" level=info msg="TearDown network for sandbox \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\" successfully"
Feb 9 19:28:24.710731 env[1308]: time="2024-02-09T19:28:24.710335297Z" level=info msg="StopPodSandbox for \"e51c9b671485cd5e7fe95d75c585f32aec68ab053da7fd19acd5186aa40dc92f\" returns successfully"
Feb 9 19:28:24.772405 kubelet[2390]: I0209 19:28:24.772349 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-kernel\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.772405 kubelet[2390]: I0209 19:28:24.772410 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-etc-cni-netd\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772447 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hubble-tls\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772473 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cni-path\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772500 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-cgroup\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772527 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-run\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772571 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-config-path\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772618 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-clustermesh-secrets\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772652 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-ipsec-secrets\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772689 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x7xs\" (UniqueName: \"kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-kube-api-access-5x7xs\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772718 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-bpf-maps\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772754 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-net\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772784 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hostproc\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772814 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-xtables-lock\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772870 2390 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-lib-modules\") pod \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\" (UID: \"ddc3840e-fbed-4ded-96be-eb1bdf25ebc0\") "
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.772973 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.773017 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.773053 kubelet[2390]: I0209 19:28:24.773045 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.775536 kubelet[2390]: I0209 19:28:24.774870 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cni-path" (OuterVolumeSpecName: "cni-path") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.775536 kubelet[2390]: I0209 19:28:24.774922 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.775536 kubelet[2390]: I0209 19:28:24.774951 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.778419 kubelet[2390]: I0209 19:28:24.778388 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:28:24.778662 kubelet[2390]: I0209 19:28:24.778637 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.778752 kubelet[2390]: I0209 19:28:24.778678 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.778752 kubelet[2390]: I0209 19:28:24.778700 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hostproc" (OuterVolumeSpecName: "hostproc") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.778752 kubelet[2390]: I0209 19:28:24.778722 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.781736 systemd[1]: var-lib-kubelet-pods-ddc3840e\x2dfbed\x2d4ded\x2d96be\x2deb1bdf25ebc0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:28:24.783397 kubelet[2390]: I0209 19:28:24.783366 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:28:24.786340 systemd[1]: var-lib-kubelet-pods-ddc3840e\x2dfbed\x2d4ded\x2d96be\x2deb1bdf25ebc0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:28:24.787654 kubelet[2390]: I0209 19:28:24.787588 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:24.791851 systemd[1]: var-lib-kubelet-pods-ddc3840e\x2dfbed\x2d4ded\x2d96be\x2deb1bdf25ebc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5x7xs.mount: Deactivated successfully.
Feb 9 19:28:24.794620 systemd[1]: var-lib-kubelet-pods-ddc3840e\x2dfbed\x2d4ded\x2d96be\x2deb1bdf25ebc0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:28:24.795395 kubelet[2390]: I0209 19:28:24.795369 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-kube-api-access-5x7xs" (OuterVolumeSpecName: "kube-api-access-5x7xs") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "kube-api-access-5x7xs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:24.795702 kubelet[2390]: I0209 19:28:24.795679 2390 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" (UID: "ddc3840e-fbed-4ded-96be-eb1bdf25ebc0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:28:24.874124 kubelet[2390]: I0209 19:28:24.874076 2390 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-net\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874124 kubelet[2390]: I0209 19:28:24.874118 2390 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hostproc\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874124 kubelet[2390]: I0209 19:28:24.874133 2390 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-xtables-lock\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874145 2390 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-lib-modules\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874164 2390 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874177 2390 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-etc-cni-netd\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874194 2390 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-hubble-tls\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874209 2390 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cni-path\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874222 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-cgroup\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874261 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-config-path\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874277 2390 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-clustermesh-secrets\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874290 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-run\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874305 2390 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874319 2390 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5x7xs\" (UniqueName: \"kubernetes.io/projected/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-kube-api-access-5x7xs\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:24.874428 kubelet[2390]: I0209 19:28:24.874332 2390 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0-bpf-maps\") on node \"ci-3510.3.2-a-19528a6d7a\" DevicePath \"\""
Feb 9 19:28:25.310682 systemd[1]: Removed slice kubepods-burstable-podddc3840e_fbed_4ded_96be_eb1bdf25ebc0.slice.
Feb 9 19:28:25.531323 kubelet[2390]: E0209 19:28:25.531286 2390 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:28:25.633403 kubelet[2390]: I0209 19:28:25.633373 2390 scope.go:117] "RemoveContainer" containerID="e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f"
Feb 9 19:28:25.635364 env[1308]: time="2024-02-09T19:28:25.635318830Z" level=info msg="RemoveContainer for \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\""
Feb 9 19:28:25.642592 env[1308]: time="2024-02-09T19:28:25.642480697Z" level=info msg="RemoveContainer for \"e4ea62e7c1407dc0363284898170e444e686e79c2238a75c935b7625e642c31f\" returns successfully"
Feb 9 19:28:25.672444 kubelet[2390]: I0209 19:28:25.672406 2390 topology_manager.go:215] "Topology Admit Handler" podUID="4b4390c7-09c7-443d-b200-39655f3f3964" podNamespace="kube-system" podName="cilium-8djqz"
Feb 9 19:28:25.672787 kubelet[2390]: E0209 19:28:25.672768 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" containerName="mount-cgroup"
Feb 9 19:28:25.672928 kubelet[2390]: E0209 19:28:25.672915 2390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" containerName="mount-cgroup"
Feb 9 19:28:25.673050 kubelet[2390]: I0209 19:28:25.673028 2390 memory_manager.go:346] "RemoveStaleState removing state" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" containerName="mount-cgroup"
Feb 9 19:28:25.673050 kubelet[2390]: I0209 19:28:25.673049 2390 memory_manager.go:346] "RemoveStaleState removing state" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" containerName="mount-cgroup"
Feb 9 19:28:25.679657 systemd[1]: Created slice kubepods-burstable-pod4b4390c7_09c7_443d_b200_39655f3f3964.slice.
Feb 9 19:28:25.778969 kubelet[2390]: I0209 19:28:25.778918 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-lib-modules\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.778969 kubelet[2390]: I0209 19:28:25.778982 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b4390c7-09c7-443d-b200-39655f3f3964-clustermesh-secrets\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779012 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-hostproc\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779035 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b4390c7-09c7-443d-b200-39655f3f3964-cilium-ipsec-secrets\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779057 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b4390c7-09c7-443d-b200-39655f3f3964-hubble-tls\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779082 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-cni-path\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779113 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-cilium-run\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779141 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-bpf-maps\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779173 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284vz\" (UniqueName: \"kubernetes.io/projected/4b4390c7-09c7-443d-b200-39655f3f3964-kube-api-access-284vz\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779197 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-host-proc-sys-kernel\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779241 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-cilium-cgroup\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779270 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b4390c7-09c7-443d-b200-39655f3f3964-cilium-config-path\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779304 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-xtables-lock\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779330 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-host-proc-sys-net\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.779541 kubelet[2390]: I0209 19:28:25.779357 2390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b4390c7-09c7-443d-b200-39655f3f3964-etc-cni-netd\") pod \"cilium-8djqz\" (UID: \"4b4390c7-09c7-443d-b200-39655f3f3964\") " pod="kube-system/cilium-8djqz"
Feb 9 19:28:25.985088 env[1308]: time="2024-02-09T19:28:25.984935590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8djqz,Uid:4b4390c7-09c7-443d-b200-39655f3f3964,Namespace:kube-system,Attempt:0,}"
Feb 9 19:28:26.015609 env[1308]: time="2024-02-09T19:28:26.015536475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:26.015609 env[1308]: time="2024-02-09T19:28:26.015575876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:26.015835 env[1308]: time="2024-02-09T19:28:26.015590476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:26.016144 env[1308]: time="2024-02-09T19:28:26.016056880Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457 pid=4354 runtime=io.containerd.runc.v2
Feb 9 19:28:26.030999 systemd[1]: Started cri-containerd-9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457.scope.
Feb 9 19:28:26.061105 env[1308]: time="2024-02-09T19:28:26.061047099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8djqz,Uid:4b4390c7-09c7-443d-b200-39655f3f3964,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\""
Feb 9 19:28:26.064704 env[1308]: time="2024-02-09T19:28:26.064656932Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:28:26.097373 env[1308]: time="2024-02-09T19:28:26.097322136Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048\""
Feb 9 19:28:26.098304 env[1308]: time="2024-02-09T19:28:26.098270045Z" level=info msg="StartContainer for \"3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048\""
Feb 9 19:28:26.118007 systemd[1]: Started cri-containerd-3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048.scope.
Feb 9 19:28:26.150785 env[1308]: time="2024-02-09T19:28:26.150716733Z" level=info msg="StartContainer for \"3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048\" returns successfully"
Feb 9 19:28:26.160023 systemd[1]: cri-containerd-3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048.scope: Deactivated successfully.
Feb 9 19:28:26.179227 kubelet[2390]: I0209 19:28:26.179067 2390 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-19528a6d7a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:28:26Z","lastTransitionTime":"2024-02-09T19:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 19:28:26.225720 env[1308]: time="2024-02-09T19:28:26.225654730Z" level=info msg="shim disconnected" id=3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048 Feb 9 19:28:26.225720 env[1308]: time="2024-02-09T19:28:26.225714331Z" level=warning msg="cleaning up after shim disconnected" id=3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048 namespace=k8s.io Feb 9 19:28:26.225720 env[1308]: time="2024-02-09T19:28:26.225725931Z" level=info msg="cleaning up dead shim" Feb 9 19:28:26.236648 env[1308]: time="2024-02-09T19:28:26.235987226Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4438 runtime=io.containerd.runc.v2\n" Feb 9 19:28:26.243109 kubelet[2390]: W0209 19:28:26.242893 2390 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc3840e_fbed_4ded_96be_eb1bdf25ebc0.slice/cri-containerd-e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9.scope WatchSource:0}: container "e0c6c8d4e95c60f9196580012206a4ab3b88c1d26baebc1e93b6e222dd1068b9" in namespace "k8s.io": not found Feb 9 19:28:26.303392 kubelet[2390]: E0209 19:28:26.303338 2390 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-xcx7p" 
podUID="0669b516-d8ae-44e7-9353-af5e89c90e89" Feb 9 19:28:26.642010 env[1308]: time="2024-02-09T19:28:26.641951803Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:28:26.682761 env[1308]: time="2024-02-09T19:28:26.682693582Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23\"" Feb 9 19:28:26.685214 env[1308]: time="2024-02-09T19:28:26.683530189Z" level=info msg="StartContainer for \"67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23\"" Feb 9 19:28:26.707395 systemd[1]: Started cri-containerd-67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23.scope. Feb 9 19:28:26.749582 env[1308]: time="2024-02-09T19:28:26.749521803Z" level=info msg="StartContainer for \"67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23\" returns successfully" Feb 9 19:28:26.753927 systemd[1]: cri-containerd-67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23.scope: Deactivated successfully. 
Feb 9 19:28:26.785155 env[1308]: time="2024-02-09T19:28:26.785108434Z" level=info msg="shim disconnected" id=67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23 Feb 9 19:28:26.785469 env[1308]: time="2024-02-09T19:28:26.785440338Z" level=warning msg="cleaning up after shim disconnected" id=67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23 namespace=k8s.io Feb 9 19:28:26.785469 env[1308]: time="2024-02-09T19:28:26.785462938Z" level=info msg="cleaning up dead shim" Feb 9 19:28:26.793960 env[1308]: time="2024-02-09T19:28:26.793923216Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4502 runtime=io.containerd.runc.v2\n" Feb 9 19:28:27.306772 kubelet[2390]: I0209 19:28:27.306719 2390 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ddc3840e-fbed-4ded-96be-eb1bdf25ebc0" path="/var/lib/kubelet/pods/ddc3840e-fbed-4ded-96be-eb1bdf25ebc0/volumes" Feb 9 19:28:27.647102 env[1308]: time="2024-02-09T19:28:27.647052938Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:28:27.687533 env[1308]: time="2024-02-09T19:28:27.687480613Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7\"" Feb 9 19:28:27.688579 env[1308]: time="2024-02-09T19:28:27.688544323Z" level=info msg="StartContainer for \"e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7\"" Feb 9 19:28:27.720784 systemd[1]: Started cri-containerd-e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7.scope. 
Feb 9 19:28:27.771193 systemd[1]: cri-containerd-e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7.scope: Deactivated successfully. Feb 9 19:28:27.777343 env[1308]: time="2024-02-09T19:28:27.777289347Z" level=info msg="StartContainer for \"e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7\" returns successfully" Feb 9 19:28:27.819214 env[1308]: time="2024-02-09T19:28:27.819149735Z" level=info msg="shim disconnected" id=e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7 Feb 9 19:28:27.819214 env[1308]: time="2024-02-09T19:28:27.819213536Z" level=warning msg="cleaning up after shim disconnected" id=e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7 namespace=k8s.io Feb 9 19:28:27.819214 env[1308]: time="2024-02-09T19:28:27.819226936Z" level=info msg="cleaning up dead shim" Feb 9 19:28:27.829539 env[1308]: time="2024-02-09T19:28:27.829482431Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4558 runtime=io.containerd.runc.v2\n" Feb 9 19:28:27.888657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7-rootfs.mount: Deactivated successfully. 
Feb 9 19:28:28.303810 kubelet[2390]: E0209 19:28:28.303758 2390 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-xcx7p" podUID="0669b516-d8ae-44e7-9353-af5e89c90e89" Feb 9 19:28:28.653574 env[1308]: time="2024-02-09T19:28:28.653516364Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:28:28.693718 env[1308]: time="2024-02-09T19:28:28.693655835Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900\"" Feb 9 19:28:28.694576 env[1308]: time="2024-02-09T19:28:28.694540543Z" level=info msg="StartContainer for \"716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900\"" Feb 9 19:28:28.729668 systemd[1]: Started cri-containerd-716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900.scope. Feb 9 19:28:28.761927 systemd[1]: cri-containerd-716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900.scope: Deactivated successfully. 
Feb 9 19:28:28.773135 env[1308]: time="2024-02-09T19:28:28.773055370Z" level=info msg="StartContainer for \"716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900\" returns successfully" Feb 9 19:28:28.825049 env[1308]: time="2024-02-09T19:28:28.824989951Z" level=info msg="shim disconnected" id=716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900 Feb 9 19:28:28.825049 env[1308]: time="2024-02-09T19:28:28.825048252Z" level=warning msg="cleaning up after shim disconnected" id=716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900 namespace=k8s.io Feb 9 19:28:28.825049 env[1308]: time="2024-02-09T19:28:28.825060452Z" level=info msg="cleaning up dead shim" Feb 9 19:28:28.836110 env[1308]: time="2024-02-09T19:28:28.836053954Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4617 runtime=io.containerd.runc.v2\n" Feb 9 19:28:28.888731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900-rootfs.mount: Deactivated successfully. Feb 9 19:28:29.356909 kubelet[2390]: W0209 19:28:29.356847 2390 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b4390c7_09c7_443d_b200_39655f3f3964.slice/cri-containerd-3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048.scope WatchSource:0}: task 3a053bfe03a081bc2b5b3462c8ca6b2075843f589a718800f1914d16156f0048 not found: not found Feb 9 19:28:29.661103 env[1308]: time="2024-02-09T19:28:29.659505962Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:28:29.703145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320231336.mount: Deactivated successfully. 
Feb 9 19:28:29.719775 env[1308]: time="2024-02-09T19:28:29.719713018Z" level=info msg="CreateContainer within sandbox \"9b8dc95df76511e9f7bdc324c33a7ede43241791e8acbf8143241041427a1457\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066\"" Feb 9 19:28:29.723827 env[1308]: time="2024-02-09T19:28:29.723768756Z" level=info msg="StartContainer for \"e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066\"" Feb 9 19:28:29.756260 systemd[1]: Started cri-containerd-e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066.scope. Feb 9 19:28:29.802173 env[1308]: time="2024-02-09T19:28:29.802104879Z" level=info msg="StartContainer for \"e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066\" returns successfully" Feb 9 19:28:30.284336 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:28:30.303850 kubelet[2390]: E0209 19:28:30.303806 2390 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-xcx7p" podUID="0669b516-d8ae-44e7-9353-af5e89c90e89" Feb 9 19:28:32.467591 kubelet[2390]: W0209 19:28:32.467537 2390 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b4390c7_09c7_443d_b200_39655f3f3964.slice/cri-containerd-67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23.scope WatchSource:0}: task 67002705dc4e3376b70fe281087b7d8464170610b5fe9e3cefa96e3001d35b23 not found: not found Feb 9 19:28:32.889780 systemd-networkd[1452]: lxc_health: Link UP Feb 9 19:28:32.934356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:28:32.934017 systemd-networkd[1452]: lxc_health: Gained carrier Feb 9 19:28:33.368633 
systemd[1]: run-containerd-runc-k8s.io-e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066-runc.7OIduN.mount: Deactivated successfully. Feb 9 19:28:34.014067 kubelet[2390]: I0209 19:28:34.014020 2390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8djqz" podStartSLOduration=9.013961056 podCreationTimestamp="2024-02-09 19:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:30.67654734 +0000 UTC m=+585.512546716" watchObservedRunningTime="2024-02-09 19:28:34.013961056 +0000 UTC m=+588.849960432" Feb 9 19:28:34.669466 systemd-networkd[1452]: lxc_health: Gained IPv6LL Feb 9 19:28:35.557926 systemd[1]: run-containerd-runc-k8s.io-e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066-runc.GhjRta.mount: Deactivated successfully. Feb 9 19:28:35.577260 kubelet[2390]: W0209 19:28:35.577182 2390 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b4390c7_09c7_443d_b200_39655f3f3964.slice/cri-containerd-e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7.scope WatchSource:0}: task e644e66d23dd8d097cb4c13d200aee599b1d4b827c5088ed5bcece8d98ec30c7 not found: not found Feb 9 19:28:37.745943 systemd[1]: run-containerd-runc-k8s.io-e556fd73ea672e18841c7ef8f7d5bad761acbf70cd3f2aaf4134ea831401a066-runc.DGHxOI.mount: Deactivated successfully. 
Feb 9 19:28:38.695172 kubelet[2390]: W0209 19:28:38.695124 2390 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b4390c7_09c7_443d_b200_39655f3f3964.slice/cri-containerd-716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900.scope WatchSource:0}: task 716260486831f2922c9cd5e858bfc72aeb59582c0d5313f9463ebcd4a14fe900 not found: not found Feb 9 19:28:40.091736 sshd[4298]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:40.095531 systemd[1]: sshd@23-10.200.8.14:22-10.200.12.6:37262.service: Deactivated successfully. Feb 9 19:28:40.096477 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:28:40.097292 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:28:40.098143 systemd-logind[1293]: Removed session 26. Feb 9 19:28:44.149353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.149719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.178096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.178521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.197717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.198135 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.220564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.220985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.240486 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.240926 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.254528 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.254792 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.267218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.267504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.278292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.278554 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.291769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.292142 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.305605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.305888 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.319530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.319789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.336062 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:28:44.336365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.351263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.351561 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.365731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.366039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.380245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.380528 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.394316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.394603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.410364 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.410690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.427817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.428151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.440487 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.440809 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:28:44.454170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.454539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.471455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.472020 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.472449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.483745 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.512642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.513473 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.513594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.513706 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.513818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.523370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.524903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.537696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.552019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.592897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.593060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.593190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.593375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.593506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.593647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.598758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.636459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.636612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.636729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.638681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.638831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.638947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.639058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.639163 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.639283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.651154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.659304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.659579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.666863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.699949 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.700280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.700430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.711989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.712330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.725877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.733806 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.736160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.747879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:28:44.789823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.790991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.791211 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.791347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.791460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.791571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.791762 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.798094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.804770 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.818251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.818480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.831132 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.837950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.838161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.854656 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:28:44.854967 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.868563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.868823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.882772 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.883007 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.896734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.917060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.917197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.917383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.917495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.924728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.938659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.954304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.954646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.955258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.977592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.977813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.977959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:28:44.978099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001