Feb 8 23:17:53.019756 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:17:53.019782 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:17:53.019791 kernel: BIOS-provided physical RAM map: Feb 8 23:17:53.019797 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 8 23:17:53.019806 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 8 23:17:53.019812 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 8 23:17:53.019823 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 8 23:17:53.019840 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 8 23:17:53.019846 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 8 23:17:53.019855 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 8 23:17:53.019861 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 8 23:17:53.019867 kernel: printk: bootconsole [earlyser0] enabled Feb 8 23:17:53.019875 kernel: NX (Execute Disable) protection: active Feb 8 23:17:53.019881 kernel: efi: EFI v2.70 by Microsoft Feb 8 23:17:53.019893 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Feb 8 23:17:53.019900 kernel: random: crng init done Feb 8 23:17:53.019910 kernel: SMBIOS 3.1.0 present. 
Feb 8 23:17:53.019917 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 8 23:17:53.019925 kernel: Hypervisor detected: Microsoft Hyper-V Feb 8 23:17:53.019931 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 8 23:17:53.019939 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 8 23:17:53.019947 kernel: Hyper-V: Nested features: 0x1e0101 Feb 8 23:17:53.019957 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 8 23:17:53.019965 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 8 23:17:53.019971 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 8 23:17:53.019980 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 8 23:17:53.019989 kernel: tsc: Detected 2593.904 MHz processor Feb 8 23:17:53.019997 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:17:53.020003 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:17:53.020013 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 8 23:17:53.020020 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:17:53.020029 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 8 23:17:53.020037 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 8 23:17:53.020048 kernel: Using GB pages for direct mapping Feb 8 23:17:53.020055 kernel: Secure boot disabled Feb 8 23:17:53.020063 kernel: ACPI: Early table checksum verification disabled Feb 8 23:17:53.020070 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 8 23:17:53.020078 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020086 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020095 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 8 23:17:53.020107 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 8 23:17:53.020117 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020125 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020134 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020141 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020151 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020162 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020170 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:17:53.020177 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 8 23:17:53.020187 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 8 23:17:53.020195 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 8 23:17:53.020203 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 8 23:17:53.020210 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 8 23:17:53.020220 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 8 23:17:53.020230 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:17:53.020239 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 8 23:17:53.020246 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 8 23:17:53.020256 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 8 23:17:53.020263 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 8 23:17:53.020272 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 8 23:17:53.020279 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 8 23:17:53.020288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 8 23:17:53.020296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 8 23:17:53.020308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 8 23:17:53.020315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 8 23:17:53.020324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 8 23:17:53.020331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 8 23:17:53.020340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 8 23:17:53.020348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 8 23:17:53.020356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 8 23:17:53.020365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 8 23:17:53.020373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 8 23:17:53.020384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 8 23:17:53.020391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 8 23:17:53.020401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 8 23:17:53.020409 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 8 23:17:53.020418 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 8 23:17:53.020425 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 8 23:17:53.020435 kernel: Zone ranges: Feb 8 23:17:53.020443 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:17:53.020451 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 8 23:17:53.020461 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 8 23:17:53.020470 kernel: Movable zone start for each node Feb 8 23:17:53.020479 kernel: Early memory node ranges Feb 8 23:17:53.020487 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 8 23:17:53.020494 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 8 23:17:53.020503 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 8 23:17:53.020511 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 8 23:17:53.020520 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 8 23:17:53.020526 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:17:53.020538 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 8 23:17:53.020546 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 8 23:17:53.020555 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 8 23:17:53.020562 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 8 23:17:53.020572 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:17:53.020580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:17:53.020589 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 8 23:17:53.020596 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 8 23:17:53.020606 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 8 23:17:53.020616 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 8 23:17:53.020625 kernel: Booting paravirtualized kernel on Hyper-V Feb 8 23:17:53.020632 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:17:53.020642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 8 23:17:53.020651 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 8 23:17:53.020659 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 8 23:17:53.020666 kernel: pcpu-alloc: [0] 0 1 Feb 8 23:17:53.020676 kernel: Hyper-V: PV spinlocks enabled Feb 8 23:17:53.020683 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 8 23:17:53.020694 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 8 23:17:53.020701 kernel: Policy zone: Normal Feb 8 23:17:53.020712 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:17:53.020722 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 8 23:17:53.020730 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 8 23:17:53.020737 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 8 23:17:53.020747 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:17:53.020755 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved) Feb 8 23:17:53.020766 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 8 23:17:53.020774 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:17:53.020792 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:17:53.020802 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:17:53.020811 kernel: rcu: RCU event tracing is enabled. Feb 8 23:17:53.020820 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 8 23:17:53.020868 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:17:53.020876 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:17:53.020887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:17:53.020896 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 8 23:17:53.020904 kernel: Using NULL legacy PIC Feb 8 23:17:53.020916 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 8 23:17:53.020925 kernel: Console: colour dummy device 80x25 Feb 8 23:17:53.020935 kernel: printk: console [tty1] enabled Feb 8 23:17:53.020942 kernel: printk: console [ttyS0] enabled Feb 8 23:17:53.020951 kernel: printk: bootconsole [earlyser0] disabled Feb 8 23:17:53.020962 kernel: ACPI: Core revision 20210730 Feb 8 23:17:53.020973 kernel: Failed to register legacy timer interrupt Feb 8 23:17:53.020980 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:17:53.020990 kernel: Hyper-V: Using IPI hypercalls Feb 8 23:17:53.020999 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Feb 8 23:17:53.021008 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 8 23:17:53.021015 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 8 23:17:53.021026 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:17:53.021034 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:17:53.021043 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:17:53.021052 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:17:53.021062 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 8 23:17:53.021071 kernel: RETBleed: Vulnerable Feb 8 23:17:53.021080 kernel: Speculative Store Bypass: Vulnerable Feb 8 23:17:53.021087 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 8 23:17:53.021097 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 8 23:17:53.021105 kernel: GDS: Unknown: Dependent on hypervisor status Feb 8 23:17:53.021114 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 8 23:17:53.021121 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 8 23:17:53.021128 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 8 23:17:53.021137 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 8 23:17:53.021145 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 8 23:17:53.021155 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 8 23:17:53.021164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 8 23:17:53.021173 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 8 23:17:53.021180 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 8 23:17:53.021190 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 8 23:17:53.021199 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 8 23:17:53.021208 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:17:53.021215 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:17:53.021225 kernel: LSM: Security Framework initializing Feb 8 23:17:53.021234 kernel: SELinux: Initializing. 
Feb 8 23:17:53.021245 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:17:53.021253 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:17:53.021263 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 8 23:17:53.021273 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 8 23:17:53.021280 kernel: signal: max sigframe size: 3632 Feb 8 23:17:53.021290 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:17:53.021299 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 8 23:17:53.021309 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:17:53.021316 kernel: x86: Booting SMP configuration: Feb 8 23:17:53.021326 kernel: .... node #0, CPUs: #1 Feb 8 23:17:53.021338 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 8 23:17:53.021347 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 8 23:17:53.021354 kernel: smp: Brought up 1 node, 2 CPUs Feb 8 23:17:53.021364 kernel: smpboot: Max logical packages: 1 Feb 8 23:17:53.021374 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Feb 8 23:17:53.021382 kernel: devtmpfs: initialized Feb 8 23:17:53.021390 kernel: x86/mm: Memory block size: 128MB Feb 8 23:17:53.021399 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 8 23:17:53.021412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:17:53.021419 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 8 23:17:53.021429 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:17:53.021437 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:17:53.021448 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:17:53.021455 kernel: audit: type=2000 audit(1707434271.023:1): state=initialized audit_enabled=0 res=1 Feb 8 23:17:53.021466 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:17:53.021474 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:17:53.021484 kernel: cpuidle: using governor menu Feb 8 23:17:53.021493 kernel: ACPI: bus type PCI registered Feb 8 23:17:53.021504 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:17:53.021513 kernel: dca service started, version 1.12.1 Feb 8 23:17:53.021521 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 8 23:17:53.021529 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 8 23:17:53.021539 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:17:53.021548 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:17:53.021557 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:17:53.021564 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:17:53.021576 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:17:53.021587 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:17:53.021594 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:17:53.021604 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:17:53.021612 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:17:53.021622 kernel: ACPI: Interpreter enabled Feb 8 23:17:53.021629 kernel: ACPI: PM: (supports S0 S5) Feb 8 23:17:53.021639 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:17:53.021648 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:17:53.021659 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 8 23:17:53.021667 kernel: iommu: Default domain type: Translated Feb 8 23:17:53.021677 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 8 23:17:53.021687 kernel: vgaarb: loaded Feb 8 23:17:53.021695 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:17:53.021703 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Feb 8 23:17:53.021712 kernel: PTP clock support registered Feb 8 23:17:53.021722 kernel: Registered efivars operations Feb 8 23:17:53.021730 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:17:53.021738 kernel: PCI: System does not support PCI Feb 8 23:17:53.021750 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 8 23:17:53.021760 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:17:53.021767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:17:53.021778 kernel: pnp: PnP ACPI init Feb 8 23:17:53.021787 kernel: pnp: PnP ACPI: found 3 devices Feb 8 23:17:53.021796 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:17:53.021803 kernel: NET: Registered PF_INET protocol family Feb 8 23:17:53.021814 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 8 23:17:53.021833 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 8 23:17:53.021842 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:17:53.021851 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 8 23:17:53.021862 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 8 23:17:53.021870 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 8 23:17:53.021880 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 8 23:17:53.021888 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 8 23:17:53.021899 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:17:53.021906 kernel: NET: Registered PF_XDP protocol family Feb 8 23:17:53.021918 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:17:53.021927 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 8 23:17:53.021936 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:17:53.021944 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 8 23:17:53.021953 kernel: Initialise system trusted keyrings Feb 8 23:17:53.021963 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 8 23:17:53.021971 kernel: Key type asymmetric registered Feb 8 23:17:53.021980 kernel: Asymmetric key parser 'x509' registered Feb 8 23:17:53.021988 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:17:53.021999 kernel: io scheduler mq-deadline registered Feb 8 23:17:53.022009 kernel: io scheduler kyber registered Feb 8 23:17:53.022017 kernel: io scheduler bfq registered Feb 8 23:17:53.022027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:17:53.022036 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:17:53.022045 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:17:53.022053 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 8 23:17:53.022063 kernel: i8042: PNP: No PS/2 controller found. Feb 8 23:17:53.022202 kernel: rtc_cmos 00:02: registered as rtc0 Feb 8 23:17:53.022294 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:17:52 UTC (1707434272) Feb 8 23:17:53.022376 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 8 23:17:53.022388 kernel: fail to initialize ptp_kvm Feb 8 23:17:53.022399 kernel: intel_pstate: CPU model not supported Feb 8 23:17:53.022406 kernel: efifb: probing for efifb Feb 8 23:17:53.022415 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 8 23:17:53.022424 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 8 23:17:53.022434 kernel: efifb: scrolling: redraw Feb 8 23:17:53.022443 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 8 23:17:53.022454 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:17:53.022462 kernel: fb0: EFI VGA frame buffer device Feb 8 23:17:53.022472 kernel: pstore: Registered efi as persistent store backend Feb 8 23:17:53.022479 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:17:53.022489 kernel: Segment Routing with IPv6 Feb 8 23:17:53.022498 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:17:53.022507 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:17:53.022514 kernel: Key type dns_resolver registered Feb 8 23:17:53.022526 kernel: IPI shorthand broadcast: enabled Feb 8 23:17:53.022536 kernel: sched_clock: Marking stable (733525200, 19486700)->(915510200, -162498300) Feb 8 23:17:53.022544 kernel: registered taskstats version 1 Feb 8 23:17:53.022553 kernel: Loading compiled-in X.509 certificates Feb 8 23:17:53.022562 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:17:53.022572 kernel: Key type .fscrypt registered Feb 8 23:17:53.022580 kernel: Key type fscrypt-provisioning registered Feb 8 23:17:53.022588 kernel: pstore: Using crash dump compression: deflate Feb 8 23:17:53.022599 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:17:53.022610 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:17:53.022617 kernel: ima: No architecture policies found Feb 8 23:17:53.022627 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:17:53.022635 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:17:53.022645 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:17:53.022652 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:17:53.022663 kernel: Run /init as init process Feb 8 23:17:53.022670 kernel: with arguments: Feb 8 23:17:53.022680 kernel: /init Feb 8 23:17:53.022690 kernel: with environment: Feb 8 23:17:53.022700 kernel: HOME=/ Feb 8 23:17:53.022708 kernel: TERM=linux Feb 8 23:17:53.022717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:17:53.022727 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:17:53.022738 systemd[1]: Detected virtualization microsoft. Feb 8 23:17:53.022748 systemd[1]: Detected architecture x86-64. Feb 8 23:17:53.022758 systemd[1]: Running in initrd. Feb 8 23:17:53.022768 systemd[1]: No hostname configured, using default hostname. Feb 8 23:17:53.022777 systemd[1]: Hostname set to . Feb 8 23:17:53.022787 systemd[1]: Initializing machine ID from random generator. Feb 8 23:17:53.022795 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:17:53.022805 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:17:53.022815 systemd[1]: Reached target cryptsetup.target. Feb 8 23:17:53.022823 systemd[1]: Reached target paths.target. Feb 8 23:17:53.022840 systemd[1]: Reached target slices.target. Feb 8 23:17:53.022853 systemd[1]: Reached target swap.target. Feb 8 23:17:53.022861 systemd[1]: Reached target timers.target. Feb 8 23:17:53.022873 systemd[1]: Listening on iscsid.socket. Feb 8 23:17:53.022880 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:17:53.022889 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:17:53.022899 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:17:53.022906 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:17:53.022919 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:17:53.022930 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:17:53.022937 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:17:53.022947 systemd[1]: Reached target sockets.target. Feb 8 23:17:53.022956 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:17:53.022967 systemd[1]: Finished network-cleanup.service. Feb 8 23:17:53.022974 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:17:53.022985 systemd[1]: Starting systemd-journald.service... Feb 8 23:17:53.022995 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:17:53.023006 systemd[1]: Starting systemd-resolved.service... Feb 8 23:17:53.023015 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:17:53.023024 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:17:53.023035 kernel: audit: type=1130 audit(1707434273.015:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.023048 systemd-journald[183]: Journal started Feb 8 23:17:53.023094 systemd-journald[183]: Runtime Journal (/run/log/journal/1bc7a11fec074ac8b363e252cf658ff2) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:17:53.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.007205 systemd-modules-load[184]: Inserted module 'overlay' Feb 8 23:17:53.033851 systemd[1]: Started systemd-journald.service. Feb 8 23:17:53.038878 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:17:53.051942 kernel: audit: type=1130 audit(1707434273.036:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.050502 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:17:53.056839 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:17:53.064161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:17:53.079773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:17:53.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.094632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:17:53.097741 kernel: audit: type=1130 audit(1707434273.050:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.111947 kernel: audit: type=1130 audit(1707434273.053:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.116770 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 8 23:17:53.118811 kernel: Bridge firewalling registered Feb 8 23:17:53.118865 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:17:53.123256 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:17:53.134612 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:17:53.134630 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:17:53.134681 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:17:53.160681 kernel: audit: type=1130 audit(1707434273.107:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.165304 kernel: SCSI subsystem initialized Feb 8 23:17:53.164614 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 8 23:17:53.169933 dracut-cmdline[200]: dracut-dracut-053 Feb 8 23:17:53.169021 systemd[1]: Started systemd-resolved.service. Feb 8 23:17:53.181367 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:17:53.171631 systemd[1]: Reached target nss-lookup.target. Feb 8 23:17:53.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.206846 kernel: audit: type=1130 audit(1707434273.122:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.206869 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:17:53.225278 kernel: audit: type=1130 audit(1707434273.170:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.225311 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:17:53.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.233431 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:17:53.237447 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:17:53.238108 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:17:53.257064 kernel: audit: type=1130 audit(1707434273.241:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.242073 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:17:53.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.267306 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:17:53.283189 kernel: audit: type=1130 audit(1707434273.269:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.289843 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:17:53.302850 kernel: iscsi: registered transport (tcp) Feb 8 23:17:53.326841 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:17:53.326883 kernel: QLogic iSCSI HBA Driver Feb 8 23:17:53.355546 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:17:53.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.358696 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:17:53.409852 kernel: raid6: avx512x4 gen() 18489 MB/s Feb 8 23:17:53.428841 kernel: raid6: avx512x4 xor() 7024 MB/s Feb 8 23:17:53.447844 kernel: raid6: avx512x2 gen() 18640 MB/s Feb 8 23:17:53.468845 kernel: raid6: avx512x2 xor() 29841 MB/s Feb 8 23:17:53.488838 kernel: raid6: avx512x1 gen() 18313 MB/s Feb 8 23:17:53.514842 kernel: raid6: avx512x1 xor() 26945 MB/s Feb 8 23:17:53.534838 kernel: raid6: avx2x4 gen() 18625 MB/s Feb 8 23:17:53.553839 kernel: raid6: avx2x4 xor() 6900 MB/s Feb 8 23:17:53.573842 kernel: raid6: avx2x2 gen() 18546 MB/s Feb 8 23:17:53.592853 kernel: raid6: avx2x2 xor() 22266 MB/s Feb 8 23:17:53.612840 kernel: raid6: avx2x1 gen() 13810 MB/s Feb 8 23:17:53.632841 kernel: raid6: avx2x1 xor() 19457 MB/s Feb 8 23:17:53.652840 kernel: raid6: sse2x4 gen() 11745 MB/s Feb 8 23:17:53.671853 kernel: raid6: sse2x4 xor() 6536 MB/s Feb 8 23:17:53.691842 kernel: raid6: sse2x2 gen() 12975 MB/s Feb 8 23:17:53.710836 kernel: raid6: sse2x2 xor() 7532 MB/s Feb 8 23:17:53.729840 kernel: raid6: sse2x1 gen() 11619 MB/s Feb 8 23:17:53.752994 kernel: raid6: sse2x1 xor() 5923 MB/s Feb 8 23:17:53.753016 kernel: raid6: using algorithm avx512x2 gen() 18640 MB/s Feb 8 23:17:53.753030 kernel: raid6: .... xor() 29841 MB/s, rmw enabled Feb 8 23:17:53.755989 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:17:53.773850 kernel: xor: automatically using best checksumming function avx Feb 8 23:17:53.870087 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:17:53.878169 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:17:53.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.882000 audit: BPF prog-id=7 op=LOAD Feb 8 23:17:53.882000 audit: BPF prog-id=8 op=LOAD Feb 8 23:17:53.883102 systemd[1]: Starting systemd-udevd.service... Feb 8 23:17:53.896946 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 8 23:17:53.901610 systemd[1]: Started systemd-udevd.service.
Feb 8 23:17:53.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.909899 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:17:53.926211 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 8 23:17:53.957144 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:17:53.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.961868 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:17:53.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:53.996168 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:17:54.043851 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:17:54.059945 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:17:54.071850 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:17:54.086842 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:17:54.094848 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:17:54.103845 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:17:54.103882 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:17:54.108842 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:17:54.122491 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 8 23:17:54.122528 kernel: AES CTR mode by8 optimization enabled Feb 8 23:17:54.131843 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:17:54.136848 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:17:54.146840 kernel: scsi host0: storvsc_host_t Feb 8 23:17:54.151844 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:17:54.158671 kernel: scsi host1: storvsc_host_t Feb 8 23:17:54.158842 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:17:54.194512 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:17:54.194720 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:17:54.194733 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:17:54.194879 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:17:54.198845 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:17:54.199010 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:17:54.199130 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:17:54.199250 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:17:54.207844 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:17:54.211847 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:17:54.352898 kernel: hv_netvsc 000d3a64-d996-000d-3a64-d996000d3a64 eth0: VF slot 1 added Feb 8 23:17:54.361866 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:17:54.367846 kernel: hv_pci 204f4679-ded4-40cd-9754-a29038118733: PCI VMBus probing: Using version 0x10004 Feb 8 23:17:54.378351 kernel: hv_pci 204f4679-ded4-40cd-9754-a29038118733: PCI host bridge to bus ded4:00 Feb 8 23:17:54.378518 kernel: pci_bus ded4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:17:54.378657 kernel: pci_bus ded4:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:17:54.386927 kernel: pci ded4:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:17:54.395572 kernel: pci ded4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:17:54.411857 kernel: pci ded4:00:02.0: enabling Extended Tags Feb 8 23:17:54.423958 kernel: pci ded4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ded4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:17:54.432285 kernel: pci_bus ded4:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:17:54.432462 kernel: pci ded4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:17:54.525855 kernel: mlx5_core ded4:00:02.0: firmware version: 14.30.1224 Feb 8 23:17:54.575853 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:17:54.585840 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (448) Feb 8 23:17:54.605832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:17:54.682844 kernel: mlx5_core ded4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:17:54.735054 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:17:54.738243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:17:54.739963 systemd[1]: Starting disk-uuid.service... Feb 8 23:17:54.780141 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Feb 8 23:17:54.842850 kernel: mlx5_core ded4:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:17:54.843108 kernel: mlx5_core ded4:00:02.0: mlx5e_tc_post_act_init:40:(pid 260): firmware level support is missing Feb 8 23:17:54.870407 kernel: hv_netvsc 000d3a64-d996-000d-3a64-d996000d3a64 eth0: VF registering: eth1 Feb 8 23:17:54.870643 kernel: mlx5_core ded4:00:02.0 eth1: joined to eth0 Feb 8 23:17:54.890846 kernel: mlx5_core ded4:00:02.0 enP57044s1: renamed from eth1 Feb 8 23:17:55.763856 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:17:55.764239 disk-uuid[559]: The operation has completed successfully. Feb 8 23:17:55.835213 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:17:55.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:55.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:55.835318 systemd[1]: Finished disk-uuid.service. Feb 8 23:17:55.847216 systemd[1]: Starting verity-setup.service... Feb 8 23:17:55.877853 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:17:56.090177 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:17:56.096783 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:17:56.100457 systemd[1]: Finished verity-setup.service. Feb 8 23:17:56.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.177849 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:17:56.178332 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:17:56.180715 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:17:56.181526 systemd[1]: Starting ignition-setup.service... Feb 8 23:17:56.188848 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:17:56.209875 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:17:56.209917 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:17:56.209942 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:17:56.260487 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:17:56.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.264000 audit: BPF prog-id=9 op=LOAD Feb 8 23:17:56.266232 systemd[1]: Starting systemd-networkd.service... Feb 8 23:17:56.288290 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:17:56.293697 systemd-networkd[830]: lo: Link UP Feb 8 23:17:56.293706 systemd-networkd[830]: lo: Gained carrier Feb 8 23:17:56.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.294602 systemd-networkd[830]: Enumeration completed Feb 8 23:17:56.295223 systemd[1]: Started systemd-networkd.service. Feb 8 23:17:56.298646 systemd-networkd[830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:17:56.298982 systemd[1]: Reached target network.target. Feb 8 23:17:56.307138 systemd[1]: Starting iscsiuio.service... Feb 8 23:17:56.313000 systemd[1]: Started iscsiuio.service. Feb 8 23:17:56.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.317053 systemd[1]: Starting iscsid.service... Feb 8 23:17:56.323111 iscsid[839]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:17:56.323111 iscsid[839]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:17:56.323111 iscsid[839]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:17:56.346052 iscsid[839]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:17:56.346052 iscsid[839]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:17:56.346052 iscsid[839]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:17:56.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.333961 systemd[1]: Started iscsid.service. Feb 8 23:17:56.358355 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:17:56.368713 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:17:56.378726 kernel: mlx5_core ded4:00:02.0 enP57044s1: Link up Feb 8 23:17:56.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.373294 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:17:56.376860 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:17:56.378734 systemd[1]: Reached target remote-fs.target. Feb 8 23:17:56.381355 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:17:56.391654 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:17:56.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.445302 systemd[1]: Finished ignition-setup.service. Feb 8 23:17:56.455375 kernel: hv_netvsc 000d3a64-d996-000d-3a64-d996000d3a64 eth0: Data path switched to VF: enP57044s1 Feb 8 23:17:56.455594 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:17:56.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:56.455779 systemd-networkd[830]: enP57044s1: Link UP Feb 8 23:17:56.457050 systemd-networkd[830]: eth0: Link UP Feb 8 23:17:56.457720 systemd[1]: Starting ignition-fetch-offline.service...
Feb 8 23:17:56.457731 systemd-networkd[830]: eth0: Gained carrier Feb 8 23:17:56.460505 systemd-networkd[830]: enP57044s1: Gained carrier Feb 8 23:17:56.519947 systemd-networkd[830]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:17:57.536082 systemd-networkd[830]: eth0: Gained IPv6LL Feb 8 23:17:59.099759 ignition[854]: Ignition 2.14.0 Feb 8 23:17:59.099775 ignition[854]: Stage: fetch-offline Feb 8 23:17:59.099903 ignition[854]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:59.099958 ignition[854]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:59.211732 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:59.211977 ignition[854]: parsed url from cmdline: "" Feb 8 23:17:59.211982 ignition[854]: no config URL provided Feb 8 23:17:59.211988 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:17:59.246108 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:17:59.246146 kernel: audit: type=1130 audit(1707434279.227:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.223969 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:17:59.211999 ignition[854]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:17:59.228808 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:17:59.212005 ignition[854]: failed to fetch config: resource requires networking Feb 8 23:17:59.212288 ignition[854]: Ignition finished successfully Feb 8 23:17:59.237397 ignition[860]: Ignition 2.14.0 Feb 8 23:17:59.237405 ignition[860]: Stage: fetch Feb 8 23:17:59.237514 ignition[860]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:59.237540 ignition[860]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:59.241072 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:59.243961 ignition[860]: parsed url from cmdline: "" Feb 8 23:17:59.243971 ignition[860]: no config URL provided Feb 8 23:17:59.243977 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:17:59.243988 ignition[860]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:17:59.244024 ignition[860]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:17:59.327242 ignition[860]: GET result: OK Feb 8 23:17:59.330239 ignition[860]: config has been read from IMDS userdata Feb 8 23:17:59.330303 ignition[860]: parsing config with SHA512: 5530a525e84bd83d056806a59aa85306fa50b614ae13dbb7f4eb60f2607ebcfebc7b470c65d1edd109acfcba2582489ee4f63fed1980a957672a91c3aaf8f723 Feb 8 23:17:59.360963 unknown[860]: fetched base config from "system" Feb 8 23:17:59.363469 unknown[860]: fetched base config from "system" Feb 8 23:17:59.363485 unknown[860]: fetched user config from "azure" Feb 8 23:17:59.367455 ignition[860]: fetch: fetch complete Feb 8 23:17:59.367465 ignition[860]: fetch: fetch passed Feb 8 23:17:59.367519 ignition[860]: Ignition finished successfully Feb 8 23:17:59.373143 systemd[1]: Finished ignition-fetch.service. Feb 8 23:17:59.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.386150 systemd[1]: Starting ignition-kargs.service... Feb 8 23:17:59.390951 kernel: audit: type=1130 audit(1707434279.375:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.399292 ignition[866]: Ignition 2.14.0 Feb 8 23:17:59.399302 ignition[866]: Stage: kargs Feb 8 23:17:59.399441 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:59.399480 ignition[866]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:59.403321 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:59.405744 ignition[866]: kargs: kargs passed Feb 8 23:17:59.408075 systemd[1]: Finished ignition-kargs.service.
Feb 8 23:17:59.423928 kernel: audit: type=1130 audit(1707434279.410:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.405802 ignition[866]: Ignition finished successfully Feb 8 23:17:59.422285 systemd[1]: Starting ignition-disks.service... Feb 8 23:17:59.436205 ignition[872]: Ignition 2.14.0 Feb 8 23:17:59.436215 ignition[872]: Stage: disks Feb 8 23:17:59.436346 ignition[872]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:59.436381 ignition[872]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:59.443663 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:59.446021 ignition[872]: disks: disks passed Feb 8 23:17:59.446066 ignition[872]: Ignition finished successfully Feb 8 23:17:59.449539 systemd[1]: Finished ignition-disks.service. Feb 8 23:17:59.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.451936 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:17:59.469492 kernel: audit: type=1130 audit(1707434279.451:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.465530 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:17:59.469490 systemd[1]: Reached target local-fs.target. Feb 8 23:17:59.472889 systemd[1]: Reached target sysinit.target. Feb 8 23:17:59.476574 systemd[1]: Reached target basic.target. Feb 8 23:17:59.482013 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:17:59.553554 systemd-fsck[880]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:17:59.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.559925 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:17:59.576708 kernel: audit: type=1130 audit(1707434279.562:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:59.576940 systemd[1]: Mounting sysroot.mount... Feb 8 23:17:59.589844 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:17:59.590047 systemd[1]: Mounted sysroot.mount. Feb 8 23:17:59.591954 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:17:59.631156 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:17:59.637354 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:17:59.642666 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:17:59.642713 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:17:59.655613 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:17:59.702969 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:17:59.707952 systemd[1]: Starting initrd-setup-root.service...
Feb 8 23:17:59.721852 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (891) Feb 8 23:17:59.721889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:17:59.726122 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:17:59.735608 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:17:59.735635 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:17:59.739294 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:17:59.748596 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:17:59.756967 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:17:59.777410 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:18:00.165125 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:18:00.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:00.179856 kernel: audit: type=1130 audit(1707434280.166:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:00.180055 systemd[1]: Starting ignition-mount.service... Feb 8 23:18:00.185609 systemd[1]: Starting sysroot-boot.service... Feb 8 23:18:00.192701 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:18:00.192824 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:18:00.212040 ignition[957]: INFO : Ignition 2.14.0 Feb 8 23:18:00.212040 ignition[957]: INFO : Stage: mount Feb 8 23:18:00.215476 ignition[957]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:18:00.215476 ignition[957]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:18:00.229667 systemd[1]: Finished sysroot-boot.service. Feb 8 23:18:00.233094 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:18:00.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:00.237655 ignition[957]: INFO : mount: mount passed Feb 8 23:18:00.237655 ignition[957]: INFO : Ignition finished successfully Feb 8 23:18:00.262351 kernel: audit: type=1130 audit(1707434280.235:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:00.262382 kernel: audit: type=1130 audit(1707434280.250:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:00.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:00.238468 systemd[1]: Finished ignition-mount.service. 
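Every Ignition stage above logs 'parsing config with SHA512: 4824fd4a…' for the base config at /usr/lib/ignition/base.d/base.ign. A small sketch of reproducing that digest with Python's hashlib, purely illustrative (not Ignition's actual code path):

    import hashlib

    # The file each stage reports reading, and the digest the log prints.
    CONFIG = "/usr/lib/ignition/base.d/base.ign"
    EXPECTED = ("4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728"
                "d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63")

    with open(CONFIG, "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    assert digest == EXPECTED, "base.ign does not match the digest in the log"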
Feb 8 23:18:00.960046 coreos-metadata[890]: Feb 08 23:18:00.959 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:18:00.975129 coreos-metadata[890]: Feb 08 23:18:00.975 INFO Fetch successful Feb 8 23:18:01.007567 coreos-metadata[890]: Feb 08 23:18:01.007 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:18:01.025152 coreos-metadata[890]: Feb 08 23:18:01.025 INFO Fetch successful Feb 8 23:18:01.041547 coreos-metadata[890]: Feb 08 23:18:01.041 INFO wrote hostname ci-3510.3.2-a-5bade47376 to /sysroot/etc/hostname Feb 8 23:18:01.046952 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:18:01.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:01.052746 systemd[1]: Starting ignition-files.service... Feb 8 23:18:01.064422 kernel: audit: type=1130 audit(1707434281.050:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:01.071761 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:18:01.081845 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (969) Feb 8 23:18:01.090170 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:18:01.090206 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:18:01.090221 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:18:01.098773 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:18:01.112418 ignition[988]: INFO : Ignition 2.14.0 Feb 8 23:18:01.114293 ignition[988]: INFO : Stage: files Feb 8 23:18:01.114293 ignition[988]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:18:01.114293 ignition[988]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:18:01.123971 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:18:01.134195 ignition[988]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:18:01.137049 ignition[988]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:18:01.137049 ignition[988]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:18:01.203169 ignition[988]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:18:01.206921 ignition[988]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:18:01.216477 unknown[988]: wrote ssh authorized keys file for user: core Feb 8 23:18:01.219009 ignition[988]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:18:01.233719 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:18:01.238058 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:18:01.853541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:18:01.994702 
ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:18:02.001931 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:18:02.001931 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:18:02.001931 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:18:02.675301 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:18:02.798203 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:18:02.803007 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:18:02.803007 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:18:03.319762 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:18:03.456039 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:18:03.463516 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:18:03.463516 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:18:03.472335 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:18:03.677715 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 8 23:18:03.895906 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 8 23:18:03.902534 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:18:03.902534 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:18:03.902534 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:18:04.031400 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:18:04.216291 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:18:04.223248 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/opt/bin/kubeadm" Feb 8 23:18:04.223248 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:18:04.223248 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:18:04.352203 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:18:04.809532 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:18:04.818169 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:18:04.818169 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:18:04.818169 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:18:04.818169 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:18:04.818169 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 8 23:18:05.309196 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 8 23:18:05.399280 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:18:05.404161 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:18:05.404161 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:18:05.404161 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:18:05.416710 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:18:05.416710 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:18:05.416710 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:18:05.429781 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:18:05.433620 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:18:06.314405 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:18:06.319664 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:18:06.319664 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:18:06.319664 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem 
config not found in "/usr/share/oem", looking on oem partition Feb 8 23:18:06.338753 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (990) Feb 8 23:18:06.338783 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1685065300" Feb 8 23:18:06.338783 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1685065300": device or resource busy Feb 8 23:18:06.338783 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1685065300", trying btrfs: device or resource busy Feb 8 23:18:06.338783 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1685065300" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1685065300" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1685065300" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1685065300" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2994562272" Feb 8 23:18:06.358739 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2994562272": device or resource busy Feb 8 23:18:06.358739 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2994562272", trying btrfs: device or resource busy Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2994562272" Feb 8 23:18:06.358739 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2994562272" Feb 8 23:18:06.426903 kernel: audit: type=1130 audit(1707434286.378:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.341856 systemd[1]: mnt-oem1685065300.mount: Deactivated successfully. 
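The files stage above fetches crictl, helm, the CNI plugins, kubectl, kubeadm and kubelet, and for most of them logs 'file matches expected sum of: …' before writing the file. A sketch of the same download-then-verify pattern for the kubelet artifact from op(8), with the URL and digest copied from the log (fetch_and_verify is a made-up helper name, not Ignition's implementation):

    import hashlib
    import urllib.request

    URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet"  # from op(8)
    EXPECTED = ("40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f8"
                "6ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b")

    def fetch_and_verify(url, expected_sha512):
        """Download url and refuse the payload unless its SHA512 matches."""
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        if hashlib.sha512(data).hexdigest() != expected_sha512:
            raise ValueError(f"checksum mismatch for {url}")
        return data

    blob = fetch_and_verify(URL, EXPECTED)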
Feb 8 23:18:06.432951 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2994562272" Feb 8 23:18:06.432951 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2994562272" Feb 8 23:18:06.432951 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:18:06.432951 ignition[988]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 8 23:18:06.515412 kernel: audit: type=1130 audit(1707434286.438:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.515446 kernel: audit: type=1131 audit(1707434286.450:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.515459 kernel: audit: type=1130 audit(1707434286.477:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:18:06.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.366753 systemd[1]: mnt-oem2994562272.mount: Deactivated successfully. Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(20): [started] setting preset to enabled for "nvidia.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(20): [finished] setting preset to enabled for "nvidia.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(21): [started] setting preset to enabled for "waagent.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(21): [finished] setting preset to enabled for "waagent.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:18:06.515787 ignition[988]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:18:06.515787 ignition[988]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:18:06.515787 ignition[988]: INFO : files: files passed Feb 8 23:18:06.515787 ignition[988]: INFO : Ignition finished successfully Feb 8 23:18:06.372631 systemd[1]: Finished ignition-files.service. Feb 8 23:18:06.521816 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:18:06.391920 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 8 23:18:06.404311 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:18:06.409325 systemd[1]: Starting ignition-quench.service... Feb 8 23:18:06.427140 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:18:06.429454 systemd[1]: Finished ignition-quench.service. Feb 8 23:18:06.465647 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:18:06.512860 systemd[1]: Reached target ignition-complete.target. Feb 8 23:18:06.594522 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:18:06.608855 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:18:06.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.608953 systemd[1]: Finished initrd-parse-etc.service. 
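Ops (11)/(12) and (15)/(16) above show the same retry both times the OEM partition is needed: an ext4 mount of /dev/disk/by-label/OEM fails with "device or resource busy", so a btrfs mount is tried next and succeeds. A sketch of that fallback, shelling out to mount(8) much as the log implies (illustrative only; the temp-dir naming mimics the /mnt/oem… directories above):

    import subprocess
    import tempfile

    DEVICE = "/dev/disk/by-label/OEM"

    def mount_with_fallback(device, fstypes=("ext4", "btrfs")):
        """Try each filesystem type in turn, as ops (11) and (12) above do."""
        mountpoint = tempfile.mkdtemp(prefix="oem", dir="/mnt")
        for fstype in fstypes:
            result = subprocess.run(["mount", "-t", fstype, device, mountpoint],
                                    capture_output=True)
            if result.returncode == 0:
                return mountpoint  # e.g. /mnt/oem1685065300 in the log
        raise RuntimeError(f"could not mount {device} as any of {fstypes}")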
Feb 8 23:18:06.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.615969 systemd[1]: Reached target initrd-fs.target. Feb 8 23:18:06.640654 kernel: audit: type=1130 audit(1707434286.614:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.640690 kernel: audit: type=1131 audit(1707434286.615:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.640656 systemd[1]: Reached target initrd.target. Feb 8 23:18:06.642259 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:18:06.643113 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:18:06.655997 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:18:06.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.671065 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:18:06.674772 kernel: audit: type=1130 audit(1707434286.659:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.683833 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:18:06.687393 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:18:06.691497 systemd[1]: Stopped target timers.target. Feb 8 23:18:06.694742 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:18:06.696812 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:18:06.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.700420 systemd[1]: Stopped target initrd.target. Feb 8 23:18:06.714689 kernel: audit: type=1131 audit(1707434286.700:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.714810 systemd[1]: Stopped target basic.target. Feb 8 23:18:06.717970 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:18:06.721762 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:18:06.728327 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:18:06.732118 systemd[1]: Stopped target remote-fs.target. Feb 8 23:18:06.735749 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:18:06.739578 systemd[1]: Stopped target sysinit.target. Feb 8 23:18:06.742942 systemd[1]: Stopped target local-fs.target. Feb 8 23:18:06.746391 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:18:06.749876 systemd[1]: Stopped target swap.target. Feb 8 23:18:06.752994 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:18:06.755306 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:18:06.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:18:06.758926 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:18:06.773360 kernel: audit: type=1131 audit(1707434286.758:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.773447 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:18:06.775629 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:18:06.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.779654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:18:06.794775 kernel: audit: type=1131 audit(1707434286.779:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.779792 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:18:06.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.796991 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:18:06.799174 systemd[1]: Stopped ignition-files.service. Feb 8 23:18:06.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.802750 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 8 23:18:06.805209 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 8 23:18:06.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.814703 iscsid[839]: iscsid shutting down. Feb 8 23:18:06.810216 systemd[1]: Stopping ignition-mount.service... Feb 8 23:18:06.814780 systemd[1]: Stopping iscsid.service... Feb 8 23:18:06.820676 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:18:06.820863 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:18:06.826671 ignition[1027]: INFO : Ignition 2.14.0 Feb 8 23:18:06.826671 ignition[1027]: INFO : Stage: umount Feb 8 23:18:06.826671 ignition[1027]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:18:06.826671 ignition[1027]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:18:06.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.843441 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:18:06.843441 ignition[1027]: INFO : umount: umount passed Feb 8 23:18:06.843441 ignition[1027]: INFO : Ignition finished successfully Feb 8 23:18:06.849935 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:18:06.853367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
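Ops (20) through (24) in the files stage earlier set presets to enabled for nvidia.service, waagent.service and the three prepare-* units. systemd preset files carry one directive per line ("enable <unit>"); a sketch that writes such a file follows — the path 20-ignition.preset is a guess for illustration, not taken from the log:

    # Units the op(20)-op(24) entries mark "preset to enabled".
    ENABLED = [
        "nvidia.service",
        "waagent.service",
        "prepare-cni-plugins.service",
        "prepare-critools.service",
        "prepare-helm.service",
    ]

    # Hypothetical preset file path; the directive syntax is standard systemd.
    with open("/etc/systemd/system-preset/20-ignition.preset", "w") as f:
        for unit in ENABLED:
            f.write(f"enable {unit}\n")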
Feb 8 23:18:06.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.853630 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:18:06.856519 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:18:06.856651 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:18:06.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.867687 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:18:06.868748 systemd[1]: Stopped iscsid.service. Feb 8 23:18:06.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.873121 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:18:06.873992 systemd[1]: Stopped ignition-mount.service. Feb 8 23:18:06.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.877697 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:18:06.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.877782 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:18:06.880351 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:18:06.880398 systemd[1]: Stopped ignition-disks.service. Feb 8 23:18:06.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.881066 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:18:06.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.881101 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:18:06.889181 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:18:06.889228 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:18:06.891766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:18:06.891808 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:18:06.895148 systemd[1]: Stopped target paths.target. 
Feb 8 23:18:06.908534 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:18:06.912874 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:18:06.917109 systemd[1]: Stopped target slices.target. Feb 8 23:18:06.920667 systemd[1]: Stopped target sockets.target. Feb 8 23:18:06.924106 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:18:06.924156 systemd[1]: Closed iscsid.socket. Feb 8 23:18:06.929178 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:18:06.929233 systemd[1]: Stopped ignition-setup.service. Feb 8 23:18:06.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.934658 systemd[1]: Stopping iscsiuio.service... Feb 8 23:18:06.937003 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:18:06.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.937087 systemd[1]: Stopped iscsiuio.service. Feb 8 23:18:06.939500 systemd[1]: Stopped target network.target. Feb 8 23:18:06.943595 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:18:06.943631 systemd[1]: Closed iscsiuio.socket. Feb 8 23:18:06.952421 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:18:06.955854 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:18:06.961898 systemd-networkd[830]: eth0: DHCPv6 lease lost Feb 8 23:18:06.965067 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:18:06.967409 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:18:06.969535 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:18:06.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.973338 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:18:06.975332 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:18:06.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.977000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:18:06.979000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:18:06.977809 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:18:06.977863 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:18:06.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.980804 systemd[1]: Stopping network-cleanup.service... Feb 8 23:18:06.984045 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
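The audit records threaded through this teardown all share one shape: SERVICE_START or SERVICE_STOP, a pid/uid/auid/ses block, and a msg='unit=… res=…' payload. A small sketch that extracts the unit and result from such a line (the sample is the systemd-networkd stop logged just above; the regex is ad hoc, not an official audit parser):

    import re

    LINE = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
            "subj=kernel msg='unit=systemd-networkd comm=\"systemd\" "
            "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? "
            "res=success'")

    m = re.search(r"SERVICE_(START|STOP).*?unit=([\w.@-]+).*?res=(\w+)", LINE)
    if m:
        action, unit, result = m.groups()
        print(action, unit, result)  # -> STOP systemd-networkd success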
Feb 8 23:18:07.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.984122 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:18:06.988491 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:18:07.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.988545 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:18:06.992001 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:18:06.992043 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:18:06.994128 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:18:07.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.998204 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:18:07.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:07.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:06.998324 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:18:07.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:07.003989 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:18:07.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:07.004075 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:18:07.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:07.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:07.008236 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:18:07.008284 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:18:07.013351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:18:07.013384 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:18:07.016902 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:18:07.016952 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:18:07.020492 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:18:07.020540 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:18:07.022282 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:18:07.022324 systemd[1]: Stopped dracut-cmdline-ask.service. 
Feb 8 23:18:07.026414 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:18:07.026461 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:18:07.028811 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:18:07.031698 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:18:07.031760 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:18:07.037204 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:18:07.037293 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:18:07.083839 kernel: hv_netvsc 000d3a64-d996-000d-3a64-d996000d3a64 eth0: Data path switched from VF: enP57044s1 Feb 8 23:18:07.103660 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:18:07.103781 systemd[1]: Stopped network-cleanup.service. Feb 8 23:18:07.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:07.109638 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:18:07.114474 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:18:07.125860 systemd[1]: Switching root. Feb 8 23:18:07.151082 systemd-journald[183]: Journal stopped Feb 8 23:18:19.156097 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 8 23:18:19.156126 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:18:19.156139 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:18:19.156150 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:18:19.156159 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:18:19.156170 kernel: SELinux: policy capability open_perms=1 Feb 8 23:18:19.156181 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:18:19.156192 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:18:19.156205 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:18:19.156213 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:18:19.156224 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:18:19.156235 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:18:19.156244 systemd[1]: Successfully loaded SELinux policy in 369.199ms. Feb 8 23:18:19.156257 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.418ms. Feb 8 23:18:19.156273 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:18:19.156283 systemd[1]: Detected virtualization microsoft. Feb 8 23:18:19.156295 systemd[1]: Detected architecture x86-64. Feb 8 23:18:19.156307 systemd[1]: Detected first boot. Feb 8 23:18:19.156319 systemd[1]: Hostname set to <ci-3510.3.2-a-5bade47376>. Feb 8 23:18:19.156330 systemd[1]: Initializing machine ID from random generator. Feb 8 23:18:19.156342 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:18:19.156351 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:18:19.156364 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:18:19.156377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:18:19.156387 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:18:19.156401 kernel: kauditd_printk_skb: 50 callbacks suppressed Feb 8 23:18:19.156412 kernel: audit: type=1334 audit(1707434298.695:91): prog-id=12 op=LOAD Feb 8 23:18:19.156422 kernel: audit: type=1334 audit(1707434298.695:92): prog-id=3 op=UNLOAD Feb 8 23:18:19.156433 kernel: audit: type=1334 audit(1707434298.703:93): prog-id=13 op=LOAD Feb 8 23:18:19.156446 kernel: audit: type=1334 audit(1707434298.708:94): prog-id=14 op=LOAD Feb 8 23:18:19.156455 kernel: audit: type=1334 audit(1707434298.708:95): prog-id=4 op=UNLOAD Feb 8 23:18:19.156466 kernel: audit: type=1334 audit(1707434298.708:96): prog-id=5 op=UNLOAD Feb 8 23:18:19.156477 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:18:19.156490 kernel: audit: type=1131 audit(1707434298.712:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.156502 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:18:19.156515 kernel: audit: type=1130 audit(1707434298.749:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.156524 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:18:19.156536 kernel: audit: type=1131 audit(1707434298.749:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.156548 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:18:19.156559 kernel: audit: type=1334 audit(1707434298.777:100): prog-id=12 op=UNLOAD Feb 8 23:18:19.156573 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:18:19.156587 systemd[1]: Created slice system-getty.slice. Feb 8 23:18:19.156597 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:18:19.156609 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:18:19.156621 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:18:19.156632 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:18:19.156643 systemd[1]: Created slice user.slice. Feb 8 23:18:19.156656 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:18:19.156669 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:18:19.156680 systemd[1]: Set up automount boot.automount. Feb 8 23:18:19.156693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:18:19.156705 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:18:19.156716 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:18:19.156728 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:18:19.156740 systemd[1]: Reached target integritysetup.target. Feb 8 23:18:19.156751 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:18:19.156765 systemd[1]: Reached target remote-fs.target. 
Feb 8 23:18:19.156777 systemd[1]: Reached target slices.target. Feb 8 23:18:19.156788 systemd[1]: Reached target swap.target. Feb 8 23:18:19.156799 systemd[1]: Reached target torcx.target. Feb 8 23:18:19.156810 systemd[1]: Reached target veritysetup.target. Feb 8 23:18:19.156821 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:18:19.156840 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:18:19.156853 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:18:19.156868 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:18:19.156879 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:18:19.156891 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:18:19.156903 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:18:19.156914 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:18:19.156926 systemd[1]: Mounting media.mount... Feb 8 23:18:19.156940 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:18:19.156954 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:18:19.156965 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:18:19.156976 systemd[1]: Mounting tmp.mount... Feb 8 23:18:19.156987 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:18:19.157000 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:18:19.157012 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:18:19.157024 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:18:19.157035 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:18:19.157049 systemd[1]: Starting modprobe@drm.service... Feb 8 23:18:19.157062 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:18:19.157074 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:18:19.157084 systemd[1]: Starting modprobe@loop.service... Feb 8 23:18:19.157098 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:18:19.157110 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:18:19.157120 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:18:19.157133 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:18:19.157148 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:18:19.157159 systemd[1]: Stopped systemd-journald.service. Feb 8 23:18:19.157171 systemd[1]: Starting systemd-journald.service... Feb 8 23:18:19.157183 kernel: loop: module loaded Feb 8 23:18:19.157193 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:18:19.157206 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:18:19.157219 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:18:19.157230 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:18:19.157241 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:18:19.157256 systemd[1]: Stopped verity-setup.service. Feb 8 23:18:19.157273 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:18:19.157293 kernel: fuse: init (API version 7.34) Feb 8 23:18:19.157312 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:18:19.157332 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:18:19.157354 systemd-journald[1170]: Journal started Feb 8 23:18:19.157425 systemd-journald[1170]: Runtime Journal (/run/log/journal/a4eda1a01767434b84de98486743a3fc) is 8.0M, max 159.0M, 151.0M free. 
Feb 8 23:18:09.313000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:18:09.940000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:18:09.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:18:09.954000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:18:09.954000 audit: BPF prog-id=10 op=LOAD Feb 8 23:18:09.954000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:18:09.954000 audit: BPF prog-id=11 op=LOAD Feb 8 23:18:09.954000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:18:11.237000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:18:11.237000 audit[1061]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:11.237000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:18:11.244000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:18:11.244000 audit[1061]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:11.244000 audit: CWD cwd="/" Feb 8 23:18:11.244000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:11.244000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:11.244000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:18:18.695000 audit: BPF prog-id=12 op=LOAD Feb 8 23:18:18.695000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:18:18.703000 audit: BPF prog-id=13 op=LOAD Feb 8 23:18:18.708000 audit: BPF prog-id=14 op=LOAD Feb 8 
23:18:18.708000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:18:18.708000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:18:18.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:18.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:18.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:18.777000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:18:19.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.085000 audit: BPF prog-id=15 op=LOAD Feb 8 23:18:19.085000 audit: BPF prog-id=16 op=LOAD Feb 8 23:18:19.085000 audit: BPF prog-id=17 op=LOAD Feb 8 23:18:19.085000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:18:19.085000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:18:19.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.152000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:18:19.152000 audit[1170]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd9df0a670 a2=4000 a3=7ffd9df0a70c items=0 ppid=1 pid=1170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:19.152000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:18:11.173645 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:18:18.694044 systemd[1]: Queued start job for default target multi-user.target. 
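The audit PROCTITLE records above carry the generator's command line as NUL-separated hex. Decoding the string logged at 23:18:11 recovers the argv; the final argument is truncated in the log and stays truncated here:

    # Hex copied verbatim from the PROCTITLE record, split at the 00 separators.
    PROCTITLE = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72"
        "00"
        "2F72756E2F73797374656D642F67656E657261746F72"
        "00"
        "2F72756E2F73797374656D642F67656E657261746F722E6561726C79"
        "00"
        "2F72756E2F73797374656D642F67656E657261746F722E6C61"
    )

    for arg in bytes.fromhex(PROCTITLE).split(b"\x00"):
        print(arg.decode())
    # /usr/lib/systemd/system-generators/torcx-generator
    # /run/systemd/generator
    # /run/systemd/generator.early
    # /run/systemd/generator.la   (cut off in the log)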
Feb 8 23:18:11.197447 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:18:18.709732 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:18:11.197509 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:18:11.197587 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:18:11.197616 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:18:11.197730 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:18:11.197767 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:18:11.198295 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:18:11.198389 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:18:11.198406 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:18:11.220649 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:18:11.220750 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:18:11.220804 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:18:11.220901 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:18:11.220956 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:18:11.220972 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:18:17.574577 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:17Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl 
Feb 8 23:18:17.574823 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:17Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:18:17.575141 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:17Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:18:17.575628 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:17Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:18:17.575701 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:17Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:18:17.575757 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-02-08T23:18:17Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:18:19.166082 systemd[1]: Started systemd-journald.service. Feb 8 23:18:19.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.168233 systemd[1]: Mounted media.mount. Feb 8 23:18:19.169733 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:18:19.171504 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:18:19.173361 systemd[1]: Mounted tmp.mount. Feb 8 23:18:19.174893 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:18:19.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.177171 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:18:19.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.179170 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:18:19.179327 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:18:19.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:18:19.182919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:18:19.183157 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:18:19.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.186362 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:18:19.186501 systemd[1]: Finished modprobe@drm.service. Feb 8 23:18:19.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.188548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:18:19.188689 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:18:19.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.191112 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:18:19.191247 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:18:19.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.193340 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:18:19.193473 systemd[1]: Finished modprobe@loop.service. Feb 8 23:18:19.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.195527 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:18:19.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.198122 systemd[1]: Finished systemd-remount-fs.service. 
Feb 8 23:18:19.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.201069 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:18:19.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.203178 systemd[1]: Reached target network-pre.target. Feb 8 23:18:19.206326 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:18:19.209694 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:18:19.211782 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:18:19.213800 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:18:19.216912 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:18:19.218954 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:18:19.219963 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:18:19.222017 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:18:19.223397 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:18:19.228766 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:18:19.234528 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:18:19.236740 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:18:19.244602 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:18:19.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.246927 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:18:19.260863 systemd-journald[1170]: Time spent on flushing to /var/log/journal/a4eda1a01767434b84de98486743a3fc is 30.842ms for 1182 entries. Feb 8 23:18:19.260863 systemd-journald[1170]: System Journal (/var/log/journal/a4eda1a01767434b84de98486743a3fc) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:18:19.337388 systemd-journald[1170]: Received client request to flush runtime journal. Feb 8 23:18:19.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.284038 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:18:19.338315 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:18:19.298233 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:18:19.302333 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:18:19.338241 systemd[1]: Finished systemd-journal-flush.service. 
Feb 8 23:18:19.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:19.628349 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:18:19.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:20.394532 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:18:20.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:20.396000 audit: BPF prog-id=18 op=LOAD Feb 8 23:18:20.397000 audit: BPF prog-id=19 op=LOAD Feb 8 23:18:20.397000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:18:20.397000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:18:20.398624 systemd[1]: Starting systemd-udevd.service... Feb 8 23:18:20.416911 systemd-udevd[1187]: Using default interface naming scheme 'v252'. Feb 8 23:18:20.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:20.642000 audit: BPF prog-id=20 op=LOAD Feb 8 23:18:20.638433 systemd[1]: Started systemd-udevd.service. Feb 8 23:18:20.644028 systemd[1]: Starting systemd-networkd.service... Feb 8 23:18:20.679068 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:18:20.761332 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:18:20.756000 audit: BPF prog-id=21 op=LOAD Feb 8 23:18:20.757000 audit: BPF prog-id=22 op=LOAD Feb 8 23:18:20.757000 audit: BPF prog-id=23 op=LOAD Feb 8 23:18:20.758801 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:18:20.796000 audit[1189]: AVC avc: denied { confidentiality } for pid=1189 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:18:20.817852 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:18:20.825066 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:18:20.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:20.831099 systemd[1]: Started systemd-userdbd.service. 
Feb 8 23:18:20.841625 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:18:20.841688 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:18:20.848856 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:18:20.796000 audit[1189]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561795a40990 a1=f884 a2=7f7054e34bc5 a3=5 items=12 ppid=1187 pid=1189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:20.796000 audit: CWD cwd="/" Feb 8 23:18:20.796000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=1 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=2 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=3 name=(null) inode=14947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=4 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=5 name=(null) inode=14948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=6 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=7 name=(null) inode=14949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=8 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=9 name=(null) inode=14950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=10 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PATH item=11 name=(null) inode=14951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:20.796000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:18:20.865770 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:18:20.865820 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:18:20.878478 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:18:20.878536 kernel: Console: switching 
to colour dummy device 80x25 Feb 8 23:18:20.878578 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:18:20.878631 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:18:21.295469 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:18:21.430587 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1203) Feb 8 23:18:21.446429 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:18:21.504397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:18:21.513813 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:18:21.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:21.517812 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:18:21.528826 systemd-networkd[1198]: lo: Link UP Feb 8 23:18:21.528835 systemd-networkd[1198]: lo: Gained carrier Feb 8 23:18:21.529387 systemd-networkd[1198]: Enumeration completed Feb 8 23:18:21.529666 systemd[1]: Started systemd-networkd.service. Feb 8 23:18:21.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:21.532939 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:18:21.554184 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:18:21.607428 kernel: mlx5_core ded4:00:02.0 enP57044s1: Link up Feb 8 23:18:21.644432 kernel: hv_netvsc 000d3a64-d996-000d-3a64-d996000d3a64 eth0: Data path switched to VF: enP57044s1 Feb 8 23:18:21.645591 systemd-networkd[1198]: enP57044s1: Link UP Feb 8 23:18:21.645873 systemd-networkd[1198]: eth0: Link UP Feb 8 23:18:21.645950 systemd-networkd[1198]: eth0: Gained carrier Feb 8 23:18:21.650693 systemd-networkd[1198]: enP57044s1: Gained carrier Feb 8 23:18:21.684537 systemd-networkd[1198]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:18:21.785477 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:18:21.812353 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:18:21.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:21.815545 systemd[1]: Reached target cryptsetup.target. Feb 8 23:18:21.819123 systemd[1]: Starting lvm2-activation.service... Feb 8 23:18:21.823767 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:18:21.850648 systemd[1]: Finished lvm2-activation.service. Feb 8 23:18:21.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:21.853528 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:18:21.855804 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:18:21.855854 systemd[1]: Reached target local-fs.target.
Feb 8 23:18:21.857833 systemd[1]: Reached target machines.target. Feb 8 23:18:21.861146 systemd[1]: Starting ldconfig.service... Feb 8 23:18:21.863301 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:18:21.863428 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:18:21.864601 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:18:21.867639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:18:21.871143 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:18:21.873268 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:18:21.873341 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:18:21.874524 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:18:21.947153 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1267 (bootctl) Feb 8 23:18:21.948629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:18:22.581031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:18:22.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:22.815819 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:18:23.006611 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:18:23.218659 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:18:23.249442 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:18:23.250146 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:18:23.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.343775 systemd-networkd[1198]: eth0: Gained IPv6LL Feb 8 23:18:23.349701 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:18:23.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.593441 systemd-fsck[1275]: fsck.fat 4.2 (2021-01-31) Feb 8 23:18:23.593441 systemd-fsck[1275]: /dev/sda1: 789 files, 115332/258078 clusters Feb 8 23:18:23.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.595926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:18:23.600953 systemd[1]: Mounting boot.mount... Feb 8 23:18:23.613016 systemd[1]: Mounted boot.mount. Feb 8 23:18:23.657759 systemd[1]: Finished systemd-boot-update.service. 
Feb 8 23:18:23.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.767509 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:18:23.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.771434 systemd[1]: Starting audit-rules.service... Feb 8 23:18:23.774962 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:18:23.779187 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:18:23.784884 systemd[1]: Starting systemd-resolved.service... Feb 8 23:18:23.783000 audit: BPF prog-id=24 op=LOAD Feb 8 23:18:23.788998 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:18:23.787000 audit: BPF prog-id=25 op=LOAD Feb 8 23:18:23.792319 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:18:23.819000 audit[1287]: SYSTEM_BOOT pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.820983 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:18:23.873094 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:18:23.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.881169 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:18:23.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.884197 systemd[1]: Reached target time-set.target. Feb 8 23:18:23.888339 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:18:23.890652 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:18:23.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:23.954363 systemd-resolved[1284]: Positive Trust Anchors: Feb 8 23:18:23.954401 systemd-resolved[1284]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:18:23.954578 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:18:24.055738 systemd-resolved[1284]: Using system hostname 'ci-3510.3.2-a-5bade47376'. Feb 8 23:18:24.057270 systemd[1]: Started systemd-resolved.service. Feb 8 23:18:24.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:24.059848 systemd[1]: Reached target network.target. Feb 8 23:18:24.061607 systemd[1]: Reached target network-online.target. Feb 8 23:18:24.063583 systemd[1]: Reached target nss-lookup.target. Feb 8 23:18:24.076000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:18:24.076000 audit[1302]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1f928fa0 a2=420 a3=0 items=0 ppid=1281 pid=1302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:24.076000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:18:24.077102 augenrules[1302]: No rules Feb 8 23:18:24.077500 systemd[1]: Finished audit-rules.service. Feb 8 23:18:24.157526 systemd-timesyncd[1286]: Contacted time server 77.68.25.145:123 (0.flatcar.pool.ntp.org). Feb 8 23:18:24.157612 systemd-timesyncd[1286]: Initial clock synchronization to Thu 2024-02-08 23:18:24.164058 UTC. Feb 8 23:18:29.527676 ldconfig[1266]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:18:29.536901 systemd[1]: Finished ldconfig.service. Feb 8 23:18:29.541342 systemd[1]: Starting systemd-update-done.service... Feb 8 23:18:29.560487 systemd[1]: Finished systemd-update-done.service. Feb 8 23:18:29.562864 systemd[1]: Reached target sysinit.target. Feb 8 23:18:29.564810 systemd[1]: Started motdgen.path. Feb 8 23:18:29.566456 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:18:29.569132 systemd[1]: Started logrotate.timer. Feb 8 23:18:29.570773 systemd[1]: Started mdadm.timer. Feb 8 23:18:29.572203 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:18:29.576004 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:18:29.576039 systemd[1]: Reached target paths.target. Feb 8 23:18:29.577720 systemd[1]: Reached target timers.target. Feb 8 23:18:29.579752 systemd[1]: Listening on dbus.socket. Feb 8 23:18:29.582450 systemd[1]: Starting docker.socket... Feb 8 23:18:29.586877 systemd[1]: Listening on sshd.socket. 
Feb 8 23:18:29.589359 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:18:29.589804 systemd[1]: Listening on docker.socket. Feb 8 23:18:29.592039 systemd[1]: Reached target sockets.target. Feb 8 23:18:29.594113 systemd[1]: Reached target basic.target. Feb 8 23:18:29.596158 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:18:29.596193 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:18:29.597121 systemd[1]: Starting containerd.service... Feb 8 23:18:29.600027 systemd[1]: Starting dbus.service... Feb 8 23:18:29.602712 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:18:29.605832 systemd[1]: Starting extend-filesystems.service... Feb 8 23:18:29.607936 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:18:29.609209 systemd[1]: Starting motdgen.service... Feb 8 23:18:29.612352 systemd[1]: Started nvidia.service. Feb 8 23:18:29.615563 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:18:29.618832 systemd[1]: Starting prepare-critools.service... Feb 8 23:18:29.622803 systemd[1]: Starting prepare-helm.service... Feb 8 23:18:29.626245 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:18:29.629480 systemd[1]: Starting sshd-keygen.service... Feb 8 23:18:29.634200 systemd[1]: Starting systemd-logind.service... Feb 8 23:18:29.636540 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:18:29.636626 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:18:29.637127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:18:29.637946 systemd[1]: Starting update-engine.service... Feb 8 23:18:29.641105 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:18:29.651061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:18:29.652176 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:18:29.679794 extend-filesystems[1313]: Found sda Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda1 Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda2 Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda3 Feb 8 23:18:29.681534 extend-filesystems[1313]: Found usr Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda4 Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda6 Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda7 Feb 8 23:18:29.681534 extend-filesystems[1313]: Found sda9 Feb 8 23:18:29.681534 extend-filesystems[1313]: Checking size of /dev/sda9 Feb 8 23:18:29.718979 jq[1328]: true Feb 8 23:18:29.719105 jq[1312]: false Feb 8 23:18:29.704948 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:18:29.705125 systemd[1]: Finished motdgen.service. Feb 8 23:18:29.712314 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:18:29.712500 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 8 23:18:29.734772 jq[1346]: true Feb 8 23:18:29.769233 systemd-logind[1326]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:18:29.774106 systemd-logind[1326]: New seat seat0. Feb 8 23:18:29.787922 extend-filesystems[1313]: Old size kept for /dev/sda9 Feb 8 23:18:29.791633 extend-filesystems[1313]: Found sr0 Feb 8 23:18:29.810487 tar[1334]: linux-amd64/helm Feb 8 23:18:29.810868 tar[1332]: ./ Feb 8 23:18:29.810868 tar[1332]: ./macvlan Feb 8 23:18:29.811109 tar[1333]: crictl Feb 8 23:18:29.794061 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:18:29.794267 systemd[1]: Finished extend-filesystems.service. Feb 8 23:18:29.891521 dbus-daemon[1311]: [system] SELinux support is enabled Feb 8 23:18:29.901396 env[1345]: time="2024-02-08T23:18:29.900537849Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:18:29.891689 systemd[1]: Started dbus.service. Feb 8 23:18:29.896315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:18:29.896345 systemd[1]: Reached target system-config.target. Feb 8 23:18:29.898982 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:18:29.899010 systemd[1]: Reached target user-config.target. Feb 8 23:18:29.905259 systemd[1]: Started systemd-logind.service. Feb 8 23:18:29.910279 dbus-daemon[1311]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 8 23:18:29.918289 bash[1369]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:18:29.915441 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:18:29.928546 systemd[1]: nvidia.service: Deactivated successfully. Feb 8 23:18:29.988188 tar[1332]: ./static Feb 8 23:18:30.041064 env[1345]: time="2024-02-08T23:18:30.040970919Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:18:30.041187 env[1345]: time="2024-02-08T23:18:30.041161777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:18:30.046522 env[1345]: time="2024-02-08T23:18:30.046483085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:18:30.046522 env[1345]: time="2024-02-08T23:18:30.046520496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:18:30.046840 env[1345]: time="2024-02-08T23:18:30.046810184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:18:30.046907 env[1345]: time="2024-02-08T23:18:30.046843794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 8 23:18:30.046907 env[1345]: time="2024-02-08T23:18:30.046862599Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:18:30.046907 env[1345]: time="2024-02-08T23:18:30.046876704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:18:30.047029 env[1345]: time="2024-02-08T23:18:30.046982436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:18:30.047266 env[1345]: time="2024-02-08T23:18:30.047242214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:18:30.047540 env[1345]: time="2024-02-08T23:18:30.047511496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:18:30.047609 env[1345]: time="2024-02-08T23:18:30.047541105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:18:30.047653 env[1345]: time="2024-02-08T23:18:30.047608425Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:18:30.047653 env[1345]: time="2024-02-08T23:18:30.047627131Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:18:30.069197 env[1345]: time="2024-02-08T23:18:30.069166540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:18:30.069285 env[1345]: time="2024-02-08T23:18:30.069205351Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:18:30.069285 env[1345]: time="2024-02-08T23:18:30.069224157Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:18:30.069285 env[1345]: time="2024-02-08T23:18:30.069277073Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069429 env[1345]: time="2024-02-08T23:18:30.069297479Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069429 env[1345]: time="2024-02-08T23:18:30.069358098Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069429 env[1345]: time="2024-02-08T23:18:30.069378504Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069429 env[1345]: time="2024-02-08T23:18:30.069397910Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069570 env[1345]: time="2024-02-08T23:18:30.069427819Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069570 env[1345]: time="2024-02-08T23:18:30.069447825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.069570 env[1345]: time="2024-02-08T23:18:30.069465530Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 8 23:18:30.069570 env[1345]: time="2024-02-08T23:18:30.069482635Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:18:30.069715 env[1345]: time="2024-02-08T23:18:30.069591868Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:18:30.069715 env[1345]: time="2024-02-08T23:18:30.069687297Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:18:30.070129 env[1345]: time="2024-02-08T23:18:30.070105023Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:18:30.070184 env[1345]: time="2024-02-08T23:18:30.070149637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070184 env[1345]: time="2024-02-08T23:18:30.070169643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:18:30.070263 env[1345]: time="2024-02-08T23:18:30.070246666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070302 env[1345]: time="2024-02-08T23:18:30.070272374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070366 env[1345]: time="2024-02-08T23:18:30.070349697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070422 env[1345]: time="2024-02-08T23:18:30.070373404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070422 env[1345]: time="2024-02-08T23:18:30.070393010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070502 env[1345]: time="2024-02-08T23:18:30.070428821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070502 env[1345]: time="2024-02-08T23:18:30.070447927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070502 env[1345]: time="2024-02-08T23:18:30.070466633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070502 env[1345]: time="2024-02-08T23:18:30.070486238Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:18:30.070659 env[1345]: time="2024-02-08T23:18:30.070630482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070659 env[1345]: time="2024-02-08T23:18:30.070650088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070735 env[1345]: time="2024-02-08T23:18:30.070666793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.070735 env[1345]: time="2024-02-08T23:18:30.070684798Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:18:30.070735 env[1345]: time="2024-02-08T23:18:30.070706305Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:18:30.070735 env[1345]: time="2024-02-08T23:18:30.070722310Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:18:30.070874 env[1345]: time="2024-02-08T23:18:30.070747417Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:18:30.070874 env[1345]: time="2024-02-08T23:18:30.070811437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:18:30.071158 env[1345]: time="2024-02-08T23:18:30.071091221Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.071177147Z" level=info msg="Connect containerd service" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.071223961Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.071945179Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073066818Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073113833Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073171550Z" level=info msg="containerd successfully booted in 0.186892s" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073523256Z" level=info msg="Start subscribing containerd event" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073584475Z" level=info msg="Start recovering state" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073653396Z" level=info msg="Start event monitor" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073669100Z" level=info msg="Start snapshots syncer" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073678203Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:18:30.097814 env[1345]: time="2024-02-08T23:18:30.073689306Z" level=info msg="Start streaming server" Feb 8 23:18:30.073260 systemd[1]: Started containerd.service. Feb 8 23:18:30.134786 tar[1332]: ./vlan Feb 8 23:18:30.266557 tar[1332]: ./portmap Feb 8 23:18:30.348493 tar[1332]: ./host-local Feb 8 23:18:30.409414 tar[1332]: ./vrf Feb 8 23:18:30.488320 tar[1332]: ./bridge Feb 8 23:18:30.492903 update_engine[1327]: I0208 23:18:30.492491 1327 main.cc:92] Flatcar Update Engine starting Feb 8 23:18:30.540620 systemd[1]: Started update-engine.service. Feb 8 23:18:30.548679 update_engine[1327]: I0208 23:18:30.540668 1327 update_check_scheduler.cc:74] Next update check in 2m7s Feb 8 23:18:30.545891 systemd[1]: Started locksmithd.service. Feb 8 23:18:30.574589 tar[1332]: ./tuning Feb 8 23:18:30.644159 tar[1332]: ./firewall Feb 8 23:18:30.739094 tar[1332]: ./host-device Feb 8 23:18:30.811392 tar[1334]: linux-amd64/LICENSE Feb 8 23:18:30.811916 tar[1334]: linux-amd64/README.md Feb 8 23:18:30.821577 tar[1332]: ./sbr Feb 8 23:18:30.825587 systemd[1]: Finished prepare-helm.service. Feb 8 23:18:30.860739 tar[1332]: ./loopback Feb 8 23:18:30.895525 tar[1332]: ./dhcp Feb 8 23:18:31.031917 tar[1332]: ./ptp Feb 8 23:18:31.081392 systemd[1]: Finished prepare-critools.service. Feb 8 23:18:31.090336 tar[1332]: ./ipvlan Feb 8 23:18:31.131294 tar[1332]: ./bandwidth Feb 8 23:18:31.209612 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:18:32.116616 sshd_keygen[1335]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:18:32.136619 systemd[1]: Finished sshd-keygen.service. Feb 8 23:18:32.141084 systemd[1]: Starting issuegen.service... Feb 8 23:18:32.144156 systemd[1]: Started waagent.service. Feb 8 23:18:32.148962 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:18:32.149147 systemd[1]: Finished issuegen.service. Feb 8 23:18:32.152672 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:18:32.160340 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:18:32.166938 systemd[1]: Started getty@tty1.service. Feb 8 23:18:32.170302 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:18:32.173075 systemd[1]: Reached target getty.target. Feb 8 23:18:32.175096 systemd[1]: Reached target multi-user.target. Feb 8 23:18:32.178726 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:18:32.187934 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:18:32.188114 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 8 23:18:32.190790 systemd[1]: Startup finished in 875ms (firmware) + 23.190s (loader) + 894ms (kernel) + 16.013s (initrd) + 23.065s (userspace) = 1min 4.039s. Feb 8 23:18:32.270165 locksmithd[1418]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:18:32.589050 login[1441]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 8 23:18:32.601067 login[1440]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:18:32.626500 systemd[1]: Created slice user-500.slice. Feb 8 23:18:32.627915 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:18:32.631524 systemd-logind[1326]: New session 2 of user core. Feb 8 23:18:32.640081 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:18:32.641707 systemd[1]: Starting user@500.service... Feb 8 23:18:32.668273 (systemd)[1444]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:18:32.813377 systemd[1444]: Queued start job for default target default.target. Feb 8 23:18:32.813995 systemd[1444]: Reached target paths.target. Feb 8 23:18:32.814027 systemd[1444]: Reached target sockets.target. Feb 8 23:18:32.814045 systemd[1444]: Reached target timers.target. Feb 8 23:18:32.814061 systemd[1444]: Reached target basic.target. Feb 8 23:18:32.814184 systemd[1]: Started user@500.service. Feb 8 23:18:32.815454 systemd[1]: Started session-2.scope. Feb 8 23:18:32.816462 systemd[1444]: Reached target default.target. Feb 8 23:18:32.816667 systemd[1444]: Startup finished in 142ms. Feb 8 23:18:33.591311 login[1441]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:18:33.596998 systemd-logind[1326]: New session 1 of user core. Feb 8 23:18:33.597618 systemd[1]: Started session-1.scope. Feb 8 23:18:38.537312 waagent[1435]: 2024-02-08T23:18:38.537192Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:18:38.552633 waagent[1435]: 2024-02-08T23:18:38.552550Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:18:38.554988 waagent[1435]: 2024-02-08T23:18:38.554923Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:18:38.557317 waagent[1435]: 2024-02-08T23:18:38.557246Z INFO Daemon Daemon Run daemon Feb 8 23:18:38.559839 waagent[1435]: 2024-02-08T23:18:38.559772Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:18:38.572491 waagent[1435]: 2024-02-08T23:18:38.572362Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 8 23:18:38.579897 waagent[1435]: 2024-02-08T23:18:38.579790Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:18:38.584639 waagent[1435]: 2024-02-08T23:18:38.584578Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:18:38.596458 waagent[1435]: 2024-02-08T23:18:38.585935Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:18:38.596458 waagent[1435]: 2024-02-08T23:18:38.587927Z INFO Daemon Daemon Activate resource disk Feb 8 23:18:38.596458 waagent[1435]: 2024-02-08T23:18:38.589452Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:18:38.599643 waagent[1435]: 2024-02-08T23:18:38.599582Z INFO Daemon Daemon Found device: None Feb 8 23:18:38.613742 waagent[1435]: 2024-02-08T23:18:38.600857Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:18:38.613742 waagent[1435]: 2024-02-08T23:18:38.601623Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:18:38.613742 waagent[1435]: 2024-02-08T23:18:38.603321Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:18:38.613742 waagent[1435]: 2024-02-08T23:18:38.604119Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:18:38.613953 waagent[1435]: 2024-02-08T23:18:38.613831Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:18:38.620024 waagent[1435]: 2024-02-08T23:18:38.619919Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:18:38.627626 waagent[1435]: 2024-02-08T23:18:38.621192Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:18:38.627626 waagent[1435]: 2024-02-08T23:18:38.621907Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:18:38.757123 waagent[1435]: 2024-02-08T23:18:38.756940Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:18:38.850852 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:18:38.856621 waagent[1435]: 2024-02-08T23:18:38.856497Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:18:38.871723 waagent[1435]: 2024-02-08T23:18:38.858390Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:18:38.871723 waagent[1435]: 2024-02-08T23:18:38.859704Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 8 23:18:38.871723 waagent[1435]: 2024-02-08T23:18:38.860468Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:18:38.871723 waagent[1435]: 2024-02-08T23:18:38.861621Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:18:38.871723 waagent[1435]: 2024-02-08T23:18:38.862374Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:18:39.108940 waagent[1435]: 2024-02-08T23:18:39.108778Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:18:39.112877 waagent[1435]: 2024-02-08T23:18:39.112828Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:18:39.115549 waagent[1435]: 2024-02-08T23:18:39.115481Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:18:39.679969 waagent[1435]: 2024-02-08T23:18:39.679797Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:18:39.692133 waagent[1435]: 2024-02-08T23:18:39.692045Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:18:39.694899 waagent[1435]: 2024-02-08T23:18:39.694829Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:18:39.772731 waagent[1435]: 2024-02-08T23:18:39.772605Z INFO Daemon Daemon Found private key matching thumbprint 20F6F088945D2A82C32200F61EAB1023F7309EF6 Feb 8 23:18:39.782766 waagent[1435]: 2024-02-08T23:18:39.774121Z INFO Daemon Daemon Certificate with thumbprint A0412E7CB43CF37EE051D1229F83298B8D530D37 has no matching private key. Feb 8 23:18:39.782766 waagent[1435]: 2024-02-08T23:18:39.775104Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:18:39.822742 waagent[1435]: 2024-02-08T23:18:39.822655Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 4ebad936-cd51-48f9-bd4f-4338cdbbdc31 New eTag: 17391194539866329956] Feb 8 23:18:39.831003 waagent[1435]: 2024-02-08T23:18:39.824946Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:18:39.835715 waagent[1435]: 2024-02-08T23:18:39.835653Z INFO Daemon Daemon Starting provisioning Feb 8 23:18:39.841991 waagent[1435]: 2024-02-08T23:18:39.836967Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:18:39.841991 waagent[1435]: 2024-02-08T23:18:39.837805Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-5bade47376] Feb 8 23:18:39.853756 waagent[1435]: 2024-02-08T23:18:39.853654Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-5bade47376] Feb 8 23:18:39.860948 waagent[1435]: 2024-02-08T23:18:39.855119Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:18:39.860948 waagent[1435]: 2024-02-08T23:18:39.856052Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:18:39.869766 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:18:39.870025 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:18:39.870104 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:18:39.870512 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:18:39.876469 systemd-networkd[1198]: eth0: DHCPv6 lease lost Feb 8 23:18:39.878062 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:18:39.878270 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:18:39.880974 systemd[1]: Starting systemd-networkd.service... 
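168.63.129.16 is the fixed Azure wireserver address, which is why the daemon first tests for a route to it and then examines /proc/net/route for the primary interface. Note that /proc/net/route stores IPv4 addresses as little-endian hex, so any such check needs a byte swap. A minimal sketch of that route test, simplified to treat either a default route or a host route to the target as success and to ignore masks and metrics:

    import socket, struct

    WIRESERVER = "168.63.129.16"  # fixed Azure wireserver address, per the log

    def hex_to_ip(h):
        # /proc/net/route uses little-endian hex: "10813FA8" -> 168.63.129.16
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    def route_exists(target=WIRESERVER):
        with open("/proc/net/route") as f:
            next(f)  # header: Iface Destination Gateway Flags ...
            for entry in f:
                dest = entry.split()[1]  # Destination column, e.g. "10813FA8"
                if hex_to_ip(dest) in (target, "0.0.0.0"):
                    return True  # host route, or a default route that covers it
        return False

    print(route_exists())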
Feb 8 23:18:39.911762 systemd-networkd[1493]: enP57044s1: Link UP Feb 8 23:18:39.911772 systemd-networkd[1493]: enP57044s1: Gained carrier Feb 8 23:18:39.913106 systemd-networkd[1493]: eth0: Link UP Feb 8 23:18:39.913114 systemd-networkd[1493]: eth0: Gained carrier Feb 8 23:18:39.913566 systemd-networkd[1493]: lo: Link UP Feb 8 23:18:39.913574 systemd-networkd[1493]: lo: Gained carrier Feb 8 23:18:39.913893 systemd-networkd[1493]: eth0: Gained IPv6LL Feb 8 23:18:39.914165 systemd-networkd[1493]: Enumeration completed Feb 8 23:18:39.919063 waagent[1435]: 2024-02-08T23:18:39.915477Z INFO Daemon Daemon Create user account if not exists Feb 8 23:18:39.914269 systemd[1]: Started systemd-networkd.service. Feb 8 23:18:39.917150 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:18:39.920612 waagent[1435]: 2024-02-08T23:18:39.920511Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:18:39.923396 waagent[1435]: 2024-02-08T23:18:39.923319Z INFO Daemon Daemon Configure sudoer Feb 8 23:18:39.924246 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:18:39.927070 waagent[1435]: 2024-02-08T23:18:39.926982Z INFO Daemon Daemon Configure sshd Feb 8 23:18:39.929427 waagent[1435]: 2024-02-08T23:18:39.929327Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:18:40.001496 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:18:40.004988 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:19:09.347403 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 8 23:19:10.272505 waagent[1435]: 2024-02-08T23:19:10.272377Z INFO Daemon Daemon Provisioning complete Feb 8 23:19:10.289957 waagent[1435]: 2024-02-08T23:19:10.289874Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:19:10.296171 waagent[1435]: 2024-02-08T23:19:10.291094Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:19:10.296171 waagent[1435]: 2024-02-08T23:19:10.292660Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:19:10.560876 waagent[1502]: 2024-02-08T23:19:10.560688Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:19:10.561633 waagent[1502]: 2024-02-08T23:19:10.561564Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:19:10.561793 waagent[1502]: 2024-02-08T23:19:10.561737Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:10.573081 waagent[1502]: 2024-02-08T23:19:10.573001Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:19:10.573254 waagent[1502]: 2024-02-08T23:19:10.573199Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:19:10.634452 waagent[1502]: 2024-02-08T23:19:10.634309Z INFO ExtHandler ExtHandler Found private key matching thumbprint 20F6F088945D2A82C32200F61EAB1023F7309EF6 Feb 8 23:19:10.634729 waagent[1502]: 2024-02-08T23:19:10.634662Z INFO ExtHandler ExtHandler Certificate with thumbprint A0412E7CB43CF37EE051D1229F83298B8D530D37 has no matching private key. 
Feb 8 23:19:10.634973 waagent[1502]: 2024-02-08T23:19:10.634920Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:19:10.653469 waagent[1502]: 2024-02-08T23:19:10.653389Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 1d5979d3-198b-4ac0-87f8-eb38526c33e5 New eTag: 17391194539866329956] Feb 8 23:19:10.654064 waagent[1502]: 2024-02-08T23:19:10.654003Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:19:10.730670 waagent[1502]: 2024-02-08T23:19:10.730507Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:19:10.740072 waagent[1502]: 2024-02-08T23:19:10.739991Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1502 Feb 8 23:19:10.743429 waagent[1502]: 2024-02-08T23:19:10.743353Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:19:10.744649 waagent[1502]: 2024-02-08T23:19:10.744591Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:19:10.821734 waagent[1502]: 2024-02-08T23:19:10.821587Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:19:10.822165 waagent[1502]: 2024-02-08T23:19:10.822089Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:19:10.830675 waagent[1502]: 2024-02-08T23:19:10.830619Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:19:10.831143 waagent[1502]: 2024-02-08T23:19:10.831084Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:19:10.832252 waagent[1502]: 2024-02-08T23:19:10.832184Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:19:10.833585 waagent[1502]: 2024-02-08T23:19:10.833525Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:19:10.834001 waagent[1502]: 2024-02-08T23:19:10.833943Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:19:10.834153 waagent[1502]: 2024-02-08T23:19:10.834105Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:10.834701 waagent[1502]: 2024-02-08T23:19:10.834645Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 8 23:19:10.834995 waagent[1502]: 2024-02-08T23:19:10.834936Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:19:10.834995 waagent[1502]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:19:10.834995 waagent[1502]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:19:10.834995 waagent[1502]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:19:10.834995 waagent[1502]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:10.834995 waagent[1502]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:10.834995 waagent[1502]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:10.838177 waagent[1502]: 2024-02-08T23:19:10.837979Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:19:10.838947 waagent[1502]: 2024-02-08T23:19:10.838890Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:19:10.839210 waagent[1502]: 2024-02-08T23:19:10.839163Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:19:10.839315 waagent[1502]: 2024-02-08T23:19:10.839047Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:19:10.839670 waagent[1502]: 2024-02-08T23:19:10.839592Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:10.840297 waagent[1502]: 2024-02-08T23:19:10.840233Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:19:10.840461 waagent[1502]: 2024-02-08T23:19:10.840392Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:19:10.840602 waagent[1502]: 2024-02-08T23:19:10.840555Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:19:10.842001 waagent[1502]: 2024-02-08T23:19:10.841939Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:19:10.842110 waagent[1502]: 2024-02-08T23:19:10.842049Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 8 23:19:10.842501 waagent[1502]: 2024-02-08T23:19:10.842444Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:19:10.853542 waagent[1502]: 2024-02-08T23:19:10.853487Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:19:10.855070 waagent[1502]: 2024-02-08T23:19:10.855023Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:19:10.856079 waagent[1502]: 2024-02-08T23:19:10.856030Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:19:10.895845 waagent[1502]: 2024-02-08T23:19:10.895775Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
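The MonitorHandler dump above is /proc/net/route verbatim, so the Destination, Gateway, and Mask columns are little-endian hex. Decoding the five rows with the same byte-order trick shows a default route via 10.200.8.1, the on-link subnet, and host routes to the gateway, the wireserver, and 169.254.169.254 (the usual instance metadata address):

    import socket, struct

    def ip(h):  # little-endian hex -> dotted quad
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    rows = [  # (Destination, Gateway, Mask) copied from the dump above
        ("00000000", "0108C80A", "00000000"),
        ("0008C80A", "00000000", "00FFFFFF"),
        ("0108C80A", "00000000", "FFFFFFFF"),
        ("10813FA8", "0108C80A", "FFFFFFFF"),
        ("FEA9FEA9", "0108C80A", "FFFFFFFF"),
    ]
    for dest, gw, mask in rows:
        print(f"{ip(dest)}/{bin(int(mask, 16)).count('1')} via {ip(gw)}")
    # 0.0.0.0/0 via 10.200.8.1           (default route)
    # 10.200.8.0/24 via 0.0.0.0          (on-link subnet)
    # 10.200.8.1/32 via 0.0.0.0          (gateway host route)
    # 168.63.129.16/32 via 10.200.8.1    (wireserver)
    # 169.254.169.254/32 via 10.200.8.1  (instance metadata)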
Feb 8 23:19:10.915841 waagent[1502]: 2024-02-08T23:19:10.915726Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1493' Feb 8 23:19:10.987840 waagent[1502]: 2024-02-08T23:19:10.987677Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:19:10.987840 waagent[1502]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:19:10.987840 waagent[1502]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:19:10.987840 waagent[1502]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:d9:96 brd ff:ff:ff:ff:ff:ff Feb 8 23:19:10.987840 waagent[1502]: 3: enP57044s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:d9:96 brd ff:ff:ff:ff:ff:ff\ altname enP57044p0s2 Feb 8 23:19:10.987840 waagent[1502]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:19:10.987840 waagent[1502]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:19:10.987840 waagent[1502]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:19:10.987840 waagent[1502]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:19:10.987840 waagent[1502]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:19:10.987840 waagent[1502]: 2: eth0 inet6 fe80::20d:3aff:fe64:d996/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:19:11.222774 waagent[1502]: 2024-02-08T23:19:11.222699Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 8 23:19:11.225912 waagent[1502]: 2024-02-08T23:19:11.225812Z INFO EnvHandler ExtHandler Firewall rules: Feb 8 23:19:11.225912 waagent[1502]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:11.225912 waagent[1502]: pkts bytes target prot opt in out source destination Feb 8 23:19:11.225912 waagent[1502]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:11.225912 waagent[1502]: pkts bytes target prot opt in out source destination Feb 8 23:19:11.225912 waagent[1502]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:11.225912 waagent[1502]: pkts bytes target prot opt in out source destination Feb 8 23:19:11.225912 waagent[1502]: 3 856 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:19:11.225912 waagent[1502]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:19:11.227217 waagent[1502]: 2024-02-08T23:19:11.227161Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:19:11.293616 waagent[1502]: 2024-02-08T23:19:11.293543Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:19:12.297004 waagent[1435]: 2024-02-08T23:19:12.296795Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:19:12.303632 waagent[1435]: 2024-02-08T23:19:12.303554Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:19:13.304265 waagent[1542]: 2024-02-08T23:19:13.304145Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:19:13.305026 waagent[1542]: 2024-02-08T23:19:13.304938Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:19:13.305174 
waagent[1542]: 2024-02-08T23:19:13.305118Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:19:13.314927 waagent[1542]: 2024-02-08T23:19:13.314829Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:19:13.315306 waagent[1542]: 2024-02-08T23:19:13.315248Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:19:13.315486 waagent[1542]: 2024-02-08T23:19:13.315432Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:13.327037 waagent[1542]: 2024-02-08T23:19:13.326964Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:19:13.335631 waagent[1542]: 2024-02-08T23:19:13.335572Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:19:13.336545 waagent[1542]: 2024-02-08T23:19:13.336487Z INFO ExtHandler Feb 8 23:19:13.336697 waagent[1542]: 2024-02-08T23:19:13.336646Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 913bf565-83be-4388-a740-e19e43aeb472 eTag: 17391194539866329956 source: Fabric] Feb 8 23:19:13.337386 waagent[1542]: 2024-02-08T23:19:13.337328Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 8 23:19:13.338490 waagent[1542]: 2024-02-08T23:19:13.338429Z INFO ExtHandler Feb 8 23:19:13.338632 waagent[1542]: 2024-02-08T23:19:13.338583Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:19:13.345008 waagent[1542]: 2024-02-08T23:19:13.344955Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:19:13.345450 waagent[1542]: 2024-02-08T23:19:13.345384Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:19:13.366527 waagent[1542]: 2024-02-08T23:19:13.366462Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 8 23:19:13.430310 waagent[1542]: 2024-02-08T23:19:13.430188Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A0412E7CB43CF37EE051D1229F83298B8D530D37', 'hasPrivateKey': False} Feb 8 23:19:13.431251 waagent[1542]: 2024-02-08T23:19:13.431184Z INFO ExtHandler Downloaded certificate {'thumbprint': '20F6F088945D2A82C32200F61EAB1023F7309EF6', 'hasPrivateKey': True} Feb 8 23:19:13.432241 waagent[1542]: 2024-02-08T23:19:13.432179Z INFO ExtHandler Fetch goal state completed Feb 8 23:19:13.457849 waagent[1542]: 2024-02-08T23:19:13.457764Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1542 Feb 8 23:19:13.461112 waagent[1542]: 2024-02-08T23:19:13.461045Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:19:13.462550 waagent[1542]: 2024-02-08T23:19:13.462488Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:19:13.467594 waagent[1542]: 2024-02-08T23:19:13.467541Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:19:13.467958 waagent[1542]: 2024-02-08T23:19:13.467900Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:19:13.476234 waagent[1542]: 2024-02-08T23:19:13.476176Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 8 23:19:13.476736 waagent[1542]: 2024-02-08T23:19:13.476676Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:19:13.499981 waagent[1542]: 2024-02-08T23:19:13.499859Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 8 23:19:13.503393 waagent[1542]: 2024-02-08T23:19:13.503274Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 8 23:19:13.509475 waagent[1542]: 2024-02-08T23:19:13.509383Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:19:13.510906 waagent[1542]: 2024-02-08T23:19:13.510845Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:19:13.511389 waagent[1542]: 2024-02-08T23:19:13.511333Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:19:13.511568 waagent[1542]: 2024-02-08T23:19:13.511516Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:13.512102 waagent[1542]: 2024-02-08T23:19:13.512043Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:19:13.512384 waagent[1542]: 2024-02-08T23:19:13.512329Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:19:13.512384 waagent[1542]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:19:13.512384 waagent[1542]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:19:13.512384 waagent[1542]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:19:13.512384 waagent[1542]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:13.512384 waagent[1542]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:13.512384 waagent[1542]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:13.514604 waagent[1542]: 2024-02-08T23:19:13.514485Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:19:13.515285 waagent[1542]: 2024-02-08T23:19:13.515221Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:19:13.515542 waagent[1542]: 2024-02-08T23:19:13.515485Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:19:13.519087 waagent[1542]: 2024-02-08T23:19:13.518984Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:19:13.519515 waagent[1542]: 2024-02-08T23:19:13.519448Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:13.520065 waagent[1542]: 2024-02-08T23:19:13.520003Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:19:13.520418 waagent[1542]: 2024-02-08T23:19:13.520352Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:19:13.520804 waagent[1542]: 2024-02-08T23:19:13.520746Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
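The persistent-rule setup fails because /lib is read-only on Flatcar (the Errno 30 above), but the runtime rules do get installed, as the counters in the next dump show: TCP to the wireserver is accepted for DNS (dpt:53) from any user and for UID 0 (the agent itself), and anything else to 168.63.129.16 in state INVALID or NEW is dropped. A hypothetical reconstruction of those three OUTPUT-chain rules, not the agent's actual code:

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Order matters: the accepts must precede the final drop.
    rules = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],                                     # DNS TCP, any user
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],  # root, i.e. the agent
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w"] + rule, check=True)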
Feb 8 23:19:13.521125 waagent[1542]: 2024-02-08T23:19:13.521071Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:19:13.521881 waagent[1542]: 2024-02-08T23:19:13.521828Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:19:13.523837 waagent[1542]: 2024-02-08T23:19:13.523723Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:19:13.539059 waagent[1542]: 2024-02-08T23:19:13.538984Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:19:13.539059 waagent[1542]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:19:13.539059 waagent[1542]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:19:13.539059 waagent[1542]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:d9:96 brd ff:ff:ff:ff:ff:ff Feb 8 23:19:13.539059 waagent[1542]: 3: enP57044s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:d9:96 brd ff:ff:ff:ff:ff:ff\ altname enP57044p0s2 Feb 8 23:19:13.539059 waagent[1542]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:19:13.539059 waagent[1542]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:19:13.539059 waagent[1542]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:19:13.539059 waagent[1542]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:19:13.539059 waagent[1542]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:19:13.539059 waagent[1542]: 2: eth0 inet6 fe80::20d:3aff:fe64:d996/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:19:13.543389 waagent[1542]: 2024-02-08T23:19:13.543280Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:19:13.548514 waagent[1542]: 2024-02-08T23:19:13.548236Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:19:13.623266 waagent[1542]: 2024-02-08T23:19:13.623197Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:19:13.623266 waagent[1542]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:13.623266 waagent[1542]: pkts bytes target prot opt in out source destination Feb 8 23:19:13.623266 waagent[1542]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:13.623266 waagent[1542]: pkts bytes target prot opt in out source destination Feb 8 23:19:13.623266 waagent[1542]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:13.623266 waagent[1542]: pkts bytes target prot opt in out source destination Feb 8 23:19:13.623266 waagent[1542]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:19:13.623266 waagent[1542]: 115 14024 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:19:13.623266 waagent[1542]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:19:13.626560 waagent[1542]: 2024-02-08T23:19:13.626505Z INFO ExtHandler ExtHandler Feb 8 23:19:13.627577 waagent[1542]: 2024-02-08T23:19:13.627521Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1adc19c6-6dcc-48a8-b725-9d1e6ae9f04f correlation 7e216879-97b0-47eb-a767-c99828cf6a13 created: 2024-02-08T23:17:17.303169Z] Feb 8 23:19:13.629261 waagent[1542]: 2024-02-08T23:19:13.629199Z INFO ExtHandler ExtHandler No extension handlers found, not processing 
anything. Feb 8 23:19:13.631043 waagent[1542]: 2024-02-08T23:19:13.630984Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Feb 8 23:19:13.650551 waagent[1542]: 2024-02-08T23:19:13.650482Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:19:13.660545 waagent[1542]: 2024-02-08T23:19:13.660461Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 60314881-8483-4916-BBDA-ADFE5588B559;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:19:16.247624 update_engine[1327]: I0208 23:19:16.247546 1327 update_attempter.cc:509] Updating boot flags... Feb 8 23:19:54.765050 systemd[1]: Created slice system-sshd.slice. Feb 8 23:19:54.766740 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.12.6:55794.service. Feb 8 23:19:55.603751 sshd[1623]: Accepted publickey for core from 10.200.12.6 port 55794 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:19:55.605497 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:19:55.609890 systemd-logind[1326]: New session 3 of user core. Feb 8 23:19:55.611683 systemd[1]: Started session-3.scope. Feb 8 23:19:56.140986 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.12.6:55808.service. Feb 8 23:19:56.759216 sshd[1628]: Accepted publickey for core from 10.200.12.6 port 55808 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:19:56.760942 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:19:56.765749 systemd[1]: Started session-4.scope. Feb 8 23:19:56.766350 systemd-logind[1326]: New session 4 of user core. Feb 8 23:19:57.203867 sshd[1628]: pam_unix(sshd:session): session closed for user core Feb 8 23:19:57.207434 systemd[1]: sshd@1-10.200.8.4:22-10.200.12.6:55808.service: Deactivated successfully. Feb 8 23:19:57.208540 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:19:57.209203 systemd-logind[1326]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:19:57.209970 systemd-logind[1326]: Removed session 4. Feb 8 23:19:57.309472 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.12.6:59818.service. Feb 8 23:19:57.931254 sshd[1634]: Accepted publickey for core from 10.200.12.6 port 59818 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:19:57.933003 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:19:57.938002 systemd[1]: Started session-5.scope. Feb 8 23:19:57.938641 systemd-logind[1326]: New session 5 of user core. Feb 8 23:19:58.367276 sshd[1634]: pam_unix(sshd:session): session closed for user core Feb 8 23:19:58.370682 systemd[1]: sshd@2-10.200.8.4:22-10.200.12.6:59818.service: Deactivated successfully. Feb 8 23:19:58.371659 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:19:58.372292 systemd-logind[1326]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:19:58.373055 systemd-logind[1326]: Removed session 5. Feb 8 23:19:58.473028 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.12.6:59820.service. Feb 8 23:19:59.114219 sshd[1640]: Accepted publickey for core from 10.200.12.6 port 59820 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:19:59.115912 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:19:59.120991 systemd[1]: Started session-6.scope. Feb 8 23:19:59.121462 systemd-logind[1326]: New session 6 of user core. 
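The sshd entries above follow a fixed shape ("Accepted publickey for USER from IP port PORT ssh2: RSA SHA256:..."), which makes the login audit trail easy to extract; note that all four sessions present the same key fingerprint, so grouping by fingerprint collapses them to a single identity. A sketch against one of the exact lines above:

    import re

    pattern = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
        r"port (?P<port>\d+) ssh2: RSA (?P<fp>SHA256:\S+)")

    entry = ("sshd[1623]: Accepted publickey for core from 10.200.12.6 "
             "port 55794 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc")

    print(pattern.search(entry).groupdict())
    # {'user': 'core', 'ip': '10.200.12.6', 'port': '55794',
    #  'fp': 'SHA256:psGC...'}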
Feb 8 23:19:59.563905 sshd[1640]: pam_unix(sshd:session): session closed for user core Feb 8 23:19:59.567313 systemd[1]: sshd@3-10.200.8.4:22-10.200.12.6:59820.service: Deactivated successfully. Feb 8 23:19:59.568194 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:19:59.568824 systemd-logind[1326]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:19:59.569599 systemd-logind[1326]: Removed session 6. Feb 8 23:19:59.669761 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.12.6:59826.service. Feb 8 23:20:00.298633 sshd[1646]: Accepted publickey for core from 10.200.12.6 port 59826 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:00.300742 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:00.305899 systemd[1]: Started session-7.scope. Feb 8 23:20:00.306558 systemd-logind[1326]: New session 7 of user core. Feb 8 23:20:01.010430 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:20:01.010791 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:20:01.727338 systemd[1]: Starting docker.service... Feb 8 23:20:01.778994 env[1664]: time="2024-02-08T23:20:01.778916222Z" level=info msg="Starting up" Feb 8 23:20:01.780785 env[1664]: time="2024-02-08T23:20:01.780754033Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:20:01.780785 env[1664]: time="2024-02-08T23:20:01.780774033Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:20:01.780945 env[1664]: time="2024-02-08T23:20:01.780797133Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:20:01.780945 env[1664]: time="2024-02-08T23:20:01.780810233Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:20:01.782758 env[1664]: time="2024-02-08T23:20:01.782735944Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:20:01.782931 env[1664]: time="2024-02-08T23:20:01.782918245Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:20:01.783000 env[1664]: time="2024-02-08T23:20:01.782989846Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:20:01.783042 env[1664]: time="2024-02-08T23:20:01.783034646Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:20:01.790175 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3803857226-merged.mount: Deactivated successfully. Feb 8 23:20:01.887391 env[1664]: time="2024-02-08T23:20:01.887337139Z" level=info msg="Loading containers: start." Feb 8 23:20:01.988436 kernel: Initializing XFRM netlink socket Feb 8 23:20:02.011519 env[1664]: time="2024-02-08T23:20:02.011480545Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:20:02.112739 systemd-networkd[1493]: docker0: Link UP Feb 8 23:20:02.130676 env[1664]: time="2024-02-08T23:20:02.130634707Z" level=info msg="Loading containers: done." Feb 8 23:20:02.142029 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck909723867-merged.mount: Deactivated successfully. 
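dockerd's message above notes that the default docker0 bridge claims 172.17.0.0/16 and that --bip can override it. That default only causes trouble if it overlaps the host's own network; with this VM on 10.200.8.0/24 it does not, which a stdlib check confirms (a sketch using the addresses from this log):

    import ipaddress

    bridge = ipaddress.ip_network("172.17.0.0/16")    # docker0 default, per the log
    host_net = ipaddress.ip_network("10.200.8.0/24")  # eth0's subnet, per the log

    if bridge.overlaps(host_net):
        print("conflict: restart dockerd with --bip to pick another range")
    else:
        print("no overlap; the default bridge is safe here")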
Feb 8 23:20:02.160385 env[1664]: time="2024-02-08T23:20:02.160339972Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:20:02.160613 env[1664]: time="2024-02-08T23:20:02.160589073Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:20:02.160734 env[1664]: time="2024-02-08T23:20:02.160704774Z" level=info msg="Daemon has completed initialization" Feb 8 23:20:02.188225 systemd[1]: Started docker.service. Feb 8 23:20:02.196932 env[1664]: time="2024-02-08T23:20:02.196882775Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:20:02.218158 systemd[1]: Reloading. Feb 8 23:20:02.285967 /usr/lib/systemd/system-generators/torcx-generator[1794]: time="2024-02-08T23:20:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:20:02.286004 /usr/lib/systemd/system-generators/torcx-generator[1794]: time="2024-02-08T23:20:02Z" level=info msg="torcx already run" Feb 8 23:20:02.381284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:20:02.381304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:20:02.397340 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:20:02.485038 systemd[1]: Started kubelet.service. Feb 8 23:20:02.556880 kubelet[1856]: E0208 23:20:02.556720 1856 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:20:02.558698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:20:02.558810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:20:06.056513 env[1345]: time="2024-02-08T23:20:06.056456150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:20:06.754103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082983492.mount: Deactivated successfully. 
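The kubelet attempt above dies in flag validation ("the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"), and the same failure recurs twice more below before a reload brings up a working instance. The runtime on this node is containerd (1.6.16, per the kuberuntime_manager line further down), so the endpoint would normally be containerd's CRI socket. A pre-flight check a provisioning script might run; the socket path is the stock containerd default and is an assumption here, since this log never prints it:

    import os, stat

    CRI_ENDPOINT = "/run/containerd/containerd.sock"  # assumed default path

    def cri_socket_ready(path=CRI_ENDPOINT):
        try:
            return stat.S_ISSOCK(os.stat(path).st_mode)
        except FileNotFoundError:
            return False

    if cri_socket_ready():
        print(f"pass --container-runtime-endpoint=unix://{CRI_ENDPOINT} to kubelet")
    else:
        print("containerd socket missing; kubelet would still fail validation")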
Feb 8 23:20:08.764192 env[1345]: time="2024-02-08T23:20:08.764082293Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:08.770691 env[1345]: time="2024-02-08T23:20:08.770641424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:08.775981 env[1345]: time="2024-02-08T23:20:08.775936950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:08.779986 env[1345]: time="2024-02-08T23:20:08.779951570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:08.781491 env[1345]: time="2024-02-08T23:20:08.781455577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:20:08.796178 env[1345]: time="2024-02-08T23:20:08.796151448Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:20:10.748120 env[1345]: time="2024-02-08T23:20:10.748021648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:10.756189 env[1345]: time="2024-02-08T23:20:10.756141085Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:10.761969 env[1345]: time="2024-02-08T23:20:10.761932712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:10.770653 env[1345]: time="2024-02-08T23:20:10.770616552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:10.771376 env[1345]: time="2024-02-08T23:20:10.771340056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:20:10.781831 env[1345]: time="2024-02-08T23:20:10.781796004Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:20:11.986870 env[1345]: time="2024-02-08T23:20:11.986808096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:11.993434 env[1345]: time="2024-02-08T23:20:11.993372226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:11.997451 env[1345]: 
time="2024-02-08T23:20:11.997393544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:12.003257 env[1345]: time="2024-02-08T23:20:12.003221070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:12.003914 env[1345]: time="2024-02-08T23:20:12.003875873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 8 23:20:12.014149 env[1345]: time="2024-02-08T23:20:12.014110019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:20:12.576967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:20:12.577266 systemd[1]: Stopped kubelet.service. Feb 8 23:20:12.579300 systemd[1]: Started kubelet.service. Feb 8 23:20:12.670296 kubelet[1890]: E0208 23:20:12.670242 1890 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:20:12.677475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:20:12.677643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:20:13.168962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519589204.mount: Deactivated successfully. Feb 8 23:20:13.664991 env[1345]: time="2024-02-08T23:20:13.664935592Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:13.671606 env[1345]: time="2024-02-08T23:20:13.671558121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:13.676608 env[1345]: time="2024-02-08T23:20:13.676566942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:13.681119 env[1345]: time="2024-02-08T23:20:13.681085162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:13.681546 env[1345]: time="2024-02-08T23:20:13.681514264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:20:13.691856 env[1345]: time="2024-02-08T23:20:13.691817809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:20:14.230046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374878212.mount: Deactivated successfully. 
Feb 8 23:20:14.252331 env[1345]: time="2024-02-08T23:20:14.252279124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:14.259145 env[1345]: time="2024-02-08T23:20:14.259106153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:14.262956 env[1345]: time="2024-02-08T23:20:14.262921969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:14.268051 env[1345]: time="2024-02-08T23:20:14.268019791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:14.268480 env[1345]: time="2024-02-08T23:20:14.268450293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:20:14.278026 env[1345]: time="2024-02-08T23:20:14.277991433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 8 23:20:15.022005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2086161962.mount: Deactivated successfully. Feb 8 23:20:19.290308 env[1345]: time="2024-02-08T23:20:19.290238019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:19.299269 env[1345]: time="2024-02-08T23:20:19.299202854Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:19.304502 env[1345]: time="2024-02-08T23:20:19.304463874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:19.309295 env[1345]: time="2024-02-08T23:20:19.309241392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:19.309948 env[1345]: time="2024-02-08T23:20:19.309917095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 8 23:20:19.319919 env[1345]: time="2024-02-08T23:20:19.319893533Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 8 23:20:19.904499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550961553.mount: Deactivated successfully. 
Feb 8 23:20:20.480956 env[1345]: time="2024-02-08T23:20:20.480898867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:20.490652 env[1345]: time="2024-02-08T23:20:20.490613003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:20.495163 env[1345]: time="2024-02-08T23:20:20.495129720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:20.500430 env[1345]: time="2024-02-08T23:20:20.500378140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:20.500892 env[1345]: time="2024-02-08T23:20:20.500856942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 8 23:20:22.827024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:20:22.827309 systemd[1]: Stopped kubelet.service. Feb 8 23:20:22.829692 systemd[1]: Started kubelet.service. Feb 8 23:20:22.912654 kubelet[1965]: E0208 23:20:22.912583 1965 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:20:22.915363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:20:22.915553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:20:23.395991 systemd[1]: Stopped kubelet.service. Feb 8 23:20:23.409478 systemd[1]: Reloading. Feb 8 23:20:23.512044 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2024-02-08T23:20:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:20:23.512081 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2024-02-08T23:20:23Z" level=info msg="torcx already run" Feb 8 23:20:23.574461 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:20:23.574481 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:20:23.590543 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:20:23.685179 systemd[1]: Started kubelet.service. Feb 8 23:20:23.736063 kubelet[2057]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
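Each failed kubelet attempt bumps systemd's restart counter ("restart counter is at 1", then "at 2") before the unit is stopped and the reload below finally starts kubelet[2057] with a configuration that passes validation. Pulling those counters out of journal text is a quick crash-loop detector (a sketch over lines shaped like the ones in this log):

    import re

    journal = """\
    systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
    systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
    """

    counts = {}
    for unit, n in re.findall(
            r"(\S+\.service): Scheduled restart job, restart counter is at (\d+)",
            journal):
        counts[unit] = max(counts.get(unit, 0), int(n))

    print(counts)  # {'kubelet.service': 2}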
Feb 8 23:20:23.736449 kubelet[2057]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:20:23.736601 kubelet[2057]: I0208 23:20:23.736575 2057 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:20:23.738085 kubelet[2057]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:20:23.738177 kubelet[2057]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:20:24.279521 kubelet[2057]: I0208 23:20:24.279478 2057 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:20:24.279521 kubelet[2057]: I0208 23:20:24.279506 2057 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:20:24.279817 kubelet[2057]: I0208 23:20:24.279796 2057 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:20:24.282869 kubelet[2057]: E0208 23:20:24.282842 2057 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.283068 kubelet[2057]: I0208 23:20:24.283053 2057 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:20:24.285757 kubelet[2057]: I0208 23:20:24.285726 2057 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:20:24.285994 kubelet[2057]: I0208 23:20:24.285974 2057 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:20:24.286082 kubelet[2057]: I0208 23:20:24.286064 2057 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:20:24.286222 kubelet[2057]: I0208 23:20:24.286098 2057 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:20:24.286222 kubelet[2057]: I0208 23:20:24.286115 2057 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:20:24.286319 kubelet[2057]: I0208 23:20:24.286231 2057 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:20:24.292910 kubelet[2057]: I0208 23:20:24.292890 2057 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:20:24.292910 kubelet[2057]: I0208 23:20:24.292912 2057 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:20:24.293066 kubelet[2057]: I0208 23:20:24.292938 2057 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:20:24.293066 kubelet[2057]: I0208 23:20:24.292952 2057 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:20:24.293996 kubelet[2057]: W0208 23:20:24.293955 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.294140 kubelet[2057]: E0208 23:20:24.294122 2057 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.294304 kubelet[2057]: W0208 23:20:24.294273 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-5bade47376&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.294395 kubelet[2057]: E0208 23:20:24.294385 2057 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-5bade47376&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.294571 kubelet[2057]: I0208 23:20:24.294560 2057 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:20:24.294888 kubelet[2057]: W0208 23:20:24.294874 2057 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:20:24.295422 kubelet[2057]: I0208 23:20:24.295387 2057 server.go:1186] "Started kubelet" Feb 8 23:20:24.301513 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 8 23:20:24.301598 kubelet[2057]: I0208 23:20:24.298537 2057 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:20:24.301598 kubelet[2057]: I0208 23:20:24.299079 2057 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:20:24.301598 kubelet[2057]: E0208 23:20:24.300141 2057 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-5bade47376.17b206900166131d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-5bade47376", UID:"ci-3510.3.2-a-5bade47376", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-5bade47376"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 24, 295363357, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 24, 295363357, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.4:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:20:24.301598 kubelet[2057]: E0208 23:20:24.300955 2057 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:20:24.301598 kubelet[2057]: E0208 23:20:24.300973 2057 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:20:24.301920 kubelet[2057]: I0208 23:20:24.301906 2057 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:20:24.303357 kubelet[2057]: I0208 23:20:24.303226 2057 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:20:24.304889 kubelet[2057]: I0208 23:20:24.304866 2057 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:20:24.305218 kubelet[2057]: W0208 23:20:24.305178 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.305300 kubelet[2057]: E0208 23:20:24.305227 2057 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.305358 kubelet[2057]: E0208 23:20:24.305349 2057 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-5bade47376\" not found" Feb 8 23:20:24.305659 kubelet[2057]: E0208 23:20:24.305634 2057 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-5bade47376?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.347148 kubelet[2057]: I0208 23:20:24.347112 2057 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:20:24.347148 kubelet[2057]: I0208 23:20:24.347140 2057 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:20:24.347333 kubelet[2057]: I0208 23:20:24.347159 2057 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:20:24.353257 kubelet[2057]: I0208 23:20:24.353232 2057 policy_none.go:49] "None policy: Start" Feb 8 23:20:24.353906 kubelet[2057]: I0208 23:20:24.353869 2057 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:20:24.353906 kubelet[2057]: I0208 23:20:24.353905 2057 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:20:24.361828 systemd[1]: Created slice kubepods.slice. Feb 8 23:20:24.367062 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:20:24.373615 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:20:24.375629 kubelet[2057]: I0208 23:20:24.375609 2057 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:20:24.375818 kubelet[2057]: I0208 23:20:24.375794 2057 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:20:24.377569 kubelet[2057]: E0208 23:20:24.377549 2057 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-5bade47376\" not found" Feb 8 23:20:24.394452 kubelet[2057]: I0208 23:20:24.394431 2057 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:20:24.406703 kubelet[2057]: I0208 23:20:24.406686 2057 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.407334 kubelet[2057]: E0208 23:20:24.407319 2057 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.442837 kubelet[2057]: I0208 23:20:24.442803 2057 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:20:24.442837 kubelet[2057]: I0208 23:20:24.442845 2057 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:20:24.443060 kubelet[2057]: I0208 23:20:24.442872 2057 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:20:24.443060 kubelet[2057]: E0208 23:20:24.442928 2057 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:20:24.443810 kubelet[2057]: W0208 23:20:24.443775 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.444008 kubelet[2057]: E0208 23:20:24.443989 2057 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.506204 kubelet[2057]: E0208 23:20:24.506161 2057 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-5bade47376?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:24.543747 kubelet[2057]: I0208 23:20:24.543526 2057 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:24.547038 kubelet[2057]: I0208 23:20:24.547011 2057 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:24.548905 kubelet[2057]: I0208 23:20:24.548883 2057 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:24.552638 kubelet[2057]: I0208 23:20:24.552617 2057 status_manager.go:698] "Failed to get status for pod" podUID=49a68657c36d4deccb595324a614a57e pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" err="Get \"https://10.200.8.4:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-5bade47376\": dial tcp 10.200.8.4:6443: connect: connection refused" Feb 8 23:20:24.553002 kubelet[2057]: I0208 23:20:24.552984 2057 status_manager.go:698] "Failed to get status for pod" podUID=c4381dc1d8462b96e9981ffb96985706 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" err="Get \"https://10.200.8.4:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-5bade47376\": dial tcp 10.200.8.4:6443: connect: connection refused" Feb 8 23:20:24.555475 systemd[1]: Created slice kubepods-burstable-pod49a68657c36d4deccb595324a614a57e.slice. 
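Every reflector failure above is the same symptom: the kubelet's informers issue their initial LIST against https://10.200.8.4:6443 before the kube-apiserver static pod is up, so the TCP dial is refused. A minimal probe, assuming client-go and an illustrative kubeconfig path (the path is not taken from this log), reproduces that first LIST:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; any client credential for this
	// apiserver sees the same dial error while it is still down.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirrors the reflector's first page: GET /api/v1/services?limit=500
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list failed:", err) // "connect: connection refused", as above
		return
	}
	fmt.Println("services:", len(svcs.Items))
}
```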
Feb 8 23:20:24.557604 kubelet[2057]: I0208 23:20:24.557385 2057 status_manager.go:698] "Failed to get status for pod" podUID=2b3427f98b21123f2c4a6dd34716319d pod="kube-system/kube-scheduler-ci-3510.3.2-a-5bade47376" err="Get \"https://10.200.8.4:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-5bade47376\": dial tcp 10.200.8.4:6443: connect: connection refused" Feb 8 23:20:24.564820 systemd[1]: Created slice kubepods-burstable-podc4381dc1d8462b96e9981ffb96985706.slice. Feb 8 23:20:24.569399 systemd[1]: Created slice kubepods-burstable-pod2b3427f98b21123f2c4a6dd34716319d.slice. Feb 8 23:20:24.606212 kubelet[2057]: I0208 23:20:24.606175 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49a68657c36d4deccb595324a614a57e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-5bade47376\" (UID: \"49a68657c36d4deccb595324a614a57e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606311 kubelet[2057]: I0208 23:20:24.606227 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606311 kubelet[2057]: I0208 23:20:24.606265 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606311 kubelet[2057]: I0208 23:20:24.606309 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606561 kubelet[2057]: I0208 23:20:24.606346 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49a68657c36d4deccb595324a614a57e-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-5bade47376\" (UID: \"49a68657c36d4deccb595324a614a57e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606561 kubelet[2057]: I0208 23:20:24.606385 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49a68657c36d4deccb595324a614a57e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-5bade47376\" (UID: \"49a68657c36d4deccb595324a614a57e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606561 kubelet[2057]: I0208 23:20:24.606444 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606561 kubelet[2057]: I0208 23:20:24.606487 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.606561 kubelet[2057]: I0208 23:20:24.606529 2057 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b3427f98b21123f2c4a6dd34716319d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-5bade47376\" (UID: \"2b3427f98b21123f2c4a6dd34716319d\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.609595 kubelet[2057]: I0208 23:20:24.609564 2057 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.609904 kubelet[2057]: E0208 23:20:24.609885 2057 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:24.864316 env[1345]: time="2024-02-08T23:20:24.864261547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-5bade47376,Uid:49a68657c36d4deccb595324a614a57e,Namespace:kube-system,Attempt:0,}" Feb 8 23:20:24.868033 env[1345]: time="2024-02-08T23:20:24.867982460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-5bade47376,Uid:c4381dc1d8462b96e9981ffb96985706,Namespace:kube-system,Attempt:0,}" Feb 8 23:20:24.872725 env[1345]: time="2024-02-08T23:20:24.872682776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-5bade47376,Uid:2b3427f98b21123f2c4a6dd34716319d,Namespace:kube-system,Attempt:0,}" Feb 8 23:20:24.907438 kubelet[2057]: E0208 23:20:24.907375 2057 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-5bade47376?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.012214 kubelet[2057]: I0208 23:20:25.012181 2057 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:25.012581 kubelet[2057]: E0208 23:20:25.012554 2057 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:25.360274 kubelet[2057]: W0208 23:20:25.360219 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.360274 kubelet[2057]: E0208 23:20:25.360275 2057 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.379169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204810215.mount: Deactivated successfully. 
Feb 8 23:20:25.405493 env[1345]: time="2024-02-08T23:20:25.405447214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.412060 env[1345]: time="2024-02-08T23:20:25.412018937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.421882 env[1345]: time="2024-02-08T23:20:25.421794370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.428772 env[1345]: time="2024-02-08T23:20:25.428735894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.431953 env[1345]: time="2024-02-08T23:20:25.431920205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.437339 env[1345]: time="2024-02-08T23:20:25.437305724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.439671 env[1345]: time="2024-02-08T23:20:25.439638732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.444545 env[1345]: time="2024-02-08T23:20:25.444515749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.450600 env[1345]: time="2024-02-08T23:20:25.450567869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.460928 env[1345]: time="2024-02-08T23:20:25.460895405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.469715 env[1345]: time="2024-02-08T23:20:25.469683835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.485878 env[1345]: time="2024-02-08T23:20:25.485845790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:25.486844 kubelet[2057]: W0208 23:20:25.486770 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.486844 kubelet[2057]: E0208 23:20:25.486819 2057 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.531806 env[1345]: time="2024-02-08T23:20:25.529537441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:25.531806 env[1345]: time="2024-02-08T23:20:25.529614041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:25.531806 env[1345]: time="2024-02-08T23:20:25.529629541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:25.531806 env[1345]: time="2024-02-08T23:20:25.529920642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791 pid=2132 runtime=io.containerd.runc.v2 Feb 8 23:20:25.540025 env[1345]: time="2024-02-08T23:20:25.539960376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:25.540025 env[1345]: time="2024-02-08T23:20:25.539994476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:25.540232 env[1345]: time="2024-02-08T23:20:25.540009676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:25.540232 env[1345]: time="2024-02-08T23:20:25.540179877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958 pid=2146 runtime=io.containerd.runc.v2 Feb 8 23:20:25.557770 systemd[1]: Started cri-containerd-5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958.scope. Feb 8 23:20:25.573651 systemd[1]: Started cri-containerd-b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791.scope. Feb 8 23:20:25.601608 kubelet[2057]: W0208 23:20:25.601503 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.601608 kubelet[2057]: E0208 23:20:25.601582 2057 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.609516 env[1345]: time="2024-02-08T23:20:25.609429915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:25.609657 env[1345]: time="2024-02-08T23:20:25.609521115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:25.609657 env[1345]: time="2024-02-08T23:20:25.609551715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:25.609761 env[1345]: time="2024-02-08T23:20:25.609715016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e024f7ab42cf71739c5c7faa13110fee7c789d9fbcf31debad72b43d2acb0d1 pid=2200 runtime=io.containerd.runc.v2 Feb 8 23:20:25.628622 systemd[1]: Started cri-containerd-6e024f7ab42cf71739c5c7faa13110fee7c789d9fbcf31debad72b43d2acb0d1.scope. Feb 8 23:20:25.641233 env[1345]: time="2024-02-08T23:20:25.641190324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-5bade47376,Uid:2b3427f98b21123f2c4a6dd34716319d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958\"" Feb 8 23:20:25.645276 env[1345]: time="2024-02-08T23:20:25.645239138Z" level=info msg="CreateContainer within sandbox \"5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:20:25.665579 env[1345]: time="2024-02-08T23:20:25.665535308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-5bade47376,Uid:c4381dc1d8462b96e9981ffb96985706,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791\"" Feb 8 23:20:25.669438 env[1345]: time="2024-02-08T23:20:25.669056420Z" level=info msg="CreateContainer within sandbox \"b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:20:25.689363 env[1345]: time="2024-02-08T23:20:25.689331489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-5bade47376,Uid:49a68657c36d4deccb595324a614a57e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e024f7ab42cf71739c5c7faa13110fee7c789d9fbcf31debad72b43d2acb0d1\"" Feb 8 23:20:25.691720 env[1345]: time="2024-02-08T23:20:25.691686497Z" level=info msg="CreateContainer within sandbox \"6e024f7ab42cf71739c5c7faa13110fee7c789d9fbcf31debad72b43d2acb0d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:20:25.699448 kubelet[2057]: W0208 23:20:25.699344 2057 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-5bade47376&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.699448 kubelet[2057]: E0208 23:20:25.699404 2057 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-5bade47376&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.699894 env[1345]: time="2024-02-08T23:20:25.699865326Z" level=info msg="CreateContainer within sandbox \"5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6\"" Feb 8 23:20:25.700461 env[1345]: time="2024-02-08T23:20:25.700427227Z" level=info msg="StartContainer for \"f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6\"" Feb 8 23:20:25.708129 kubelet[2057]: E0208 23:20:25.708097 2057 controller.go:146] failed to ensure 
lease exists, will retry in 1.6s, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-5bade47376?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:20:25.718369 systemd[1]: Started cri-containerd-f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6.scope. Feb 8 23:20:25.755732 env[1345]: time="2024-02-08T23:20:25.755683517Z" level=info msg="CreateContainer within sandbox \"b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d\"" Feb 8 23:20:25.756292 env[1345]: time="2024-02-08T23:20:25.756265619Z" level=info msg="StartContainer for \"1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d\"" Feb 8 23:20:25.784143 systemd[1]: Started cri-containerd-1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d.scope. Feb 8 23:20:25.793657 env[1345]: time="2024-02-08T23:20:25.793615047Z" level=info msg="CreateContainer within sandbox \"6e024f7ab42cf71739c5c7faa13110fee7c789d9fbcf31debad72b43d2acb0d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc07b12cbf90fecbe160aa844b9af56eb2493ac088c3545d503f299c09420ae1\"" Feb 8 23:20:25.793962 env[1345]: time="2024-02-08T23:20:25.793927249Z" level=info msg="StartContainer for \"f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6\" returns successfully" Feb 8 23:20:25.794443 env[1345]: time="2024-02-08T23:20:25.794253250Z" level=info msg="StartContainer for \"bc07b12cbf90fecbe160aa844b9af56eb2493ac088c3545d503f299c09420ae1\"" Feb 8 23:20:25.819559 kubelet[2057]: I0208 23:20:25.818589 2057 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:25.819559 kubelet[2057]: E0208 23:20:25.819045 2057 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:25.822666 systemd[1]: Started cri-containerd-bc07b12cbf90fecbe160aa844b9af56eb2493ac088c3545d503f299c09420ae1.scope. 
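The RunPodSandbox / CreateContainer / StartContainer lines are the kubelet driving containerd over the CRI gRPC API on its unix socket. A hedged sketch, assuming containerd's default socket path, that talks to the same RuntimeService (querying only Version, the cheapest RPC):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Default containerd CRI socket; an assumption, not read from this log.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. "containerd 1.6.16", matching the log
}
```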
Feb 8 23:20:25.889882 env[1345]: time="2024-02-08T23:20:25.889774978Z" level=info msg="StartContainer for \"bc07b12cbf90fecbe160aa844b9af56eb2493ac088c3545d503f299c09420ae1\" returns successfully" Feb 8 23:20:25.917286 env[1345]: time="2024-02-08T23:20:25.917230972Z" level=info msg="StartContainer for \"1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d\" returns successfully" Feb 8 23:20:27.421330 kubelet[2057]: I0208 23:20:27.421300 2057 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:28.072292 kubelet[2057]: E0208 23:20:28.072256 2057 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-5bade47376\" not found" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:28.104735 kubelet[2057]: I0208 23:20:28.104696 2057 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:28.115538 kubelet[2057]: E0208 23:20:28.115508 2057 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-5bade47376\" not found" Feb 8 23:20:28.215909 kubelet[2057]: E0208 23:20:28.215819 2057 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-5bade47376\" not found" Feb 8 23:20:28.316427 kubelet[2057]: E0208 23:20:28.316364 2057 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-5bade47376\" not found" Feb 8 23:20:28.498127 kubelet[2057]: E0208 23:20:28.498090 2057 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-5bade47376\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:28.896464 kubelet[2057]: E0208 23:20:28.896427 2057 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-5bade47376\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-5bade47376" Feb 8 23:20:29.295421 kubelet[2057]: I0208 23:20:29.295301 2057 apiserver.go:52] "Watching apiserver" Feb 8 23:20:29.305642 kubelet[2057]: I0208 23:20:29.305606 2057 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:20:29.334924 kubelet[2057]: I0208 23:20:29.334837 2057 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:20:30.841536 systemd[1]: Reloading. Feb 8 23:20:30.938019 /usr/lib/systemd/system-generators/torcx-generator[2389]: time="2024-02-08T23:20:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:20:30.938060 /usr/lib/systemd/system-generators/torcx-generator[2389]: time="2024-02-08T23:20:30Z" level=info msg="torcx already run" Feb 8 23:20:31.030791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:20:31.030813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
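The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: the control-plane static pods request the built-in system-node-critical class, and the apiserver only creates it during its own bootstrap, moments after it starts serving. A quick check under stated assumptions (client-go, hypothetical admin kubeconfig path):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not created yet:", err) // the window the log captures
		return
	}
	fmt.Println("value:", pc.Value) // 2000001000 once the apiserver has bootstrapped it
}
```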
Feb 8 23:20:31.047196 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:20:32.028049 kubelet[2057]: I0208 23:20:31.156842 2057 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:20:31.157211 systemd[1]: Stopping kubelet.service... Feb 8 23:20:31.174742 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:20:32.029215 kubelet[2452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:20:32.029215 kubelet[2452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:20:32.029215 kubelet[2452]: I0208 23:20:31.258083 2452 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:20:32.029215 kubelet[2452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:20:32.029215 kubelet[2452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:20:32.029215 kubelet[2452]: I0208 23:20:31.262487 2452 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:20:32.029215 kubelet[2452]: I0208 23:20:31.262533 2452 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:20:32.029215 kubelet[2452]: I0208 23:20:31.262694 2452 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:20:31.174914 systemd[1]: Stopped kubelet.service. Feb 8 23:20:31.176877 systemd[1]: Started kubelet.service. Feb 8 23:20:32.036226 kubelet[2452]: I0208 23:20:32.036201 2452 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:20:32.039035 kubelet[2452]: I0208 23:20:32.039006 2452 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:20:32.041735 kubelet[2452]: I0208 23:20:32.041713 2452 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:20:32.042489 kubelet[2452]: I0208 23:20:32.042475 2452 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:20:32.042636 kubelet[2452]: I0208 23:20:32.042625 2452 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:20:32.042773 kubelet[2452]: I0208 23:20:32.042762 2452 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:20:32.042840 kubelet[2452]: I0208 23:20:32.042834 2452 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:20:32.042934 kubelet[2452]: I0208 23:20:32.042927 2452 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:20:32.045819 kubelet[2452]: I0208 23:20:32.045797 2452 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:20:32.045819 kubelet[2452]: I0208 23:20:32.045819 2452 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:20:32.045953 kubelet[2452]: I0208 23:20:32.045845 2452 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:20:32.045953 kubelet[2452]: I0208 23:20:32.045864 2452 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:20:32.053194 kubelet[2452]: I0208 23:20:32.051976 2452 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:20:32.053194 kubelet[2452]: I0208 23:20:32.052395 2452 server.go:1186] "Started kubelet" Feb 8 23:20:32.058235 kubelet[2452]: E0208 23:20:32.058221 2452 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:20:32.058372 kubelet[2452]: E0208 23:20:32.058362 2452 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:20:32.059431 kubelet[2452]: I0208 23:20:32.059378 2452 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:20:32.062209 kubelet[2452]: I0208 23:20:32.062173 2452 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:20:32.069243 kubelet[2452]: I0208 23:20:32.069225 2452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:20:32.072371 sudo[2465]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:20:32.073094 sudo[2465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:20:32.074529 kubelet[2452]: I0208 23:20:32.074509 2452 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:20:32.076215 kubelet[2452]: I0208 23:20:32.076176 2452 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:20:32.145781 kubelet[2452]: I0208 23:20:32.141208 2452 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:20:32.180602 kubelet[2452]: I0208 23:20:32.180572 2452 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.189376 kubelet[2452]: I0208 23:20:32.189350 2452 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:20:32.189376 kubelet[2452]: I0208 23:20:32.189372 2452 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:20:32.189565 kubelet[2452]: I0208 23:20:32.189390 2452 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:20:32.189565 kubelet[2452]: I0208 23:20:32.189561 2452 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:20:32.189655 kubelet[2452]: I0208 23:20:32.189578 2452 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:20:32.189655 kubelet[2452]: I0208 23:20:32.189588 2452 policy_none.go:49] "None policy: Start" Feb 8 23:20:32.191288 kubelet[2452]: I0208 23:20:32.191266 2452 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:20:32.191288 kubelet[2452]: I0208 23:20:32.191288 2452 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:20:32.191445 kubelet[2452]: I0208 23:20:32.191308 2452 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:20:32.191445 kubelet[2452]: E0208 23:20:32.191363 2452 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:20:32.192067 kubelet[2452]: I0208 23:20:32.192044 2452 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:20:32.192149 kubelet[2452]: I0208 23:20:32.192076 2452 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:20:32.192253 kubelet[2452]: I0208 23:20:32.192236 2452 state_mem.go:75] "Updated machine memory state" Feb 8 23:20:32.200066 kubelet[2452]: I0208 23:20:32.200045 2452 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.200157 kubelet[2452]: I0208 23:20:32.200118 2452 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.218743 kubelet[2452]: I0208 23:20:32.218721 2452 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:20:32.219383 kubelet[2452]: I0208 23:20:32.219333 2452 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:20:32.292272 kubelet[2452]: I0208 23:20:32.292159 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:32.292272 kubelet[2452]: I0208 23:20:32.292266 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:32.292510 kubelet[2452]: I0208 23:20:32.292306 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:32.378867 kubelet[2452]: I0208 23:20:32.378833 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49a68657c36d4deccb595324a614a57e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-5bade47376\" (UID: \"49a68657c36d4deccb595324a614a57e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379176 kubelet[2452]: I0208 23:20:32.379158 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379315 kubelet[2452]: I0208 23:20:32.379305 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379424 kubelet[2452]: I0208 23:20:32.379404 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379533 
kubelet[2452]: I0208 23:20:32.379524 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379634 kubelet[2452]: I0208 23:20:32.379626 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49a68657c36d4deccb595324a614a57e-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-5bade47376\" (UID: \"49a68657c36d4deccb595324a614a57e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379723 kubelet[2452]: I0208 23:20:32.379715 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4381dc1d8462b96e9981ffb96985706-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" (UID: \"c4381dc1d8462b96e9981ffb96985706\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379814 kubelet[2452]: I0208 23:20:32.379792 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b3427f98b21123f2c4a6dd34716319d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-5bade47376\" (UID: \"2b3427f98b21123f2c4a6dd34716319d\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.379895 kubelet[2452]: I0208 23:20:32.379830 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49a68657c36d4deccb595324a614a57e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-5bade47376\" (UID: \"49a68657c36d4deccb595324a614a57e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:32.749648 sudo[2465]: pam_unix(sudo:session): session closed for user root Feb 8 23:20:33.052366 kubelet[2452]: I0208 23:20:33.052243 2452 apiserver.go:52] "Watching apiserver" Feb 8 23:20:33.077109 kubelet[2452]: I0208 23:20:33.077067 2452 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:20:33.085344 kubelet[2452]: I0208 23:20:33.085316 2452 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:20:33.254651 kubelet[2452]: E0208 23:20:33.254615 2452 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-5bade47376\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" Feb 8 23:20:33.651977 kubelet[2452]: E0208 23:20:33.651940 2452 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-5bade47376\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-5bade47376" Feb 8 23:20:33.854570 kubelet[2452]: E0208 23:20:33.854522 2452 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-5bade47376\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" Feb 8 23:20:34.056510 kubelet[2452]: I0208 23:20:34.056374 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-5bade47376" 
podStartSLOduration=2.056303634 pod.CreationTimestamp="2024-02-08 23:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:20:34.056237434 +0000 UTC m=+2.873354828" watchObservedRunningTime="2024-02-08 23:20:34.056303634 +0000 UTC m=+2.873421028" Feb 8 23:20:34.187064 sudo[1649]: pam_unix(sudo:session): session closed for user root Feb 8 23:20:34.286284 sshd[1646]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:34.289836 systemd[1]: sshd@4-10.200.8.4:22-10.200.12.6:59826.service: Deactivated successfully. Feb 8 23:20:34.290769 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:20:34.290970 systemd[1]: session-7.scope: Consumed 4.109s CPU time. Feb 8 23:20:34.291549 systemd-logind[1326]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:20:34.292525 systemd-logind[1326]: Removed session 7. Feb 8 23:20:34.452998 kubelet[2452]: I0208 23:20:34.452951 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-5bade47376" podStartSLOduration=2.452908201 pod.CreationTimestamp="2024-02-08 23:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:20:34.452695201 +0000 UTC m=+3.269812595" watchObservedRunningTime="2024-02-08 23:20:34.452908201 +0000 UTC m=+3.270025595" Feb 8 23:20:34.853401 kubelet[2452]: I0208 23:20:34.853366 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-5bade47376" podStartSLOduration=2.85333068 pod.CreationTimestamp="2024-02-08 23:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:20:34.853060279 +0000 UTC m=+3.670177573" watchObservedRunningTime="2024-02-08 23:20:34.85333068 +0000 UTC m=+3.670447974" Feb 8 23:20:38.222280 update_engine[1327]: I0208 23:20:38.222224 1327 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 8 23:20:38.222280 update_engine[1327]: I0208 23:20:38.222270 1327 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 8 23:20:38.222978 update_engine[1327]: I0208 23:20:38.222450 1327 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 8 23:20:38.223095 update_engine[1327]: I0208 23:20:38.223067 1327 omaha_request_params.cc:62] Current group set to lts Feb 8 23:20:38.223283 update_engine[1327]: I0208 23:20:38.223261 1327 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 8 23:20:38.223283 update_engine[1327]: I0208 23:20:38.223275 1327 update_attempter.cc:643] Scheduling an action processor start. 
Feb 8 23:20:38.223431 update_engine[1327]: I0208 23:20:38.223299 1327 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 8 23:20:38.223431 update_engine[1327]: I0208 23:20:38.223337 1327 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 8 23:20:38.223539 update_engine[1327]: I0208 23:20:38.223438 1327 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 8 23:20:38.223539 update_engine[1327]: I0208 23:20:38.223446 1327 omaha_request_action.cc:271] Request: [Omaha request XML body elided] Feb 8 23:20:38.223539 update_engine[1327]: I0208 23:20:38.223453 1327 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:20:38.225205 locksmithd[1418]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 8 23:20:38.225483 update_engine[1327]: I0208 23:20:38.224983 1327 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:20:38.225483 update_engine[1327]: I0208 23:20:38.225154 1327 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 8 23:20:38.248867 update_engine[1327]: E0208 23:20:38.248828 1327 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:20:38.249009 update_engine[1327]: I0208 23:20:38.248960 1327 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 8 23:20:45.284017 kubelet[2452]: I0208 23:20:45.283979 2452 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:20:45.284712 env[1345]: time="2024-02-08T23:20:45.284672039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:20:45.285115 kubelet[2452]: I0208 23:20:45.284899 2452 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:20:45.950515 kubelet[2452]: I0208 23:20:45.950469 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:45.957510 systemd[1]: Created slice kubepods-besteffort-pod3150b825_8c91_442c_96d0_b86b99a392a6.slice.
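The update_engine failure above is deliberate: the Omaha endpoint on this image is the literal string "disabled" ("Posting an Omaha request to disabled"), so every update check dies at DNS resolution and is retried harmlessly. A two-line reproduction of the same resolver error (the URL path and request body are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// "disabled" is not a resolvable hostname, so the POST fails at DNS,
	// matching libcurl's "Could not resolve host: disabled" in the log.
	_, err := http.Post("https://disabled/v1/update/", "text/xml", strings.NewReader("<request/>"))
	fmt.Println(err) // ... lookup disabled: no such host
}
```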
Feb 8 23:20:45.964663 kubelet[2452]: I0208 23:20:45.964631 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:45.969589 kubelet[2452]: I0208 23:20:45.969570 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3150b825-8c91-442c-96d0-b86b99a392a6-kube-proxy\") pod \"kube-proxy-ppgfx\" (UID: \"3150b825-8c91-442c-96d0-b86b99a392a6\") " pod="kube-system/kube-proxy-ppgfx" Feb 8 23:20:45.970167 kubelet[2452]: I0208 23:20:45.970151 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3150b825-8c91-442c-96d0-b86b99a392a6-lib-modules\") pod \"kube-proxy-ppgfx\" (UID: \"3150b825-8c91-442c-96d0-b86b99a392a6\") " pod="kube-system/kube-proxy-ppgfx" Feb 8 23:20:45.972237 kubelet[2452]: I0208 23:20:45.971588 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h5kh\" (UniqueName: \"kubernetes.io/projected/3150b825-8c91-442c-96d0-b86b99a392a6-kube-api-access-6h5kh\") pod \"kube-proxy-ppgfx\" (UID: \"3150b825-8c91-442c-96d0-b86b99a392a6\") " pod="kube-system/kube-proxy-ppgfx" Feb 8 23:20:45.974092 kubelet[2452]: I0208 23:20:45.973484 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3150b825-8c91-442c-96d0-b86b99a392a6-xtables-lock\") pod \"kube-proxy-ppgfx\" (UID: \"3150b825-8c91-442c-96d0-b86b99a392a6\") " pod="kube-system/kube-proxy-ppgfx" Feb 8 23:20:45.977579 systemd[1]: Created slice kubepods-burstable-pod6f394164_0f20_4f64_a444_bbbf667c2cd9.slice. Feb 8 23:20:46.073691 kubelet[2452]: I0208 23:20:46.073654 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-kernel\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.073691 kubelet[2452]: I0208 23:20:46.073699 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-hubble-tls\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.073970 kubelet[2452]: I0208 23:20:46.073776 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cni-path\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074475 kubelet[2452]: I0208 23:20:46.074447 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-lib-modules\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074626 kubelet[2452]: I0208 23:20:46.074501 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-config-path\") pod \"cilium-85h9n\" (UID: 
\"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074626 kubelet[2452]: I0208 23:20:46.074531 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-net\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074626 kubelet[2452]: I0208 23:20:46.074572 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-bpf-maps\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074626 kubelet[2452]: I0208 23:20:46.074598 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-etc-cni-netd\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074626 kubelet[2452]: I0208 23:20:46.074627 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-hostproc\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074842 kubelet[2452]: I0208 23:20:46.074653 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f394164-0f20-4f64-a444-bbbf667c2cd9-clustermesh-secrets\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074842 kubelet[2452]: I0208 23:20:46.074696 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrgl\" (UniqueName: \"kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-kube-api-access-lhrgl\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074842 kubelet[2452]: I0208 23:20:46.074725 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-cgroup\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074842 kubelet[2452]: I0208 23:20:46.074752 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-xtables-lock\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.074842 kubelet[2452]: I0208 23:20:46.074821 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-run\") pod \"cilium-85h9n\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " pod="kube-system/cilium-85h9n" Feb 8 23:20:46.276139 env[1345]: time="2024-02-08T23:20:46.274947512Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-ppgfx,Uid:3150b825-8c91-442c-96d0-b86b99a392a6,Namespace:kube-system,Attempt:0,}" Feb 8 23:20:46.281817 env[1345]: time="2024-02-08T23:20:46.281779729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-85h9n,Uid:6f394164-0f20-4f64-a444-bbbf667c2cd9,Namespace:kube-system,Attempt:0,}" Feb 8 23:20:46.314962 env[1345]: time="2024-02-08T23:20:46.314877211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:46.314962 env[1345]: time="2024-02-08T23:20:46.314934911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:46.315569 env[1345]: time="2024-02-08T23:20:46.314949711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:46.316055 env[1345]: time="2024-02-08T23:20:46.315746913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cc340e275d8ceb9dc9c10c1ec762341d3fc14fffe1ceac529d4449e98da9c74 pid=2563 runtime=io.containerd.runc.v2 Feb 8 23:20:46.330524 systemd[1]: Started cri-containerd-1cc340e275d8ceb9dc9c10c1ec762341d3fc14fffe1ceac529d4449e98da9c74.scope. Feb 8 23:20:46.346191 kubelet[2452]: I0208 23:20:46.345538 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:20:46.353721 systemd[1]: Created slice kubepods-besteffort-pod3be482ba_1877_4428_b84c_af63f313ffea.slice. Feb 8 23:20:46.356385 env[1345]: time="2024-02-08T23:20:46.356322413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:46.356588 env[1345]: time="2024-02-08T23:20:46.356558214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:46.356731 env[1345]: time="2024-02-08T23:20:46.356703514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:46.357022 env[1345]: time="2024-02-08T23:20:46.356986815Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029 pid=2595 runtime=io.containerd.runc.v2 Feb 8 23:20:46.376881 kubelet[2452]: I0208 23:20:46.376843 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3be482ba-1877-4428-b84c-af63f313ffea-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-42m6n\" (UID: \"3be482ba-1877-4428-b84c-af63f313ffea\") " pod="kube-system/cilium-operator-f59cbd8c6-42m6n" Feb 8 23:20:46.377038 kubelet[2452]: I0208 23:20:46.376898 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6m42\" (UniqueName: \"kubernetes.io/projected/3be482ba-1877-4428-b84c-af63f313ffea-kube-api-access-v6m42\") pod \"cilium-operator-f59cbd8c6-42m6n\" (UID: \"3be482ba-1877-4428-b84c-af63f313ffea\") " pod="kube-system/cilium-operator-f59cbd8c6-42m6n" Feb 8 23:20:46.393193 systemd[1]: Started cri-containerd-4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029.scope. 
Feb 8 23:20:46.433989 env[1345]: time="2024-02-08T23:20:46.433943505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-85h9n,Uid:6f394164-0f20-4f64-a444-bbbf667c2cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\"" Feb 8 23:20:46.436780 env[1345]: time="2024-02-08T23:20:46.436735312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ppgfx,Uid:3150b825-8c91-442c-96d0-b86b99a392a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cc340e275d8ceb9dc9c10c1ec762341d3fc14fffe1ceac529d4449e98da9c74\"" Feb 8 23:20:46.437347 env[1345]: time="2024-02-08T23:20:46.437319414Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:20:46.441704 env[1345]: time="2024-02-08T23:20:46.441653325Z" level=info msg="CreateContainer within sandbox \"1cc340e275d8ceb9dc9c10c1ec762341d3fc14fffe1ceac529d4449e98da9c74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:20:46.490785 env[1345]: time="2024-02-08T23:20:46.490740846Z" level=info msg="CreateContainer within sandbox \"1cc340e275d8ceb9dc9c10c1ec762341d3fc14fffe1ceac529d4449e98da9c74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66b6f5f7330b39bc14cf571baa39aadda3999f7f0ebf61ef04e4cb14c703f662\"" Feb 8 23:20:46.492851 env[1345]: time="2024-02-08T23:20:46.491514848Z" level=info msg="StartContainer for \"66b6f5f7330b39bc14cf571baa39aadda3999f7f0ebf61ef04e4cb14c703f662\"" Feb 8 23:20:46.508289 systemd[1]: Started cri-containerd-66b6f5f7330b39bc14cf571baa39aadda3999f7f0ebf61ef04e4cb14c703f662.scope. Feb 8 23:20:46.546178 env[1345]: time="2024-02-08T23:20:46.546067683Z" level=info msg="StartContainer for \"66b6f5f7330b39bc14cf571baa39aadda3999f7f0ebf61ef04e4cb14c703f662\" returns successfully" Feb 8 23:20:46.959948 env[1345]: time="2024-02-08T23:20:46.959896807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-42m6n,Uid:3be482ba-1877-4428-b84c-af63f313ffea,Namespace:kube-system,Attempt:0,}" Feb 8 23:20:47.002701 env[1345]: time="2024-02-08T23:20:47.002631112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:47.002701 env[1345]: time="2024-02-08T23:20:47.002674512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:47.002933 env[1345]: time="2024-02-08T23:20:47.002897613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:47.003631 env[1345]: time="2024-02-08T23:20:47.003192714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3 pid=2787 runtime=io.containerd.runc.v2 Feb 8 23:20:47.016625 systemd[1]: Started cri-containerd-b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3.scope. 
Feb 8 23:20:47.059312 env[1345]: time="2024-02-08T23:20:47.059260151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-42m6n,Uid:3be482ba-1877-4428-b84c-af63f313ffea,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\"" Feb 8 23:20:47.253598 kubelet[2452]: I0208 23:20:47.253023 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ppgfx" podStartSLOduration=2.2529626240000002 pod.CreationTimestamp="2024-02-08 23:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:20:47.252282822 +0000 UTC m=+16.069400116" watchObservedRunningTime="2024-02-08 23:20:47.252962624 +0000 UTC m=+16.070080018" Feb 8 23:20:48.218537 update_engine[1327]: I0208 23:20:48.217813 1327 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:20:48.218537 update_engine[1327]: I0208 23:20:48.218147 1327 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:20:48.218537 update_engine[1327]: I0208 23:20:48.218453 1327 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 8 23:20:48.233977 update_engine[1327]: E0208 23:20:48.233776 1327 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:20:48.233977 update_engine[1327]: I0208 23:20:48.233942 1327 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 8 23:20:52.200008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3926373406.mount: Deactivated successfully. Feb 8 23:20:55.024456 env[1345]: time="2024-02-08T23:20:55.024387390Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:55.032387 env[1345]: time="2024-02-08T23:20:55.032348508Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:55.038056 env[1345]: time="2024-02-08T23:20:55.038023020Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:55.038523 env[1345]: time="2024-02-08T23:20:55.038490221Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:20:55.042128 env[1345]: time="2024-02-08T23:20:55.042084529Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:20:55.043392 env[1345]: time="2024-02-08T23:20:55.043150132Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:20:55.077005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182904105.mount: Deactivated successfully. 
Feb 8 23:20:55.084557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976644493.mount: Deactivated successfully. Feb 8 23:20:55.097166 env[1345]: time="2024-02-08T23:20:55.097115452Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\"" Feb 8 23:20:55.099644 env[1345]: time="2024-02-08T23:20:55.097956153Z" level=info msg="StartContainer for \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\"" Feb 8 23:20:55.120113 systemd[1]: Started cri-containerd-8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400.scope. Feb 8 23:20:55.159632 env[1345]: time="2024-02-08T23:20:55.159590590Z" level=info msg="StartContainer for \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\" returns successfully" Feb 8 23:20:55.160307 systemd[1]: cri-containerd-8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400.scope: Deactivated successfully. Feb 8 23:20:56.073927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400-rootfs.mount: Deactivated successfully. Feb 8 23:20:58.221380 update_engine[1327]: I0208 23:20:58.221304 1327 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:20:58.221976 update_engine[1327]: I0208 23:20:58.221655 1327 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:20:58.221976 update_engine[1327]: I0208 23:20:58.221890 1327 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 8 23:20:58.244125 update_engine[1327]: E0208 23:20:58.244087 1327 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:20:58.244265 update_engine[1327]: I0208 23:20:58.244217 1327 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 8 23:20:58.879086 env[1345]: time="2024-02-08T23:20:58.879009623Z" level=info msg="shim disconnected" id=8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400 Feb 8 23:20:58.879086 env[1345]: time="2024-02-08T23:20:58.879074323Z" level=warning msg="cleaning up after shim disconnected" id=8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400 namespace=k8s.io Feb 8 23:20:58.879086 env[1345]: time="2024-02-08T23:20:58.879089623Z" level=info msg="cleaning up dead shim" Feb 8 23:20:58.888344 env[1345]: time="2024-02-08T23:20:58.888301943Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:20:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2869 runtime=io.containerd.runc.v2\n" Feb 8 23:20:59.272531 env[1345]: time="2024-02-08T23:20:59.272079763Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:20:59.308809 env[1345]: time="2024-02-08T23:20:59.308757442Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\"" Feb 8 23:20:59.309334 env[1345]: time="2024-02-08T23:20:59.309304643Z" level=info msg="StartContainer for \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\"" Feb 8 23:20:59.335726 systemd[1]: 
run-containerd-runc-k8s.io-d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953-runc.dDQQ5R.mount: Deactivated successfully. Feb 8 23:20:59.337948 systemd[1]: Started cri-containerd-d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953.scope. Feb 8 23:20:59.378514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:20:59.378831 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:20:59.379222 env[1345]: time="2024-02-08T23:20:59.379174392Z" level=info msg="StartContainer for \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\" returns successfully" Feb 8 23:20:59.379524 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:20:59.383255 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:20:59.389263 systemd[1]: cri-containerd-d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953.scope: Deactivated successfully. Feb 8 23:20:59.404381 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:20:59.448853 env[1345]: time="2024-02-08T23:20:59.448797040Z" level=info msg="shim disconnected" id=d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953 Feb 8 23:20:59.449224 env[1345]: time="2024-02-08T23:20:59.449195141Z" level=warning msg="cleaning up after shim disconnected" id=d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953 namespace=k8s.io Feb 8 23:20:59.449333 env[1345]: time="2024-02-08T23:20:59.449315641Z" level=info msg="cleaning up dead shim" Feb 8 23:20:59.464973 env[1345]: time="2024-02-08T23:20:59.464925974Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:20:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2933 runtime=io.containerd.runc.v2\n" Feb 8 23:21:00.281433 env[1345]: time="2024-02-08T23:21:00.281370209Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:21:00.295124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953-rootfs.mount: Deactivated successfully. Feb 8 23:21:00.322074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896965709.mount: Deactivated successfully. Feb 8 23:21:00.339528 env[1345]: time="2024-02-08T23:21:00.339487531Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\"" Feb 8 23:21:00.342091 env[1345]: time="2024-02-08T23:21:00.340023832Z" level=info msg="StartContainer for \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\"" Feb 8 23:21:00.362221 systemd[1]: Started cri-containerd-8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903.scope. Feb 8 23:21:00.401282 systemd[1]: cri-containerd-8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903.scope: Deactivated successfully. 
Feb 8 23:21:00.405974 env[1345]: time="2024-02-08T23:21:00.405934271Z" level=info msg="StartContainer for \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\" returns successfully" Feb 8 23:21:00.880986 env[1345]: time="2024-02-08T23:21:00.880934774Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:00.885495 env[1345]: time="2024-02-08T23:21:00.885449383Z" level=info msg="shim disconnected" id=8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903 Feb 8 23:21:00.885495 env[1345]: time="2024-02-08T23:21:00.885493283Z" level=warning msg="cleaning up after shim disconnected" id=8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903 namespace=k8s.io Feb 8 23:21:00.885688 env[1345]: time="2024-02-08T23:21:00.885503983Z" level=info msg="cleaning up dead shim" Feb 8 23:21:00.890359 env[1345]: time="2024-02-08T23:21:00.890321893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:00.894203 env[1345]: time="2024-02-08T23:21:00.894165002Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:21:00.894476 env[1345]: time="2024-02-08T23:21:00.894449102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:00.897512 env[1345]: time="2024-02-08T23:21:00.897483309Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2990 runtime=io.containerd.runc.v2\n" Feb 8 23:21:00.898469 env[1345]: time="2024-02-08T23:21:00.898365610Z" level=info msg="CreateContainer within sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:21:00.943654 env[1345]: time="2024-02-08T23:21:00.943610306Z" level=info msg="CreateContainer within sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\"" Feb 8 23:21:00.946041 env[1345]: time="2024-02-08T23:21:00.944022207Z" level=info msg="StartContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\"" Feb 8 23:21:00.962314 systemd[1]: Started cri-containerd-bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2.scope. 
Feb 8 23:21:00.999660 env[1345]: time="2024-02-08T23:21:00.999621224Z" level=info msg="StartContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" returns successfully" Feb 8 23:21:01.278870 env[1345]: time="2024-02-08T23:21:01.278749407Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:21:01.297503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903-rootfs.mount: Deactivated successfully. Feb 8 23:21:01.319105 env[1345]: time="2024-02-08T23:21:01.319062992Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\"" Feb 8 23:21:01.319956 env[1345]: time="2024-02-08T23:21:01.319922994Z" level=info msg="StartContainer for \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\"" Feb 8 23:21:01.362092 systemd[1]: Started cri-containerd-6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea.scope. Feb 8 23:21:01.428804 systemd[1]: cri-containerd-6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea.scope: Deactivated successfully. Feb 8 23:21:01.431423 env[1345]: time="2024-02-08T23:21:01.431368326Z" level=info msg="StartContainer for \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\" returns successfully" Feb 8 23:21:01.454664 kubelet[2452]: I0208 23:21:01.454114 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-42m6n" podStartSLOduration=-9.223372021400715e+09 pod.CreationTimestamp="2024-02-08 23:20:46 +0000 UTC" firstStartedPulling="2024-02-08 23:20:47.060749554 +0000 UTC m=+15.877866848" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:21:01.390950142 +0000 UTC m=+30.208067436" watchObservedRunningTime="2024-02-08 23:21:01.454061674 +0000 UTC m=+30.271179068" Feb 8 23:21:01.481468 env[1345]: time="2024-02-08T23:21:01.481419231Z" level=info msg="shim disconnected" id=6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea Feb 8 23:21:01.481766 env[1345]: time="2024-02-08T23:21:01.481744532Z" level=warning msg="cleaning up after shim disconnected" id=6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea namespace=k8s.io Feb 8 23:21:01.481866 env[1345]: time="2024-02-08T23:21:01.481852932Z" level=info msg="cleaning up dead shim" Feb 8 23:21:01.495755 env[1345]: time="2024-02-08T23:21:01.495716961Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3083 runtime=io.containerd.runc.v2\n" Feb 8 23:21:02.295122 systemd[1]: run-containerd-runc-k8s.io-6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea-runc.mdHJ2O.mount: Deactivated successfully. Feb 8 23:21:02.295254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea-rootfs.mount: Deactivated successfully. 
Feb 8 23:21:02.302893 env[1345]: time="2024-02-08T23:21:02.302829342Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:21:02.352610 env[1345]: time="2024-02-08T23:21:02.352503445Z" level=info msg="CreateContainer within sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\"" Feb 8 23:21:02.355188 env[1345]: time="2024-02-08T23:21:02.353528347Z" level=info msg="StartContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\"" Feb 8 23:21:02.378645 systemd[1]: Started cri-containerd-7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412.scope. Feb 8 23:21:02.415375 env[1345]: time="2024-02-08T23:21:02.415317975Z" level=info msg="StartContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" returns successfully" Feb 8 23:21:02.611908 kubelet[2452]: I0208 23:21:02.611877 2452 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:21:02.642128 kubelet[2452]: I0208 23:21:02.642084 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:21:02.649358 systemd[1]: Created slice kubepods-burstable-podcbc0d699_ea3f_410e_b3a2_5dfee05480e4.slice. Feb 8 23:21:02.655072 kubelet[2452]: I0208 23:21:02.655046 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:21:02.660361 systemd[1]: Created slice kubepods-burstable-pod130b720d_2ffa_4d10_8b9b_82871dbd2adb.slice. Feb 8 23:21:02.695848 kubelet[2452]: I0208 23:21:02.695803 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbc0d699-ea3f-410e-b3a2-5dfee05480e4-config-volume\") pod \"coredns-787d4945fb-k9csc\" (UID: \"cbc0d699-ea3f-410e-b3a2-5dfee05480e4\") " pod="kube-system/coredns-787d4945fb-k9csc" Feb 8 23:21:02.696166 kubelet[2452]: I0208 23:21:02.696154 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/130b720d-2ffa-4d10-8b9b-82871dbd2adb-config-volume\") pod \"coredns-787d4945fb-vvd62\" (UID: \"130b720d-2ffa-4d10-8b9b-82871dbd2adb\") " pod="kube-system/coredns-787d4945fb-vvd62" Feb 8 23:21:02.696288 kubelet[2452]: I0208 23:21:02.696279 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbhn\" (UniqueName: \"kubernetes.io/projected/cbc0d699-ea3f-410e-b3a2-5dfee05480e4-kube-api-access-mdbhn\") pod \"coredns-787d4945fb-k9csc\" (UID: \"cbc0d699-ea3f-410e-b3a2-5dfee05480e4\") " pod="kube-system/coredns-787d4945fb-k9csc" Feb 8 23:21:02.696396 kubelet[2452]: I0208 23:21:02.696388 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwd7\" (UniqueName: \"kubernetes.io/projected/130b720d-2ffa-4d10-8b9b-82871dbd2adb-kube-api-access-bqwd7\") pod \"coredns-787d4945fb-vvd62\" (UID: \"130b720d-2ffa-4d10-8b9b-82871dbd2adb\") " pod="kube-system/coredns-787d4945fb-vvd62" Feb 8 23:21:02.954373 env[1345]: time="2024-02-08T23:21:02.954246890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-k9csc,Uid:cbc0d699-ea3f-410e-b3a2-5dfee05480e4,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:02.964971 
env[1345]: time="2024-02-08T23:21:02.964922812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-vvd62,Uid:130b720d-2ffa-4d10-8b9b-82871dbd2adb,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:04.882787 systemd-networkd[1493]: cilium_host: Link UP Feb 8 23:21:04.884884 systemd-networkd[1493]: cilium_net: Link UP Feb 8 23:21:04.892186 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:21:04.892272 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:21:04.892463 systemd-networkd[1493]: cilium_net: Gained carrier Feb 8 23:21:04.893701 systemd-networkd[1493]: cilium_host: Gained carrier Feb 8 23:21:05.047512 systemd-networkd[1493]: cilium_net: Gained IPv6LL Feb 8 23:21:05.062580 systemd-networkd[1493]: cilium_vxlan: Link UP Feb 8 23:21:05.062592 systemd-networkd[1493]: cilium_vxlan: Gained carrier Feb 8 23:21:05.293684 kernel: NET: Registered PF_ALG protocol family Feb 8 23:21:05.367613 systemd-networkd[1493]: cilium_host: Gained IPv6LL Feb 8 23:21:05.964852 systemd-networkd[1493]: lxc_health: Link UP Feb 8 23:21:05.997188 systemd-networkd[1493]: lxc_health: Gained carrier Feb 8 23:21:05.997573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:21:06.304286 kubelet[2452]: I0208 23:21:06.304167 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-85h9n" podStartSLOduration=-9.223372015550648e+09 pod.CreationTimestamp="2024-02-08 23:20:45 +0000 UTC" firstStartedPulling="2024-02-08 23:20:46.436818713 +0000 UTC m=+15.253936107" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:21:03.312945727 +0000 UTC m=+32.130063121" watchObservedRunningTime="2024-02-08 23:21:06.304127191 +0000 UTC m=+35.121244485" Feb 8 23:21:06.559856 systemd-networkd[1493]: lxc3bdbf0d15eab: Link UP Feb 8 23:21:06.573013 kernel: eth0: renamed from tmp7ed93 Feb 8 23:21:06.584865 systemd-networkd[1493]: lxc504972856af3: Link UP Feb 8 23:21:06.588493 kernel: eth0: renamed from tmp5deb6 Feb 8 23:21:06.614400 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3bdbf0d15eab: link becomes ready Feb 8 23:21:06.614580 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc504972856af3: link becomes ready Feb 8 23:21:06.614783 systemd-networkd[1493]: lxc3bdbf0d15eab: Gained carrier Feb 8 23:21:06.618914 systemd-networkd[1493]: lxc504972856af3: Gained carrier Feb 8 23:21:07.119695 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL Feb 8 23:21:07.503569 systemd-networkd[1493]: lxc_health: Gained IPv6LL Feb 8 23:21:07.951551 systemd-networkd[1493]: lxc504972856af3: Gained IPv6LL Feb 8 23:21:08.143626 systemd-networkd[1493]: lxc3bdbf0d15eab: Gained IPv6LL Feb 8 23:21:08.224387 update_engine[1327]: I0208 23:21:08.223312 1327 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:21:08.224387 update_engine[1327]: I0208 23:21:08.223634 1327 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:21:08.224387 update_engine[1327]: I0208 23:21:08.223901 1327 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 8 23:21:08.246446 update_engine[1327]: E0208 23:21:08.245894 1327 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246021 1327 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246031 1327 omaha_request_action.cc:621] Omaha request response: Feb 8 23:21:08.246446 update_engine[1327]: E0208 23:21:08.246140 1327 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246157 1327 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246161 1327 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246166 1327 update_attempter.cc:306] Processing Done. Feb 8 23:21:08.246446 update_engine[1327]: E0208 23:21:08.246182 1327 update_attempter.cc:619] Update failed. Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246187 1327 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246193 1327 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246199 1327 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246294 1327 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246321 1327 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 8 23:21:08.246446 update_engine[1327]: I0208 23:21:08.246326 1327 omaha_request_action.cc:271] Request: Feb 8 23:21:08.247209 update_engine[1327]: I0208 23:21:08.246330 1327 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:21:08.247381 locksmithd[1418]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 8 23:21:08.248086 update_engine[1327]: I0208 23:21:08.247856 1327 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:21:08.248086 update_engine[1327]: I0208 23:21:08.248046 1327 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 8 23:21:08.252722 update_engine[1327]: E0208 23:21:08.252488 1327 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252586 1327 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252599 1327 omaha_request_action.cc:621] Omaha request response: Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252605 1327 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252609 1327 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252614 1327 update_attempter.cc:306] Processing Done. Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252619 1327 update_attempter.cc:310] Error event sent. Feb 8 23:21:08.252722 update_engine[1327]: I0208 23:21:08.252629 1327 update_check_scheduler.cc:74] Next update check in 47m48s Feb 8 23:21:08.253059 locksmithd[1418]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 8 23:21:10.190615 env[1345]: time="2024-02-08T23:21:10.190527143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:10.191097 env[1345]: time="2024-02-08T23:21:10.190681043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:10.191097 env[1345]: time="2024-02-08T23:21:10.190715443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:10.191097 env[1345]: time="2024-02-08T23:21:10.190925544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ed93cf8e726fb9f560364c0305bf2efcdd67acc6985a73790d7ae74fbbd4e58 pid=3634 runtime=io.containerd.runc.v2 Feb 8 23:21:10.203582 env[1345]: time="2024-02-08T23:21:10.202572266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:10.203582 env[1345]: time="2024-02-08T23:21:10.202660766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:10.203582 env[1345]: time="2024-02-08T23:21:10.202690566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:10.203582 env[1345]: time="2024-02-08T23:21:10.202840667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5deb636a56c8c48036fb8ea91344bb2b24d8d236429ce2beebd38caa41883ed2 pid=3650 runtime=io.containerd.runc.v2 Feb 8 23:21:10.243857 systemd[1]: run-containerd-runc-k8s.io-5deb636a56c8c48036fb8ea91344bb2b24d8d236429ce2beebd38caa41883ed2-runc.7lwQMe.mount: Deactivated successfully. Feb 8 23:21:10.252469 systemd[1]: Started cri-containerd-5deb636a56c8c48036fb8ea91344bb2b24d8d236429ce2beebd38caa41883ed2.scope. Feb 8 23:21:10.268994 systemd[1]: Started cri-containerd-7ed93cf8e726fb9f560364c0305bf2efcdd67acc6985a73790d7ae74fbbd4e58.scope. 
Feb 8 23:21:10.333521 env[1345]: time="2024-02-08T23:21:10.333458219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-vvd62,Uid:130b720d-2ffa-4d10-8b9b-82871dbd2adb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ed93cf8e726fb9f560364c0305bf2efcdd67acc6985a73790d7ae74fbbd4e58\"" Feb 8 23:21:10.344971 env[1345]: time="2024-02-08T23:21:10.344906041Z" level=info msg="CreateContainer within sandbox \"7ed93cf8e726fb9f560364c0305bf2efcdd67acc6985a73790d7ae74fbbd4e58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:21:10.362828 env[1345]: time="2024-02-08T23:21:10.362782476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-k9csc,Uid:cbc0d699-ea3f-410e-b3a2-5dfee05480e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5deb636a56c8c48036fb8ea91344bb2b24d8d236429ce2beebd38caa41883ed2\"" Feb 8 23:21:10.368213 env[1345]: time="2024-02-08T23:21:10.368038686Z" level=info msg="CreateContainer within sandbox \"5deb636a56c8c48036fb8ea91344bb2b24d8d236429ce2beebd38caa41883ed2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:21:10.401034 env[1345]: time="2024-02-08T23:21:10.400981750Z" level=info msg="CreateContainer within sandbox \"7ed93cf8e726fb9f560364c0305bf2efcdd67acc6985a73790d7ae74fbbd4e58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8942c6353c3184a0fc7aefca557b4a805c5f6a59b6fe863a6a0476cff598cb06\"" Feb 8 23:21:10.401652 env[1345]: time="2024-02-08T23:21:10.401615851Z" level=info msg="StartContainer for \"8942c6353c3184a0fc7aefca557b4a805c5f6a59b6fe863a6a0476cff598cb06\"" Feb 8 23:21:10.421440 env[1345]: time="2024-02-08T23:21:10.419027785Z" level=info msg="CreateContainer within sandbox \"5deb636a56c8c48036fb8ea91344bb2b24d8d236429ce2beebd38caa41883ed2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88937c122c31a9f4176323fd4bb9086aa4ff7beb6be75ae77b8a9887c4d4bd5a\"" Feb 8 23:21:10.421440 env[1345]: time="2024-02-08T23:21:10.420997089Z" level=info msg="StartContainer for \"88937c122c31a9f4176323fd4bb9086aa4ff7beb6be75ae77b8a9887c4d4bd5a\"" Feb 8 23:21:10.429861 systemd[1]: Started cri-containerd-8942c6353c3184a0fc7aefca557b4a805c5f6a59b6fe863a6a0476cff598cb06.scope. Feb 8 23:21:10.455306 systemd[1]: Started cri-containerd-88937c122c31a9f4176323fd4bb9086aa4ff7beb6be75ae77b8a9887c4d4bd5a.scope. 
Feb 8 23:21:10.502541 env[1345]: time="2024-02-08T23:21:10.502489846Z" level=info msg="StartContainer for \"8942c6353c3184a0fc7aefca557b4a805c5f6a59b6fe863a6a0476cff598cb06\" returns successfully" Feb 8 23:21:10.559346 env[1345]: time="2024-02-08T23:21:10.559291856Z" level=info msg="StartContainer for \"88937c122c31a9f4176323fd4bb9086aa4ff7beb6be75ae77b8a9887c4d4bd5a\" returns successfully" Feb 8 23:21:11.323399 kubelet[2452]: I0208 23:21:11.323365 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-vvd62" podStartSLOduration=25.323327128 pod.CreationTimestamp="2024-02-08 23:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:21:11.322400027 +0000 UTC m=+40.139517321" watchObservedRunningTime="2024-02-08 23:21:11.323327128 +0000 UTC m=+40.140444522" Feb 8 23:21:11.361554 kubelet[2452]: I0208 23:21:11.361518 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-k9csc" podStartSLOduration=25.361475601 pod.CreationTimestamp="2024-02-08 23:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:21:11.349067478 +0000 UTC m=+40.166184772" watchObservedRunningTime="2024-02-08 23:21:11.361475601 +0000 UTC m=+40.178592895" Feb 8 23:23:39.787366 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.12.6:41822.service. Feb 8 23:23:40.408640 sshd[3859]: Accepted publickey for core from 10.200.12.6 port 41822 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:23:40.410218 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:40.415913 systemd[1]: Started session-8.scope. Feb 8 23:23:40.416365 systemd-logind[1326]: New session 8 of user core. Feb 8 23:23:40.947983 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:40.951404 systemd[1]: sshd@5-10.200.8.4:22-10.200.12.6:41822.service: Deactivated successfully. Feb 8 23:23:40.952494 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:23:40.953235 systemd-logind[1326]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:23:40.954092 systemd-logind[1326]: Removed session 8. Feb 8 23:23:46.052985 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.12.6:41826.service. Feb 8 23:23:46.666464 sshd[3880]: Accepted publickey for core from 10.200.12.6 port 41826 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:23:46.668166 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:46.674441 systemd[1]: Started session-9.scope. Feb 8 23:23:46.675005 systemd-logind[1326]: New session 9 of user core. Feb 8 23:23:47.165771 sshd[3880]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:47.169025 systemd[1]: sshd@6-10.200.8.4:22-10.200.12.6:41826.service: Deactivated successfully. Feb 8 23:23:47.170101 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:23:47.170896 systemd-logind[1326]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:23:47.171842 systemd-logind[1326]: Removed session 9. Feb 8 23:23:52.264149 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.12.6:44396.service. 
Feb 8 23:23:52.884185 sshd[3895]: Accepted publickey for core from 10.200.12.6 port 44396 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:23:52.885958 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:52.891913 systemd[1]: Started session-10.scope. Feb 8 23:23:52.892355 systemd-logind[1326]: New session 10 of user core. Feb 8 23:23:53.378150 sshd[3895]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:53.381421 systemd[1]: sshd@7-10.200.8.4:22-10.200.12.6:44396.service: Deactivated successfully. Feb 8 23:23:53.382441 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:23:53.383139 systemd-logind[1326]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:23:53.383969 systemd-logind[1326]: Removed session 10. Feb 8 23:23:58.483852 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.12.6:60426.service. Feb 8 23:23:59.138962 sshd[3908]: Accepted publickey for core from 10.200.12.6 port 60426 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:23:59.140656 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:59.146283 systemd[1]: Started session-11.scope. Feb 8 23:23:59.146918 systemd-logind[1326]: New session 11 of user core. Feb 8 23:23:59.633679 sshd[3908]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:59.637156 systemd[1]: sshd@8-10.200.8.4:22-10.200.12.6:60426.service: Deactivated successfully. Feb 8 23:23:59.638554 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:23:59.639514 systemd-logind[1326]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:23:59.640505 systemd-logind[1326]: Removed session 11. Feb 8 23:24:04.735478 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.12.6:60430.service. Feb 8 23:24:05.350001 sshd[3921]: Accepted publickey for core from 10.200.12.6 port 60430 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:05.351736 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:05.358154 systemd[1]: Started session-12.scope. Feb 8 23:24:05.359489 systemd-logind[1326]: New session 12 of user core. Feb 8 23:24:05.847081 sshd[3921]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:05.850459 systemd[1]: sshd@9-10.200.8.4:22-10.200.12.6:60430.service: Deactivated successfully. Feb 8 23:24:05.851590 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:24:05.852459 systemd-logind[1326]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:24:05.853386 systemd-logind[1326]: Removed session 12. Feb 8 23:24:05.952878 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.12.6:60432.service. Feb 8 23:24:06.571220 sshd[3933]: Accepted publickey for core from 10.200.12.6 port 60432 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:06.572760 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:06.578041 systemd[1]: Started session-13.scope. Feb 8 23:24:06.578464 systemd-logind[1326]: New session 13 of user core. Feb 8 23:24:07.840751 sshd[3933]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:07.844614 systemd[1]: sshd@10-10.200.8.4:22-10.200.12.6:60432.service: Deactivated successfully. Feb 8 23:24:07.845594 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:24:07.846302 systemd-logind[1326]: Session 13 logged out. Waiting for processes to exit. 
Feb 8 23:24:07.847182 systemd-logind[1326]: Removed session 13. Feb 8 23:24:07.946659 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.12.6:54464.service. Feb 8 23:24:08.571795 sshd[3943]: Accepted publickey for core from 10.200.12.6 port 54464 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:08.573207 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:08.578663 systemd[1]: Started session-14.scope. Feb 8 23:24:08.579044 systemd-logind[1326]: New session 14 of user core. Feb 8 23:24:09.068380 sshd[3943]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:09.072727 systemd-logind[1326]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:24:09.073854 systemd[1]: sshd@11-10.200.8.4:22-10.200.12.6:54464.service: Deactivated successfully. Feb 8 23:24:09.074814 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:24:09.076174 systemd-logind[1326]: Removed session 14. Feb 8 23:24:14.176496 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.12.6:54478.service. Feb 8 23:24:14.794961 sshd[3959]: Accepted publickey for core from 10.200.12.6 port 54478 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:14.796601 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:14.802334 systemd[1]: Started session-15.scope. Feb 8 23:24:14.802908 systemd-logind[1326]: New session 15 of user core. Feb 8 23:24:15.290637 sshd[3959]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:15.293923 systemd[1]: sshd@12-10.200.8.4:22-10.200.12.6:54478.service: Deactivated successfully. Feb 8 23:24:15.295124 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:24:15.296033 systemd-logind[1326]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:24:15.297055 systemd-logind[1326]: Removed session 15. Feb 8 23:24:20.398758 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.12.6:54414.service. Feb 8 23:24:21.022049 sshd[3973]: Accepted publickey for core from 10.200.12.6 port 54414 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:21.023540 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:21.028838 systemd[1]: Started session-16.scope. Feb 8 23:24:21.029287 systemd-logind[1326]: New session 16 of user core. Feb 8 23:24:21.521841 sshd[3973]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:21.525435 systemd[1]: sshd@13-10.200.8.4:22-10.200.12.6:54414.service: Deactivated successfully. Feb 8 23:24:21.526794 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:24:21.527676 systemd-logind[1326]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:24:21.528785 systemd-logind[1326]: Removed session 16. Feb 8 23:24:21.628056 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.12.6:54420.service. Feb 8 23:24:22.248184 sshd[3985]: Accepted publickey for core from 10.200.12.6 port 54420 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:22.249905 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:22.255104 systemd[1]: Started session-17.scope. Feb 8 23:24:22.255889 systemd-logind[1326]: New session 17 of user core. Feb 8 23:24:22.806287 sshd[3985]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:22.809956 systemd[1]: sshd@14-10.200.8.4:22-10.200.12.6:54420.service: Deactivated successfully. 
Feb 8 23:24:22.810983 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:24:22.812238 systemd-logind[1326]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:24:22.813166 systemd-logind[1326]: Removed session 17. Feb 8 23:24:22.914073 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.12.6:54428.service. Feb 8 23:24:23.536055 sshd[3994]: Accepted publickey for core from 10.200.12.6 port 54428 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:23.537743 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:23.543899 systemd-logind[1326]: New session 18 of user core. Feb 8 23:24:23.544474 systemd[1]: Started session-18.scope. Feb 8 23:24:25.049520 sshd[3994]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:25.052740 systemd[1]: sshd@15-10.200.8.4:22-10.200.12.6:54428.service: Deactivated successfully. Feb 8 23:24:25.053710 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:24:25.054481 systemd-logind[1326]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:24:25.055859 systemd-logind[1326]: Removed session 18. Feb 8 23:24:25.153439 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.12.6:54442.service. Feb 8 23:24:25.772979 sshd[4059]: Accepted publickey for core from 10.200.12.6 port 54442 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:25.774783 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:25.779994 systemd-logind[1326]: New session 19 of user core. Feb 8 23:24:25.780480 systemd[1]: Started session-19.scope. Feb 8 23:24:26.384368 sshd[4059]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:26.387531 systemd[1]: sshd@16-10.200.8.4:22-10.200.12.6:54442.service: Deactivated successfully. Feb 8 23:24:26.388808 systemd-logind[1326]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:24:26.388899 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:24:26.390111 systemd-logind[1326]: Removed session 19. Feb 8 23:24:26.489247 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.12.6:54448.service. Feb 8 23:24:27.108108 sshd[4069]: Accepted publickey for core from 10.200.12.6 port 54448 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:27.110364 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:27.116039 systemd[1]: Started session-20.scope. Feb 8 23:24:27.116532 systemd-logind[1326]: New session 20 of user core. Feb 8 23:24:27.614721 sshd[4069]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:27.617986 systemd[1]: sshd@17-10.200.8.4:22-10.200.12.6:54448.service: Deactivated successfully. Feb 8 23:24:27.619029 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:24:27.619785 systemd-logind[1326]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:24:27.620650 systemd-logind[1326]: Removed session 20. Feb 8 23:24:32.724356 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.12.6:53414.service. Feb 8 23:24:33.336872 sshd[4110]: Accepted publickey for core from 10.200.12.6 port 53414 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:33.338425 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:33.343705 systemd[1]: Started session-21.scope. Feb 8 23:24:33.344159 systemd-logind[1326]: New session 21 of user core. 
Feb 8 23:24:33.824350 sshd[4110]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:33.827818 systemd[1]: sshd@18-10.200.8.4:22-10.200.12.6:53414.service: Deactivated successfully. Feb 8 23:24:33.828923 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:24:33.829762 systemd-logind[1326]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:24:33.830705 systemd-logind[1326]: Removed session 21. Feb 8 23:24:38.929952 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.12.6:42292.service. Feb 8 23:24:39.585894 sshd[4121]: Accepted publickey for core from 10.200.12.6 port 42292 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:39.587333 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:39.592569 systemd[1]: Started session-22.scope. Feb 8 23:24:39.593181 systemd-logind[1326]: New session 22 of user core. Feb 8 23:24:40.092964 sshd[4121]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:40.095987 systemd[1]: sshd@19-10.200.8.4:22-10.200.12.6:42292.service: Deactivated successfully. Feb 8 23:24:40.097445 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:24:40.097469 systemd-logind[1326]: Session 22 logged out. Waiting for processes to exit. Feb 8 23:24:40.098671 systemd-logind[1326]: Removed session 22. Feb 8 23:24:45.197895 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.12.6:42308.service. Feb 8 23:24:45.826276 sshd[4133]: Accepted publickey for core from 10.200.12.6 port 42308 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:45.827694 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:45.831496 systemd-logind[1326]: New session 23 of user core. Feb 8 23:24:45.833707 systemd[1]: Started session-23.scope. Feb 8 23:24:46.322827 sshd[4133]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:46.326513 systemd[1]: sshd@20-10.200.8.4:22-10.200.12.6:42308.service: Deactivated successfully. Feb 8 23:24:46.327521 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:24:46.328209 systemd-logind[1326]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:24:46.329099 systemd-logind[1326]: Removed session 23. Feb 8 23:24:46.428367 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.12.6:42316.service. Feb 8 23:24:47.051959 sshd[4144]: Accepted publickey for core from 10.200.12.6 port 42316 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:47.053355 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:47.058483 systemd-logind[1326]: New session 24 of user core. Feb 8 23:24:47.059157 systemd[1]: Started session-24.scope. Feb 8 23:24:48.690075 env[1345]: time="2024-02-08T23:24:48.690019632Z" level=info msg="StopContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" with timeout 30 (s)" Feb 8 23:24:48.691274 env[1345]: time="2024-02-08T23:24:48.691230327Z" level=info msg="Stop container \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" with signal terminated" Feb 8 23:24:48.710850 systemd[1]: cri-containerd-bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2.scope: Deactivated successfully. Feb 8 23:24:48.716132 systemd[1]: run-containerd-runc-k8s.io-7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412-runc.jDxQqD.mount: Deactivated successfully. 
Feb 8 23:24:48.738140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2-rootfs.mount: Deactivated successfully. Feb 8 23:24:48.746114 env[1345]: time="2024-02-08T23:24:48.746037908Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:24:48.752527 env[1345]: time="2024-02-08T23:24:48.752477082Z" level=info msg="StopContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" with timeout 1 (s)" Feb 8 23:24:48.752778 env[1345]: time="2024-02-08T23:24:48.752740181Z" level=info msg="Stop container \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" with signal terminated" Feb 8 23:24:48.760381 systemd-networkd[1493]: lxc_health: Link DOWN Feb 8 23:24:48.760392 systemd-networkd[1493]: lxc_health: Lost carrier Feb 8 23:24:48.782976 systemd[1]: cri-containerd-7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412.scope: Deactivated successfully. Feb 8 23:24:48.783258 systemd[1]: cri-containerd-7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412.scope: Consumed 7.269s CPU time. Feb 8 23:24:48.805007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412-rootfs.mount: Deactivated successfully. Feb 8 23:24:48.827091 env[1345]: time="2024-02-08T23:24:48.827039083Z" level=info msg="shim disconnected" id=7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412 Feb 8 23:24:48.827677 env[1345]: time="2024-02-08T23:24:48.827655181Z" level=warning msg="cleaning up after shim disconnected" id=7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412 namespace=k8s.io Feb 8 23:24:48.827889 env[1345]: time="2024-02-08T23:24:48.827874680Z" level=info msg="cleaning up dead shim" Feb 8 23:24:48.828244 env[1345]: time="2024-02-08T23:24:48.828212178Z" level=info msg="shim disconnected" id=bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2 Feb 8 23:24:48.828389 env[1345]: time="2024-02-08T23:24:48.828368778Z" level=warning msg="cleaning up after shim disconnected" id=bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2 namespace=k8s.io Feb 8 23:24:48.828524 env[1345]: time="2024-02-08T23:24:48.828505577Z" level=info msg="cleaning up dead shim" Feb 8 23:24:48.842344 env[1345]: time="2024-02-08T23:24:48.842307822Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4216 runtime=io.containerd.runc.v2\n" Feb 8 23:24:48.843829 env[1345]: time="2024-02-08T23:24:48.843799516Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4217 runtime=io.containerd.runc.v2\n" Feb 8 23:24:48.847200 env[1345]: time="2024-02-08T23:24:48.847165802Z" level=info msg="StopContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" returns successfully" Feb 8 23:24:48.848031 env[1345]: time="2024-02-08T23:24:48.847998699Z" level=info msg="StopPodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\"" Feb 8 23:24:48.848151 env[1345]: time="2024-02-08T23:24:48.848072799Z" level=info msg="Container to stop \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:48.848151 env[1345]: time="2024-02-08T23:24:48.848093299Z" level=info msg="Container to stop \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:48.848151 env[1345]: time="2024-02-08T23:24:48.848109499Z" level=info msg="Container to stop \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:48.848151 env[1345]: time="2024-02-08T23:24:48.848126599Z" level=info msg="Container to stop \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:48.848151 env[1345]: time="2024-02-08T23:24:48.848140698Z" level=info msg="Container to stop \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:48.849310 env[1345]: time="2024-02-08T23:24:48.849280094Z" level=info msg="StopContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" returns successfully" Feb 8 23:24:48.849963 env[1345]: time="2024-02-08T23:24:48.849930791Z" level=info msg="StopPodSandbox for \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\"" Feb 8 23:24:48.850062 env[1345]: time="2024-02-08T23:24:48.849996691Z" level=info msg="Container to stop \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:48.858026 systemd[1]: cri-containerd-b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3.scope: Deactivated successfully. Feb 8 23:24:48.862250 systemd[1]: cri-containerd-4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029.scope: Deactivated successfully. 
Feb 8 23:24:48.903910 env[1345]: time="2024-02-08T23:24:48.903855775Z" level=info msg="shim disconnected" id=b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3 Feb 8 23:24:48.905132 env[1345]: time="2024-02-08T23:24:48.905100970Z" level=warning msg="cleaning up after shim disconnected" id=b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3 namespace=k8s.io Feb 8 23:24:48.905132 env[1345]: time="2024-02-08T23:24:48.905123370Z" level=info msg="cleaning up dead shim" Feb 8 23:24:48.905363 env[1345]: time="2024-02-08T23:24:48.904217474Z" level=info msg="shim disconnected" id=4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029 Feb 8 23:24:48.905490 env[1345]: time="2024-02-08T23:24:48.905469869Z" level=warning msg="cleaning up after shim disconnected" id=4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029 namespace=k8s.io Feb 8 23:24:48.905576 env[1345]: time="2024-02-08T23:24:48.905561768Z" level=info msg="cleaning up dead shim" Feb 8 23:24:48.914208 env[1345]: time="2024-02-08T23:24:48.914176634Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4280 runtime=io.containerd.runc.v2\n" Feb 8 23:24:48.914746 env[1345]: time="2024-02-08T23:24:48.914706032Z" level=info msg="TearDown network for sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" successfully" Feb 8 23:24:48.914923 env[1345]: time="2024-02-08T23:24:48.914891931Z" level=info msg="StopPodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" returns successfully" Feb 8 23:24:48.920891 env[1345]: time="2024-02-08T23:24:48.920693808Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4279 runtime=io.containerd.runc.v2\n" Feb 8 23:24:48.921063 env[1345]: time="2024-02-08T23:24:48.921038206Z" level=info msg="TearDown network for sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" successfully" Feb 8 23:24:48.921144 env[1345]: time="2024-02-08T23:24:48.921061706Z" level=info msg="StopPodSandbox for \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" returns successfully" Feb 8 23:24:49.104912 kubelet[2452]: I0208 23:24:49.104858 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-bpf-maps\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.104912 kubelet[2452]: I0208 23:24:49.104924 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-hubble-tls\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105595 kubelet[2452]: I0208 23:24:49.104952 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-xtables-lock\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105595 kubelet[2452]: I0208 23:24:49.104985 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6m42\" (UniqueName: \"kubernetes.io/projected/3be482ba-1877-4428-b84c-af63f313ffea-kube-api-access-v6m42\") pod 
\"3be482ba-1877-4428-b84c-af63f313ffea\" (UID: \"3be482ba-1877-4428-b84c-af63f313ffea\") " Feb 8 23:24:49.105595 kubelet[2452]: I0208 23:24:49.105024 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3be482ba-1877-4428-b84c-af63f313ffea-cilium-config-path\") pod \"3be482ba-1877-4428-b84c-af63f313ffea\" (UID: \"3be482ba-1877-4428-b84c-af63f313ffea\") " Feb 8 23:24:49.105595 kubelet[2452]: I0208 23:24:49.105055 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-net\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105595 kubelet[2452]: I0208 23:24:49.105088 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-lib-modules\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105595 kubelet[2452]: I0208 23:24:49.105118 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-hostproc\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105895 kubelet[2452]: I0208 23:24:49.105155 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-config-path\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105895 kubelet[2452]: I0208 23:24:49.105184 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-etc-cni-netd\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105895 kubelet[2452]: I0208 23:24:49.105215 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f394164-0f20-4f64-a444-bbbf667c2cd9-clustermesh-secrets\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105895 kubelet[2452]: I0208 23:24:49.105248 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-cgroup\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105895 kubelet[2452]: I0208 23:24:49.105279 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cni-path\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.105895 kubelet[2452]: I0208 23:24:49.105312 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-run\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: 
\"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.106189 kubelet[2452]: I0208 23:24:49.105345 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhrgl\" (UniqueName: \"kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-kube-api-access-lhrgl\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.106189 kubelet[2452]: I0208 23:24:49.105372 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-kernel\") pod \"6f394164-0f20-4f64-a444-bbbf667c2cd9\" (UID: \"6f394164-0f20-4f64-a444-bbbf667c2cd9\") " Feb 8 23:24:49.106189 kubelet[2452]: I0208 23:24:49.105492 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.106189 kubelet[2452]: I0208 23:24:49.105552 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.107243 kubelet[2452]: I0208 23:24:49.106517 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.107243 kubelet[2452]: W0208 23:24:49.106679 2452 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6f394164-0f20-4f64-a444-bbbf667c2cd9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:24:49.109979 kubelet[2452]: I0208 23:24:49.109930 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:24:49.110277 kubelet[2452]: I0208 23:24:49.110241 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.111070 kubelet[2452]: W0208 23:24:49.110838 2452 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3be482ba-1877-4428-b84c-af63f313ffea/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:24:49.114219 kubelet[2452]: I0208 23:24:49.114183 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3be482ba-1877-4428-b84c-af63f313ffea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3be482ba-1877-4428-b84c-af63f313ffea" (UID: "3be482ba-1877-4428-b84c-af63f313ffea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:24:49.114647 kubelet[2452]: I0208 23:24:49.109996 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.114769 kubelet[2452]: I0208 23:24:49.110010 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.114879 kubelet[2452]: I0208 23:24:49.110887 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.114879 kubelet[2452]: I0208 23:24:49.110909 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.115000 kubelet[2452]: I0208 23:24:49.110925 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.115000 kubelet[2452]: I0208 23:24:49.109978 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.115000 kubelet[2452]: I0208 23:24:49.111028 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:49.115000 kubelet[2452]: I0208 23:24:49.114609 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f394164-0f20-4f64-a444-bbbf667c2cd9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:24:49.115762 kubelet[2452]: I0208 23:24:49.115734 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-kube-api-access-lhrgl" (OuterVolumeSpecName: "kube-api-access-lhrgl") pod "6f394164-0f20-4f64-a444-bbbf667c2cd9" (UID: "6f394164-0f20-4f64-a444-bbbf667c2cd9"). InnerVolumeSpecName "kube-api-access-lhrgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:49.117228 kubelet[2452]: I0208 23:24:49.117206 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be482ba-1877-4428-b84c-af63f313ffea-kube-api-access-v6m42" (OuterVolumeSpecName: "kube-api-access-v6m42") pod "3be482ba-1877-4428-b84c-af63f313ffea" (UID: "3be482ba-1877-4428-b84c-af63f313ffea"). InnerVolumeSpecName "kube-api-access-v6m42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:49.205652 kubelet[2452]: I0208 23:24:49.205608 2452 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-lhrgl\" (UniqueName: \"kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-kube-api-access-lhrgl\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205652 kubelet[2452]: I0208 23:24:49.205651 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-cgroup\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205652 kubelet[2452]: I0208 23:24:49.205667 2452 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cni-path\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205683 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-run\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205697 2452 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205709 2452 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-bpf-maps\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205734 2452 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f394164-0f20-4f64-a444-bbbf667c2cd9-hubble-tls\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205750 2452 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-host-proc-sys-net\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205763 2452 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-xtables-lock\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205776 2452 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-v6m42\" (UniqueName: \"kubernetes.io/projected/3be482ba-1877-4428-b84c-af63f313ffea-kube-api-access-v6m42\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.205968 kubelet[2452]: I0208 23:24:49.205790 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3be482ba-1877-4428-b84c-af63f313ffea-cilium-config-path\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.206167 kubelet[2452]: I0208 23:24:49.205804 2452 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-lib-modules\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.206167 kubelet[2452]: I0208 23:24:49.205816 2452 
reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-hostproc\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.206167 kubelet[2452]: I0208 23:24:49.205830 2452 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f394164-0f20-4f64-a444-bbbf667c2cd9-clustermesh-secrets\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.206167 kubelet[2452]: I0208 23:24:49.205846 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f394164-0f20-4f64-a444-bbbf667c2cd9-cilium-config-path\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.206167 kubelet[2452]: I0208 23:24:49.205891 2452 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f394164-0f20-4f64-a444-bbbf667c2cd9-etc-cni-netd\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:49.706571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3-rootfs.mount: Deactivated successfully. Feb 8 23:24:49.706724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3-shm.mount: Deactivated successfully. Feb 8 23:24:49.706823 systemd[1]: var-lib-kubelet-pods-3be482ba\x2d1877\x2d4428\x2db84c\x2daf63f313ffea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv6m42.mount: Deactivated successfully. Feb 8 23:24:49.706928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029-rootfs.mount: Deactivated successfully. Feb 8 23:24:49.707025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029-shm.mount: Deactivated successfully. Feb 8 23:24:49.707122 systemd[1]: var-lib-kubelet-pods-6f394164\x2d0f20\x2d4f64\x2da444\x2dbbbf667c2cd9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhrgl.mount: Deactivated successfully. Feb 8 23:24:49.707228 systemd[1]: var-lib-kubelet-pods-6f394164\x2d0f20\x2d4f64\x2da444\x2dbbbf667c2cd9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:24:49.707334 systemd[1]: var-lib-kubelet-pods-6f394164\x2d0f20\x2d4f64\x2da444\x2dbbbf667c2cd9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:24:49.771791 kubelet[2452]: I0208 23:24:49.771761 2452 scope.go:115] "RemoveContainer" containerID="bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2" Feb 8 23:24:49.776079 env[1345]: time="2024-02-08T23:24:49.776027798Z" level=info msg="RemoveContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\"" Feb 8 23:24:49.778804 systemd[1]: Removed slice kubepods-besteffort-pod3be482ba_1877_4428_b84c_af63f313ffea.slice. Feb 8 23:24:49.788218 systemd[1]: Removed slice kubepods-burstable-pod6f394164_0f20_4f64_a444_bbbf667c2cd9.slice. Feb 8 23:24:49.788338 systemd[1]: kubepods-burstable-pod6f394164_0f20_4f64_a444_bbbf667c2cd9.slice: Consumed 7.375s CPU time. 
Feb 8 23:24:49.793094 env[1345]: time="2024-02-08T23:24:49.793058130Z" level=info msg="RemoveContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" returns successfully" Feb 8 23:24:49.794574 kubelet[2452]: I0208 23:24:49.794543 2452 scope.go:115] "RemoveContainer" containerID="bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2" Feb 8 23:24:49.795022 env[1345]: time="2024-02-08T23:24:49.794940522Z" level=error msg="ContainerStatus for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\": not found" Feb 8 23:24:49.795261 kubelet[2452]: E0208 23:24:49.795238 2452 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\": not found" containerID="bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2" Feb 8 23:24:49.795368 kubelet[2452]: I0208 23:24:49.795351 2452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2} err="failed to get container status \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\": not found" Feb 8 23:24:49.795529 kubelet[2452]: I0208 23:24:49.795513 2452 scope.go:115] "RemoveContainer" containerID="7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412" Feb 8 23:24:49.797624 env[1345]: time="2024-02-08T23:24:49.797592012Z" level=info msg="RemoveContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\"" Feb 8 23:24:49.807345 env[1345]: time="2024-02-08T23:24:49.807304273Z" level=info msg="RemoveContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" returns successfully" Feb 8 23:24:49.807677 kubelet[2452]: I0208 23:24:49.807662 2452 scope.go:115] "RemoveContainer" containerID="6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea" Feb 8 23:24:49.810102 env[1345]: time="2024-02-08T23:24:49.810071262Z" level=info msg="RemoveContainer for \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\"" Feb 8 23:24:49.820928 env[1345]: time="2024-02-08T23:24:49.820872719Z" level=info msg="RemoveContainer for \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\" returns successfully" Feb 8 23:24:49.821083 kubelet[2452]: I0208 23:24:49.821055 2452 scope.go:115] "RemoveContainer" containerID="8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903" Feb 8 23:24:49.822468 env[1345]: time="2024-02-08T23:24:49.822431913Z" level=info msg="RemoveContainer for \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\"" Feb 8 23:24:49.835643 env[1345]: time="2024-02-08T23:24:49.835606260Z" level=info msg="RemoveContainer for \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\" returns successfully" Feb 8 23:24:49.835811 kubelet[2452]: I0208 23:24:49.835789 2452 scope.go:115] "RemoveContainer" containerID="d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953" Feb 8 23:24:49.836816 env[1345]: time="2024-02-08T23:24:49.836785956Z" level=info msg="RemoveContainer for 
\"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\"" Feb 8 23:24:49.850566 env[1345]: time="2024-02-08T23:24:49.850536501Z" level=info msg="RemoveContainer for \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\" returns successfully" Feb 8 23:24:49.850803 kubelet[2452]: I0208 23:24:49.850781 2452 scope.go:115] "RemoveContainer" containerID="8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400" Feb 8 23:24:49.851774 env[1345]: time="2024-02-08T23:24:49.851748096Z" level=info msg="RemoveContainer for \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\"" Feb 8 23:24:49.861947 env[1345]: time="2024-02-08T23:24:49.861911355Z" level=info msg="RemoveContainer for \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\" returns successfully" Feb 8 23:24:49.862121 kubelet[2452]: I0208 23:24:49.862101 2452 scope.go:115] "RemoveContainer" containerID="7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412" Feb 8 23:24:49.862403 env[1345]: time="2024-02-08T23:24:49.862347754Z" level=error msg="ContainerStatus for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\": not found" Feb 8 23:24:49.862598 kubelet[2452]: E0208 23:24:49.862573 2452 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\": not found" containerID="7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412" Feb 8 23:24:49.862681 kubelet[2452]: I0208 23:24:49.862614 2452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412} err="failed to get container status \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\": not found" Feb 8 23:24:49.862681 kubelet[2452]: I0208 23:24:49.862629 2452 scope.go:115] "RemoveContainer" containerID="6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea" Feb 8 23:24:49.862846 env[1345]: time="2024-02-08T23:24:49.862791452Z" level=error msg="ContainerStatus for \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\": not found" Feb 8 23:24:49.862971 kubelet[2452]: E0208 23:24:49.862953 2452 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\": not found" containerID="6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea" Feb 8 23:24:49.863044 kubelet[2452]: I0208 23:24:49.862985 2452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea} err="failed to get container status \"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6f745e3bffab522c25b64a70f0c8eec8cef6168a35d90e2c42c01a1ab60e3eea\": not found" Feb 8 23:24:49.863044 kubelet[2452]: I0208 23:24:49.862998 2452 scope.go:115] "RemoveContainer" containerID="8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903" Feb 8 23:24:49.863203 env[1345]: time="2024-02-08T23:24:49.863154051Z" level=error msg="ContainerStatus for \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\": not found" Feb 8 23:24:49.863322 kubelet[2452]: E0208 23:24:49.863304 2452 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\": not found" containerID="8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903" Feb 8 23:24:49.863388 kubelet[2452]: I0208 23:24:49.863335 2452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903} err="failed to get container status \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b30cd3dcba592404a46cf8dad7601d1b850ff9bd3f7dce5bd708e0a0b638903\": not found" Feb 8 23:24:49.863388 kubelet[2452]: I0208 23:24:49.863349 2452 scope.go:115] "RemoveContainer" containerID="d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953" Feb 8 23:24:49.863570 env[1345]: time="2024-02-08T23:24:49.863518749Z" level=error msg="ContainerStatus for \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\": not found" Feb 8 23:24:49.863696 kubelet[2452]: E0208 23:24:49.863678 2452 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\": not found" containerID="d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953" Feb 8 23:24:49.863769 kubelet[2452]: I0208 23:24:49.863710 2452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953} err="failed to get container status \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\": rpc error: code = NotFound desc = an error occurred when try to find container \"d19c04da0d830c78b818c42c9bba38837bddbaed01e52f0c4c59afc955bc8953\": not found" Feb 8 23:24:49.863769 kubelet[2452]: I0208 23:24:49.863723 2452 scope.go:115] "RemoveContainer" containerID="8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400" Feb 8 23:24:49.863921 env[1345]: time="2024-02-08T23:24:49.863876048Z" level=error msg="ContainerStatus for \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\": not found" Feb 8 23:24:49.864031 kubelet[2452]: E0208 23:24:49.864014 2452 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\": not found" containerID="8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400" Feb 8 23:24:49.864103 kubelet[2452]: I0208 23:24:49.864045 2452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400} err="failed to get container status \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cd94a84d5a4db28e2f4a39c2dab922ff1a8263a7fae10f7b3ea781803aac400\": not found" Feb 8 23:24:50.193940 env[1345]: time="2024-02-08T23:24:50.193861537Z" level=info msg="StopContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" with timeout 1 (s)" Feb 8 23:24:50.195851 env[1345]: time="2024-02-08T23:24:50.195633830Z" level=error msg="StopContainer for \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\": not found" Feb 8 23:24:50.196273 env[1345]: time="2024-02-08T23:24:50.195495831Z" level=info msg="StopContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" with timeout 1 (s)" Feb 8 23:24:50.196717 env[1345]: time="2024-02-08T23:24:50.196650226Z" level=error msg="StopContainer for \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\": not found" Feb 8 23:24:50.198948 kubelet[2452]: E0208 23:24:50.197089 2452 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2\": not found" containerID="bdf812a6496ee21e1c0ab817dd39000655d8b1b4e55c95c6cda223f0d95a7bf2" Feb 8 23:24:50.198948 kubelet[2452]: E0208 23:24:50.197234 2452 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412\": not found" containerID="7b6251528144de930a6187c18ddff7e9483312dbcf6049e506cacf6cb88b0412" Feb 8 23:24:50.198948 kubelet[2452]: I0208 23:24:50.197782 2452 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3be482ba-1877-4428-b84c-af63f313ffea path="/var/lib/kubelet/pods/3be482ba-1877-4428-b84c-af63f313ffea/volumes" Feb 8 23:24:50.198948 kubelet[2452]: I0208 23:24:50.198346 2452 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6f394164-0f20-4f64-a444-bbbf667c2cd9 path="/var/lib/kubelet/pods/6f394164-0f20-4f64-a444-bbbf667c2cd9/volumes" Feb 8 23:24:50.199682 env[1345]: time="2024-02-08T23:24:50.199658714Z" level=info msg="StopPodSandbox for \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\"" Feb 8 23:24:50.199890 env[1345]: time="2024-02-08T23:24:50.199845114Z" level=info msg="TearDown network for sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" successfully" Feb 8 23:24:50.199978 env[1345]: time="2024-02-08T23:24:50.199960613Z" level=info msg="StopPodSandbox for 
\"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" returns successfully" Feb 8 23:24:50.200173 env[1345]: time="2024-02-08T23:24:50.200151213Z" level=info msg="StopPodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\"" Feb 8 23:24:50.200399 env[1345]: time="2024-02-08T23:24:50.200351512Z" level=info msg="TearDown network for sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" successfully" Feb 8 23:24:50.200531 env[1345]: time="2024-02-08T23:24:50.200509911Z" level=info msg="StopPodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" returns successfully" Feb 8 23:24:50.730266 sshd[4144]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:50.733454 systemd[1]: sshd@21-10.200.8.4:22-10.200.12.6:42316.service: Deactivated successfully. Feb 8 23:24:50.734431 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:24:50.735171 systemd-logind[1326]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:24:50.736062 systemd-logind[1326]: Removed session 24. Feb 8 23:24:50.837334 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.12.6:40630.service. Feb 8 23:24:51.465031 sshd[4313]: Accepted publickey for core from 10.200.12.6 port 40630 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:51.466907 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:51.473081 systemd[1]: Started session-25.scope. Feb 8 23:24:51.473964 systemd-logind[1326]: New session 25 of user core. Feb 8 23:24:52.286489 kubelet[2452]: E0208 23:24:52.286449 2452 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:24:52.397671 kubelet[2452]: I0208 23:24:52.397632 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:24:52.397985 kubelet[2452]: E0208 23:24:52.397962 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f394164-0f20-4f64-a444-bbbf667c2cd9" containerName="apply-sysctl-overwrites" Feb 8 23:24:52.398146 kubelet[2452]: E0208 23:24:52.398131 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f394164-0f20-4f64-a444-bbbf667c2cd9" containerName="clean-cilium-state" Feb 8 23:24:52.398292 kubelet[2452]: E0208 23:24:52.398273 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f394164-0f20-4f64-a444-bbbf667c2cd9" containerName="cilium-agent" Feb 8 23:24:52.398401 kubelet[2452]: E0208 23:24:52.398390 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f394164-0f20-4f64-a444-bbbf667c2cd9" containerName="mount-cgroup" Feb 8 23:24:52.398524 kubelet[2452]: E0208 23:24:52.398511 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f394164-0f20-4f64-a444-bbbf667c2cd9" containerName="mount-bpf-fs" Feb 8 23:24:52.398612 kubelet[2452]: E0208 23:24:52.398602 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3be482ba-1877-4428-b84c-af63f313ffea" containerName="cilium-operator" Feb 8 23:24:52.398726 kubelet[2452]: I0208 23:24:52.398712 2452 memory_manager.go:346] "RemoveStaleState removing state" podUID="6f394164-0f20-4f64-a444-bbbf667c2cd9" containerName="cilium-agent" Feb 8 23:24:52.398819 kubelet[2452]: I0208 23:24:52.398809 2452 memory_manager.go:346] "RemoveStaleState removing state" podUID="3be482ba-1877-4428-b84c-af63f313ffea" containerName="cilium-operator" Feb 8 
23:24:52.406345 systemd[1]: Created slice kubepods-burstable-pod96a40a54_a3f2_4745_86e0_d1e7a407c43d.slice. Feb 8 23:24:52.423183 kubelet[2452]: W0208 23:24:52.423150 2452 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-5bade47376" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-5bade47376' and this object Feb 8 23:24:52.423403 kubelet[2452]: E0208 23:24:52.423388 2452 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-5bade47376" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-5bade47376' and this object Feb 8 23:24:52.460164 sshd[4313]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:52.463143 systemd[1]: sshd@22-10.200.8.4:22-10.200.12.6:40630.service: Deactivated successfully. Feb 8 23:24:52.464171 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:24:52.464905 systemd-logind[1326]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:24:52.465899 systemd-logind[1326]: Removed session 25. Feb 8 23:24:52.523337 kubelet[2452]: I0208 23:24:52.523282 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-config-path\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523587 kubelet[2452]: I0208 23:24:52.523361 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hubble-tls\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523587 kubelet[2452]: I0208 23:24:52.523398 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-etc-cni-netd\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523587 kubelet[2452]: I0208 23:24:52.523450 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-bpf-maps\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523587 kubelet[2452]: I0208 23:24:52.523499 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hostproc\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523587 kubelet[2452]: I0208 23:24:52.523534 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-lib-modules\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " 
pod="kube-system/cilium-448zx" Feb 8 23:24:52.523587 kubelet[2452]: I0208 23:24:52.523569 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-ipsec-secrets\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523922 kubelet[2452]: I0208 23:24:52.523603 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn52x\" (UniqueName: \"kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-kube-api-access-bn52x\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523922 kubelet[2452]: I0208 23:24:52.523639 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-kernel\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523922 kubelet[2452]: I0208 23:24:52.523681 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-run\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523922 kubelet[2452]: I0208 23:24:52.523719 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-cgroup\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523922 kubelet[2452]: I0208 23:24:52.523756 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cni-path\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.523922 kubelet[2452]: I0208 23:24:52.523798 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-xtables-lock\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.524260 kubelet[2452]: I0208 23:24:52.523837 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-clustermesh-secrets\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.524260 kubelet[2452]: I0208 23:24:52.523879 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-net\") pod \"cilium-448zx\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " pod="kube-system/cilium-448zx" Feb 8 23:24:52.564027 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.12.6:40646.service. 
Feb 8 23:24:53.176384 sshd[4323]: Accepted publickey for core from 10.200.12.6 port 40646 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:53.177890 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:53.182554 systemd-logind[1326]: New session 26 of user core. Feb 8 23:24:53.183435 systemd[1]: Started session-26.scope. Feb 8 23:24:53.311919 env[1345]: time="2024-02-08T23:24:53.311867580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-448zx,Uid:96a40a54-a3f2-4745-86e0-d1e7a407c43d,Namespace:kube-system,Attempt:0,}" Feb 8 23:24:53.348877 env[1345]: time="2024-02-08T23:24:53.348798736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:53.349053 env[1345]: time="2024-02-08T23:24:53.348854736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:53.349053 env[1345]: time="2024-02-08T23:24:53.348868036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:53.349364 env[1345]: time="2024-02-08T23:24:53.349243234Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49 pid=4337 runtime=io.containerd.runc.v2 Feb 8 23:24:53.366953 systemd[1]: Started cri-containerd-3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49.scope. Feb 8 23:24:53.394567 env[1345]: time="2024-02-08T23:24:53.394531558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-448zx,Uid:96a40a54-a3f2-4745-86e0-d1e7a407c43d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\"" Feb 8 23:24:53.397374 env[1345]: time="2024-02-08T23:24:53.397338547Z" level=info msg="CreateContainer within sandbox \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:24:53.436231 env[1345]: time="2024-02-08T23:24:53.436009797Z" level=info msg="CreateContainer within sandbox \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\"" Feb 8 23:24:53.438705 env[1345]: time="2024-02-08T23:24:53.438661786Z" level=info msg="StartContainer for \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\"" Feb 8 23:24:53.454516 systemd[1]: Started cri-containerd-2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174.scope. Feb 8 23:24:53.467893 systemd[1]: cri-containerd-2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174.scope: Deactivated successfully. 
Feb 8 23:24:53.530053 env[1345]: time="2024-02-08T23:24:53.529995631Z" level=info msg="shim disconnected" id=2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174 Feb 8 23:24:53.530053 env[1345]: time="2024-02-08T23:24:53.530050231Z" level=warning msg="cleaning up after shim disconnected" id=2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174 namespace=k8s.io Feb 8 23:24:53.530053 env[1345]: time="2024-02-08T23:24:53.530061131Z" level=info msg="cleaning up dead shim" Feb 8 23:24:53.543691 env[1345]: time="2024-02-08T23:24:53.543642178Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4403 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:24:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:24:53.544241 env[1345]: time="2024-02-08T23:24:53.544122776Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 8 23:24:53.545378 env[1345]: time="2024-02-08T23:24:53.544926373Z" level=error msg="Failed to pipe stdout of container \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\"" error="reading from a closed fifo" Feb 8 23:24:53.545562 env[1345]: time="2024-02-08T23:24:53.545133772Z" level=error msg="Failed to pipe stderr of container \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\"" error="reading from a closed fifo" Feb 8 23:24:53.549484 env[1345]: time="2024-02-08T23:24:53.549424355Z" level=error msg="StartContainer for \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:24:53.550280 kubelet[2452]: E0208 23:24:53.549797 2452 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174" Feb 8 23:24:53.550280 kubelet[2452]: E0208 23:24:53.549945 2452 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:24:53.550280 kubelet[2452]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:24:53.550280 kubelet[2452]: rm /hostbin/cilium-mount Feb 8 23:24:53.550960 kubelet[2452]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bn52x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-448zx_kube-system(96a40a54-a3f2-4745-86e0-d1e7a407c43d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:24:53.551083 kubelet[2452]: E0208 23:24:53.550001 2452 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-448zx" podUID=96a40a54-a3f2-4745-86e0-d1e7a407c43d Feb 8 23:24:53.685093 sshd[4323]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:53.689948 systemd[1]: sshd@23-10.200.8.4:22-10.200.12.6:40646.service: Deactivated successfully. Feb 8 23:24:53.691148 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:24:53.692080 systemd-logind[1326]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:24:53.693572 systemd-logind[1326]: Removed session 26. Feb 8 23:24:53.790370 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.12.6:40662.service. Feb 8 23:24:53.793968 env[1345]: time="2024-02-08T23:24:53.793920004Z" level=info msg="StopPodSandbox for \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\"" Feb 8 23:24:53.794207 env[1345]: time="2024-02-08T23:24:53.794175603Z" level=info msg="Container to stop \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:53.796675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49-shm.mount: Deactivated successfully. Feb 8 23:24:53.816588 systemd[1]: cri-containerd-3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49.scope: Deactivated successfully. 
Feb 8 23:24:53.844897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49-rootfs.mount: Deactivated successfully. Feb 8 23:24:53.864048 env[1345]: time="2024-02-08T23:24:53.863996731Z" level=info msg="shim disconnected" id=3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49 Feb 8 23:24:53.864365 env[1345]: time="2024-02-08T23:24:53.864339530Z" level=warning msg="cleaning up after shim disconnected" id=3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49 namespace=k8s.io Feb 8 23:24:53.864500 env[1345]: time="2024-02-08T23:24:53.864484629Z" level=info msg="cleaning up dead shim" Feb 8 23:24:53.872995 env[1345]: time="2024-02-08T23:24:53.872963796Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4440 runtime=io.containerd.runc.v2\n" Feb 8 23:24:53.873293 env[1345]: time="2024-02-08T23:24:53.873260395Z" level=info msg="TearDown network for sandbox \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" successfully" Feb 8 23:24:53.873293 env[1345]: time="2024-02-08T23:24:53.873288595Z" level=info msg="StopPodSandbox for \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" returns successfully" Feb 8 23:24:54.037094 kubelet[2452]: I0208 23:24:54.036949 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-kernel\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037094 kubelet[2452]: I0208 23:24:54.037007 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-xtables-lock\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037094 kubelet[2452]: I0208 23:24:54.037045 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-config-path\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037094 kubelet[2452]: I0208 23:24:54.037067 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-lib-modules\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037094 kubelet[2452]: I0208 23:24:54.037095 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-bpf-maps\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037516 kubelet[2452]: I0208 23:24:54.037125 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-clustermesh-secrets\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037516 kubelet[2452]: I0208 23:24:54.037146 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cni-path\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037516 kubelet[2452]: I0208 23:24:54.037167 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-net\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037516 kubelet[2452]: I0208 23:24:54.037193 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-ipsec-secrets\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037516 kubelet[2452]: I0208 23:24:54.037213 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-run\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037516 kubelet[2452]: I0208 23:24:54.037242 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hubble-tls\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037775 kubelet[2452]: I0208 23:24:54.037264 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-etc-cni-netd\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037775 kubelet[2452]: I0208 23:24:54.037289 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn52x\" (UniqueName: \"kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-kube-api-access-bn52x\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037775 kubelet[2452]: I0208 23:24:54.037314 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hostproc\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037775 kubelet[2452]: I0208 23:24:54.037341 2452 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-cgroup\") pod \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\" (UID: \"96a40a54-a3f2-4745-86e0-d1e7a407c43d\") " Feb 8 23:24:54.037775 kubelet[2452]: I0208 23:24:54.037440 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.038004 kubelet[2452]: I0208 23:24:54.037481 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.038004 kubelet[2452]: I0208 23:24:54.037504 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.038004 kubelet[2452]: W0208 23:24:54.037693 2452 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/96a40a54-a3f2-4745-86e0-d1e7a407c43d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:24:54.040992 kubelet[2452]: I0208 23:24:54.040193 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:24:54.040992 kubelet[2452]: I0208 23:24:54.040247 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.040992 kubelet[2452]: I0208 23:24:54.040382 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.040992 kubelet[2452]: I0208 23:24:54.040426 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.040992 kubelet[2452]: I0208 23:24:54.040857 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.041308 kubelet[2452]: I0208 23:24:54.041098 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hostproc" (OuterVolumeSpecName: "hostproc") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.041308 kubelet[2452]: I0208 23:24:54.041129 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.041308 kubelet[2452]: I0208 23:24:54.041152 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cni-path" (OuterVolumeSpecName: "cni-path") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:54.047885 systemd[1]: var-lib-kubelet-pods-96a40a54\x2da3f2\x2d4745\x2d86e0\x2dd1e7a407c43d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:24:54.048051 systemd[1]: var-lib-kubelet-pods-96a40a54\x2da3f2\x2d4745\x2d86e0\x2dd1e7a407c43d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:24:54.049635 kubelet[2452]: I0208 23:24:54.049606 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:24:54.051397 kubelet[2452]: I0208 23:24:54.051373 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:24:54.052824 kubelet[2452]: I0208 23:24:54.052799 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-kube-api-access-bn52x" (OuterVolumeSpecName: "kube-api-access-bn52x") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "kube-api-access-bn52x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:54.053118 kubelet[2452]: I0208 23:24:54.053098 2452 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "96a40a54-a3f2-4745-86e0-d1e7a407c43d" (UID: "96a40a54-a3f2-4745-86e0-d1e7a407c43d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:54.054296 systemd[1]: var-lib-kubelet-pods-96a40a54\x2da3f2\x2d4745\x2d86e0\x2dd1e7a407c43d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:24:54.138073 kubelet[2452]: I0208 23:24:54.138022 2452 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138073 kubelet[2452]: I0208 23:24:54.138069 2452 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-xtables-lock\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138073 kubelet[2452]: I0208 23:24:54.138087 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-config-path\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138105 2452 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-lib-modules\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138123 2452 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-bpf-maps\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138138 2452 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-clustermesh-secrets\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138154 2452 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-host-proc-sys-net\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138171 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138190 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-run\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138206 2452 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cni-path\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138393 kubelet[2452]: I0208 23:24:54.138221 2452 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hubble-tls\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138696 kubelet[2452]: I0208 23:24:54.138240 2452 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-etc-cni-netd\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138696 kubelet[2452]: I0208 23:24:54.138257 2452 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-bn52x\" (UniqueName: \"kubernetes.io/projected/96a40a54-a3f2-4745-86e0-d1e7a407c43d-kube-api-access-bn52x\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138696 kubelet[2452]: I0208 23:24:54.138273 2452 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-hostproc\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.138696 kubelet[2452]: I0208 23:24:54.138291 2452 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96a40a54-a3f2-4745-86e0-d1e7a407c43d-cilium-cgroup\") on node \"ci-3510.3.2-a-5bade47376\" DevicePath \"\"" Feb 8 23:24:54.199145 systemd[1]: Removed slice kubepods-burstable-pod96a40a54_a3f2_4745_86e0_d1e7a407c43d.slice. Feb 8 23:24:54.412822 sshd[4419]: Accepted publickey for core from 10.200.12.6 port 40662 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:54.414669 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:54.420173 systemd[1]: Started session-27.scope. Feb 8 23:24:54.420652 systemd-logind[1326]: New session 27 of user core. Feb 8 23:24:54.632242 systemd[1]: var-lib-kubelet-pods-96a40a54\x2da3f2\x2d4745\x2d86e0\x2dd1e7a407c43d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbn52x.mount: Deactivated successfully. Feb 8 23:24:54.797014 kubelet[2452]: I0208 23:24:54.796893 2452 scope.go:115] "RemoveContainer" containerID="2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174" Feb 8 23:24:54.800399 env[1345]: time="2024-02-08T23:24:54.800355506Z" level=info msg="RemoveContainer for \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\"" Feb 8 23:24:54.813138 env[1345]: time="2024-02-08T23:24:54.813092156Z" level=info msg="RemoveContainer for \"2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174\" returns successfully" Feb 8 23:24:54.865519 kubelet[2452]: I0208 23:24:54.865480 2452 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:24:54.865738 kubelet[2452]: E0208 23:24:54.865555 2452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96a40a54-a3f2-4745-86e0-d1e7a407c43d" containerName="mount-cgroup" Feb 8 23:24:54.865738 kubelet[2452]: I0208 23:24:54.865605 2452 memory_manager.go:346] "RemoveStaleState removing state" podUID="96a40a54-a3f2-4745-86e0-d1e7a407c43d" containerName="mount-cgroup" Feb 8 23:24:54.872120 systemd[1]: Created slice kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice. 
Feb 8 23:24:55.043828 kubelet[2452]: I0208 23:24:55.043791 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-cilium-run\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044127 kubelet[2452]: I0208 23:24:55.044109 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-bpf-maps\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044248 kubelet[2452]: I0208 23:24:55.044230 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-hostproc\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044332 kubelet[2452]: I0208 23:24:55.044263 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-etc-cni-netd\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044332 kubelet[2452]: I0208 23:24:55.044298 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a845b6a9-66dd-4453-b8d8-d07796d33a28-clustermesh-secrets\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044332 kubelet[2452]: I0208 23:24:55.044329 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjdg\" (UniqueName: \"kubernetes.io/projected/a845b6a9-66dd-4453-b8d8-d07796d33a28-kube-api-access-vfjdg\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044503 kubelet[2452]: I0208 23:24:55.044357 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-host-proc-sys-kernel\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044503 kubelet[2452]: I0208 23:24:55.044387 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a845b6a9-66dd-4453-b8d8-d07796d33a28-cilium-ipsec-secrets\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044503 kubelet[2452]: I0208 23:24:55.044436 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-host-proc-sys-net\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044503 kubelet[2452]: I0208 23:24:55.044469 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-cilium-cgroup\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.044861 kubelet[2452]: I0208 23:24:55.044834 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-lib-modules\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.045363 kubelet[2452]: I0208 23:24:55.045330 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a845b6a9-66dd-4453-b8d8-d07796d33a28-hubble-tls\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.045600 kubelet[2452]: I0208 23:24:55.045584 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a845b6a9-66dd-4453-b8d8-d07796d33a28-cilium-config-path\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.046129 kubelet[2452]: I0208 23:24:55.046104 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-xtables-lock\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.046235 kubelet[2452]: I0208 23:24:55.046180 2452 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a845b6a9-66dd-4453-b8d8-d07796d33a28-cni-path\") pod \"cilium-lvnvc\" (UID: \"a845b6a9-66dd-4453-b8d8-d07796d33a28\") " pod="kube-system/cilium-lvnvc" Feb 8 23:24:55.476561 env[1345]: time="2024-02-08T23:24:55.476167001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvnvc,Uid:a845b6a9-66dd-4453-b8d8-d07796d33a28,Namespace:kube-system,Attempt:0,}" Feb 8 23:24:55.511998 env[1345]: time="2024-02-08T23:24:55.511920564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:55.512213 env[1345]: time="2024-02-08T23:24:55.511959764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:55.512213 env[1345]: time="2024-02-08T23:24:55.511973363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:55.512361 env[1345]: time="2024-02-08T23:24:55.512257762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40 pid=4477 runtime=io.containerd.runc.v2 Feb 8 23:24:55.524106 systemd[1]: Started cri-containerd-10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40.scope. 
Feb 8 23:24:55.556495 env[1345]: time="2024-02-08T23:24:55.556446692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvnvc,Uid:a845b6a9-66dd-4453-b8d8-d07796d33a28,Namespace:kube-system,Attempt:0,} returns sandbox id \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\"" Feb 8 23:24:55.561626 env[1345]: time="2024-02-08T23:24:55.561593773Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:24:55.604051 env[1345]: time="2024-02-08T23:24:55.603996309Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042\"" Feb 8 23:24:55.606281 env[1345]: time="2024-02-08T23:24:55.605486804Z" level=info msg="StartContainer for \"9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042\"" Feb 8 23:24:55.622926 systemd[1]: Started cri-containerd-9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042.scope. Feb 8 23:24:55.669884 env[1345]: time="2024-02-08T23:24:55.669832756Z" level=info msg="StartContainer for \"9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042\" returns successfully" Feb 8 23:24:55.680632 systemd[1]: cri-containerd-9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042.scope: Deactivated successfully. Feb 8 23:24:55.699366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042-rootfs.mount: Deactivated successfully. Feb 8 23:24:55.730256 env[1345]: time="2024-02-08T23:24:55.730135924Z" level=info msg="shim disconnected" id=9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042 Feb 8 23:24:55.730590 env[1345]: time="2024-02-08T23:24:55.730565523Z" level=warning msg="cleaning up after shim disconnected" id=9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042 namespace=k8s.io Feb 8 23:24:55.730699 env[1345]: time="2024-02-08T23:24:55.730684322Z" level=info msg="cleaning up dead shim" Feb 8 23:24:55.750193 env[1345]: time="2024-02-08T23:24:55.750154747Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4561 runtime=io.containerd.runc.v2\n" Feb 8 23:24:55.804551 env[1345]: time="2024-02-08T23:24:55.804510038Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:24:55.845399 env[1345]: time="2024-02-08T23:24:55.845338781Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd\"" Feb 8 23:24:55.846480 env[1345]: time="2024-02-08T23:24:55.846446577Z" level=info msg="StartContainer for \"b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd\"" Feb 8 23:24:55.879615 systemd[1]: Started cri-containerd-b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd.scope. 
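
This time mount-cgroup (9141d403...) completes: "returns successfully" followed by the scope deactivating, the rootfs unmounting, and the shim disconnecting is the normal exit path for a short-lived init container, not a repeat of the earlier failure, and the kubelet moves straight on to the next init container, apply-sysctl-overwrites. Exited init containers stay visible until garbage collection, e.g. (illustrative, sandbox ID from the log):

  # All containers, running and exited, in the new sandbox:
  crictl ps -a --pod 10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40
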
Feb 8 23:24:55.980798 env[1345]: time="2024-02-08T23:24:55.980684860Z" level=info msg="StartContainer for \"b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd\" returns successfully" Feb 8 23:24:55.992658 systemd[1]: cri-containerd-b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd.scope: Deactivated successfully. Feb 8 23:24:56.028912 env[1345]: time="2024-02-08T23:24:56.028860076Z" level=info msg="shim disconnected" id=b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd Feb 8 23:24:56.029256 env[1345]: time="2024-02-08T23:24:56.029222774Z" level=warning msg="cleaning up after shim disconnected" id=b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd namespace=k8s.io Feb 8 23:24:56.029358 env[1345]: time="2024-02-08T23:24:56.029343374Z" level=info msg="cleaning up dead shim" Feb 8 23:24:56.045219 env[1345]: time="2024-02-08T23:24:56.045177713Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4623 runtime=io.containerd.runc.v2\n" Feb 8 23:24:56.196239 kubelet[2452]: I0208 23:24:56.196188 2452 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=96a40a54-a3f2-4745-86e0-d1e7a407c43d path="/var/lib/kubelet/pods/96a40a54-a3f2-4745-86e0-d1e7a407c43d/volumes" Feb 8 23:24:56.632531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd-rootfs.mount: Deactivated successfully. Feb 8 23:24:56.639873 kubelet[2452]: W0208 23:24:56.639816 2452 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a40a54_a3f2_4745_86e0_d1e7a407c43d.slice/cri-containerd-2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174.scope WatchSource:0}: container "2cb97c13b2fac2ef22328bebe95360c1b074bd83d774f92beaa3299f076ed174" in namespace "k8s.io": not found Feb 8 23:24:56.701889 kubelet[2452]: I0208 23:24:56.701862 2452 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-5bade47376" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:24:56.701813501 +0000 UTC m=+265.518930795 LastTransitionTime:2024-02-08 23:24:56.701813501 +0000 UTC m=+265.518930795 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:24:56.808520 env[1345]: time="2024-02-08T23:24:56.808460893Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:24:56.836757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42336235.mount: Deactivated successfully. Feb 8 23:24:56.845659 env[1345]: time="2024-02-08T23:24:56.845610651Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c\"" Feb 8 23:24:56.846179 env[1345]: time="2024-02-08T23:24:56.846109049Z" level=info msg="StartContainer for \"d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c\"" Feb 8 23:24:56.877582 systemd[1]: Started cri-containerd-d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c.scope. 
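
The "Node became not ready ... cni plugin not initialized" condition is expected here: the first cilium pod never got far enough to install a CNI config, so the runtime network stays down until the new agent is running. The "Failed to process watch event ... not found" warning is cadvisor trying to watch the cgroup of the already-deleted container 2cb97c13..., which is harmless. Illustrative checks (node name from the log):

  # The node condition carries the same kubelet message:
  kubectl get node ci-3510.3.2-a-5bade47376 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'

  # Cilium drops its CNI config here once the agent is up:
  ls /etc/cni/net.d/
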
Feb 8 23:24:56.923293 systemd[1]: cri-containerd-d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c.scope: Deactivated successfully. Feb 8 23:24:56.924433 env[1345]: time="2024-02-08T23:24:56.924364550Z" level=info msg="StartContainer for \"d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c\" returns successfully" Feb 8 23:24:56.953434 env[1345]: time="2024-02-08T23:24:56.953364339Z" level=info msg="shim disconnected" id=d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c Feb 8 23:24:56.953434 env[1345]: time="2024-02-08T23:24:56.953432939Z" level=warning msg="cleaning up after shim disconnected" id=d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c namespace=k8s.io Feb 8 23:24:56.954439 env[1345]: time="2024-02-08T23:24:56.953445939Z" level=info msg="cleaning up dead shim" Feb 8 23:24:56.962835 env[1345]: time="2024-02-08T23:24:56.962799703Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4680 runtime=io.containerd.runc.v2\n" Feb 8 23:24:57.192735 kubelet[2452]: E0208 23:24:57.192552 2452 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-vvd62" podUID=130b720d-2ffa-4d10-8b9b-82871dbd2adb Feb 8 23:24:57.288179 kubelet[2452]: E0208 23:24:57.288128 2452 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:24:57.632614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c-rootfs.mount: Deactivated successfully. Feb 8 23:24:57.813436 env[1345]: time="2024-02-08T23:24:57.813361767Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:24:57.854932 env[1345]: time="2024-02-08T23:24:57.854824909Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29\"" Feb 8 23:24:57.856150 env[1345]: time="2024-02-08T23:24:57.856119004Z" level=info msg="StartContainer for \"1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29\"" Feb 8 23:24:57.877500 systemd[1]: Started cri-containerd-1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29.scope. Feb 8 23:24:57.901931 systemd[1]: cri-containerd-1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29.scope: Deactivated successfully. 
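
mount-bpf-fs (d4743df5...) and clean-cilium-state (1a40a694...) run and exit in turn, while the coredns "Error syncing pod" entries keep repeating until the CNI is initialized. The effect of mount-bpf-fs can be verified on the host (illustrative):

  # mount-bpf-fs mounts the BPF filesystem that holds cilium's pinned maps:
  findmnt -t bpf /sys/fs/bpf
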
Feb 8 23:24:57.906281 env[1345]: time="2024-02-08T23:24:57.906028214Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice/cri-containerd-1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29.scope/memory.events\": no such file or directory" Feb 8 23:24:57.913064 env[1345]: time="2024-02-08T23:24:57.913022188Z" level=info msg="StartContainer for \"1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29\" returns successfully" Feb 8 23:24:57.942148 env[1345]: time="2024-02-08T23:24:57.942098377Z" level=info msg="shim disconnected" id=1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29 Feb 8 23:24:57.942384 env[1345]: time="2024-02-08T23:24:57.942149877Z" level=warning msg="cleaning up after shim disconnected" id=1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29 namespace=k8s.io Feb 8 23:24:57.942384 env[1345]: time="2024-02-08T23:24:57.942161977Z" level=info msg="cleaning up dead shim" Feb 8 23:24:57.951054 env[1345]: time="2024-02-08T23:24:57.951015943Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4739 runtime=io.containerd.runc.v2\n" Feb 8 23:24:58.632916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29-rootfs.mount: Deactivated successfully. Feb 8 23:24:58.829878 env[1345]: time="2024-02-08T23:24:58.829827318Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:24:58.875355 env[1345]: time="2024-02-08T23:24:58.875308346Z" level=info msg="CreateContainer within sandbox \"10334d520e19dbd174d9b5f3223ed1cbd20df871e885e09e1517ab8f652d3a40\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f\"" Feb 8 23:24:58.875844 env[1345]: time="2024-02-08T23:24:58.875805544Z" level=info msg="StartContainer for \"18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f\"" Feb 8 23:24:58.907614 systemd[1]: Started cri-containerd-18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f.scope. 
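
The cgroupsv2 EventChan warning looks like a benign race: clean-cilium-state exited so quickly that its cgroup directory (and its memory.events file) was gone before containerd attached the OOM inotify watch. With the init chain done, the long-running cilium-agent container (18ddb0eb...) is created and started. For a container that is still alive, the watched path exists (path taken from the warning itself):

  ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice/
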
Feb 8 23:24:58.964997 env[1345]: time="2024-02-08T23:24:58.964946207Z" level=info msg="StartContainer for \"18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f\" returns successfully" Feb 8 23:24:59.192372 kubelet[2452]: E0208 23:24:59.192238 2452 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-vvd62" podUID=130b720d-2ffa-4d10-8b9b-82871dbd2adb Feb 8 23:24:59.365445 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:24:59.750706 kubelet[2452]: W0208 23:24:59.750645 2452 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice/cri-containerd-9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042.scope WatchSource:0}: task 9141d403be3d56b0acaeb2079735ce94458012ce152fa2aeb7e399301734a042 not found: not found Feb 8 23:25:01.050479 systemd[1]: run-containerd-runc-k8s.io-18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f-runc.BE4TEG.mount: Deactivated successfully. Feb 8 23:25:01.192383 kubelet[2452]: E0208 23:25:01.192347 2452 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-vvd62" podUID=130b720d-2ffa-4d10-8b9b-82871dbd2adb Feb 8 23:25:02.124808 systemd-networkd[1493]: lxc_health: Link UP Feb 8 23:25:02.148094 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:25:02.147977 systemd-networkd[1493]: lxc_health: Gained carrier Feb 8 23:25:02.858795 kubelet[2452]: W0208 23:25:02.858749 2452 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice/cri-containerd-b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd.scope WatchSource:0}: task b71d9ce4dc6c0cdaa915863878655ac86240fcb0550a657a8c22a8b46999e2dd not found: not found Feb 8 23:25:03.215651 systemd-networkd[1493]: lxc_health: Gained IPv6LL Feb 8 23:25:03.266802 systemd[1]: run-containerd-runc-k8s.io-18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f-runc.htVaar.mount: Deactivated successfully. 
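
With the agent up, the kernel instantiates the IPsec AEAD transform (the "alg: No test for seqiv(rfc4106(gcm(aes)))" line means the crypto self-test registry has no test vector for that template, not that anything failed, and it lines up with the cilium-ipsec-secrets volume mounted earlier), and cilium creates its lxc_health veth, which systemd-networkd reports with carrier and then an IPv6 link-local address. Illustrative health checks (assumes the DaemonSet is named cilium and its image ships the cilium-health binary):

  # Host side of cilium's health-check veth pair:
  ip -d link show lxc_health

  # Connectivity status from inside the agent:
  kubectl -n kube-system exec ds/cilium -- cilium-health status
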
Feb 8 23:25:03.501289 kubelet[2452]: I0208 23:25:03.501156 2452 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lvnvc" podStartSLOduration=9.5011087 pod.CreationTimestamp="2024-02-08 23:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:24:59.834620834 +0000 UTC m=+268.651738128" watchObservedRunningTime="2024-02-08 23:25:03.5011087 +0000 UTC m=+272.318226094" Feb 8 23:25:05.975396 kubelet[2452]: W0208 23:25:05.975345 2452 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice/cri-containerd-d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c.scope WatchSource:0}: task d4743df587986355b096fa5ff4274d166a36fab9b58341b704577112b39b293c not found: not found Feb 8 23:25:09.086783 kubelet[2452]: W0208 23:25:09.086734 2452 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda845b6a9_66dd_4453_b8d8_d07796d33a28.slice/cri-containerd-1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29.scope WatchSource:0}: task 1a40a694f318bed19395b162f7ada5e377d11d321194212cacf11229ce4a3f29 not found: not found Feb 8 23:25:09.832404 systemd[1]: run-containerd-runc-k8s.io-18ddb0ebc22bf8f670530cc9c462266b4969c98a4380b75f816dfa5f49532f6f-runc.I0ZTGh.mount: Deactivated successfully. Feb 8 23:25:09.982950 sshd[4419]: pam_unix(sshd:session): session closed for user core Feb 8 23:25:09.986263 systemd[1]: sshd@24-10.200.8.4:22-10.200.12.6:40662.service: Deactivated successfully. Feb 8 23:25:09.987234 systemd[1]: session-27.scope: Deactivated successfully. Feb 8 23:25:09.987984 systemd-logind[1326]: Session 27 logged out. Waiting for processes to exit. Feb 8 23:25:09.989152 systemd-logind[1326]: Removed session 27. 
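
podStartSLOduration=9.5011087 is exactly the watch-observed running time (23:25:03.50) minus the pod creation timestamp (23:24:54); the zeroed firstStartedPulling/lastFinishedPulling values mean no image pull happened, consistent with ImagePullPolicy:IfNotPresent in the spec dumped earlier. The "task ... not found" watch warnings in between each name an init container whose task had already been reaped, the same benign pattern as before. The same window can be read back from the API (illustrative):

  # Creation timestamp vs. observed start time for the SLO window:
  kubectl -n kube-system get pod cilium-lvnvc -o jsonpath='{.metadata.creationTimestamp} {.status.startTime}{"\n"}'
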
Feb 8 23:25:32.103248 env[1345]: time="2024-02-08T23:25:32.103194083Z" level=info msg="StopPodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\"" Feb 8 23:25:32.103906 env[1345]: time="2024-02-08T23:25:32.103857381Z" level=info msg="TearDown network for sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" successfully" Feb 8 23:25:32.104017 env[1345]: time="2024-02-08T23:25:32.103988680Z" level=info msg="StopPodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" returns successfully" Feb 8 23:25:32.104434 env[1345]: time="2024-02-08T23:25:32.104385779Z" level=info msg="RemovePodSandbox for \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\"" Feb 8 23:25:32.104562 env[1345]: time="2024-02-08T23:25:32.104436879Z" level=info msg="Forcibly stopping sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\"" Feb 8 23:25:32.104562 env[1345]: time="2024-02-08T23:25:32.104519679Z" level=info msg="TearDown network for sandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" successfully" Feb 8 23:25:32.118659 env[1345]: time="2024-02-08T23:25:32.118611933Z" level=info msg="RemovePodSandbox \"4a72c54c2b6e7c6c7542ee08ae42cec18b54a54f1305c6276f875bb127011029\" returns successfully" Feb 8 23:25:32.119070 env[1345]: time="2024-02-08T23:25:32.119039832Z" level=info msg="StopPodSandbox for \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\"" Feb 8 23:25:32.119182 env[1345]: time="2024-02-08T23:25:32.119119332Z" level=info msg="TearDown network for sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" successfully" Feb 8 23:25:32.119182 env[1345]: time="2024-02-08T23:25:32.119158731Z" level=info msg="StopPodSandbox for \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" returns successfully" Feb 8 23:25:32.119454 env[1345]: time="2024-02-08T23:25:32.119427431Z" level=info msg="RemovePodSandbox for \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\"" Feb 8 23:25:32.119550 env[1345]: time="2024-02-08T23:25:32.119460330Z" level=info msg="Forcibly stopping sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\"" Feb 8 23:25:32.119602 env[1345]: time="2024-02-08T23:25:32.119542030Z" level=info msg="TearDown network for sandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" successfully" Feb 8 23:25:32.129507 env[1345]: time="2024-02-08T23:25:32.129469998Z" level=info msg="RemovePodSandbox \"b3856d93033295beb3ca1642b811ed2a643e30f32525e28603544d1300bfc5f3\" returns successfully" Feb 8 23:25:32.129818 env[1345]: time="2024-02-08T23:25:32.129789597Z" level=info msg="StopPodSandbox for \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\"" Feb 8 23:25:32.129910 env[1345]: time="2024-02-08T23:25:32.129865797Z" level=info msg="TearDown network for sandbox \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" successfully" Feb 8 23:25:32.129910 env[1345]: time="2024-02-08T23:25:32.129903597Z" level=info msg="StopPodSandbox for \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" returns successfully" Feb 8 23:25:32.130201 env[1345]: time="2024-02-08T23:25:32.130177096Z" level=info msg="RemovePodSandbox for \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\"" Feb 8 23:25:32.130277 env[1345]: time="2024-02-08T23:25:32.130203196Z" level=info msg="Forcibly stopping sandbox 
\"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\"" Feb 8 23:25:32.130338 env[1345]: time="2024-02-08T23:25:32.130298495Z" level=info msg="TearDown network for sandbox \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" successfully" Feb 8 23:25:32.142544 env[1345]: time="2024-02-08T23:25:32.142504956Z" level=info msg="RemovePodSandbox \"3f397f24cef567673567f557be3f4e72bda401bd4220e12bf1bd36cfba668e49\" returns successfully" Feb 8 23:25:55.385048 systemd[1]: cri-containerd-1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d.scope: Deactivated successfully. Feb 8 23:25:55.385386 systemd[1]: cri-containerd-1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d.scope: Consumed 4.301s CPU time. Feb 8 23:25:55.407618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d-rootfs.mount: Deactivated successfully. Feb 8 23:25:55.425181 env[1345]: time="2024-02-08T23:25:55.425126927Z" level=info msg="shim disconnected" id=1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d Feb 8 23:25:55.425181 env[1345]: time="2024-02-08T23:25:55.425179837Z" level=warning msg="cleaning up after shim disconnected" id=1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d namespace=k8s.io Feb 8 23:25:55.425894 env[1345]: time="2024-02-08T23:25:55.425191117Z" level=info msg="cleaning up dead shim" Feb 8 23:25:55.433672 env[1345]: time="2024-02-08T23:25:55.433634795Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5455 runtime=io.containerd.runc.v2\n" Feb 8 23:25:55.940308 kubelet[2452]: I0208 23:25:55.940216 2452 scope.go:115] "RemoveContainer" containerID="1aed2bb8c6d7356b02062e2040af80cfb385c2b835039cc93764333463342c9d" Feb 8 23:25:55.942626 env[1345]: time="2024-02-08T23:25:55.942579754Z" level=info msg="CreateContainer within sandbox \"b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 8 23:25:55.964527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129308741.mount: Deactivated successfully. Feb 8 23:25:55.976959 env[1345]: time="2024-02-08T23:25:55.976913109Z" level=info msg="CreateContainer within sandbox \"b3037b3c8bd89d69477c7a73f4c0c2b454c81c9480c84a01ac430b61f09c4791\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"61e498b4cd84c55586092859f85ab75877fb9ea3d272221c4f1a762fc7a25394\"" Feb 8 23:25:55.977484 env[1345]: time="2024-02-08T23:25:55.977457878Z" level=info msg="StartContainer for \"61e498b4cd84c55586092859f85ab75877fb9ea3d272221c4f1a762fc7a25394\"" Feb 8 23:25:56.000474 systemd[1]: Started cri-containerd-61e498b4cd84c55586092859f85ab75877fb9ea3d272221c4f1a762fc7a25394.scope. Feb 8 23:25:56.050653 env[1345]: time="2024-02-08T23:25:56.050607194Z" level=info msg="StartContainer for \"61e498b4cd84c55586092859f85ab75877fb9ea3d272221c4f1a762fc7a25394\" returns successfully" Feb 8 23:25:58.874919 kubelet[2452]: E0208 23:25:58.874859 2452 controller.go:189] failed to update lease, error: Put "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-5bade47376?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 8 23:26:01.856205 systemd[1]: cri-containerd-f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6.scope: Deactivated successfully. 
Feb 8 23:26:01.856627 systemd[1]: cri-containerd-f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6.scope: Consumed 1.492s CPU time. Feb 8 23:26:01.884995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6-rootfs.mount: Deactivated successfully. Feb 8 23:26:01.901396 env[1345]: time="2024-02-08T23:26:01.901337008Z" level=info msg="shim disconnected" id=f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6 Feb 8 23:26:01.901396 env[1345]: time="2024-02-08T23:26:01.901394917Z" level=warning msg="cleaning up after shim disconnected" id=f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6 namespace=k8s.io Feb 8 23:26:01.902029 env[1345]: time="2024-02-08T23:26:01.901423772Z" level=info msg="cleaning up dead shim" Feb 8 23:26:01.909750 env[1345]: time="2024-02-08T23:26:01.909705456Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:26:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5517 runtime=io.containerd.runc.v2\n" Feb 8 23:26:01.960352 kubelet[2452]: I0208 23:26:01.959986 2452 scope.go:115] "RemoveContainer" containerID="f407d7912d2703389d77b4e259658c23dac4d6dae811870afeecf25c991a22a6" Feb 8 23:26:01.962243 env[1345]: time="2024-02-08T23:26:01.962192562Z" level=info msg="CreateContainer within sandbox \"5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 8 23:26:01.994301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285776902.mount: Deactivated successfully. Feb 8 23:26:02.009267 env[1345]: time="2024-02-08T23:26:02.009217222Z" level=info msg="CreateContainer within sandbox \"5676cf3ff6408846f4a1ffd44e665965ccdd9b5c4b4428c22315c364c4e04958\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2ed03ac2b3ed5a9a4d10af8f663b2983309b5a594344a82eac4538a354a105bf\"" Feb 8 23:26:02.009716 env[1345]: time="2024-02-08T23:26:02.009684597Z" level=info msg="StartContainer for \"2ed03ac2b3ed5a9a4d10af8f663b2983309b5a594344a82eac4538a354a105bf\"" Feb 8 23:26:02.027629 systemd[1]: Started cri-containerd-2ed03ac2b3ed5a9a4d10af8f663b2983309b5a594344a82eac4538a354a105bf.scope. Feb 8 23:26:02.081309 env[1345]: time="2024-02-08T23:26:02.081250457Z" level=info msg="StartContainer for \"2ed03ac2b3ed5a9a4d10af8f663b2983309b5a594344a82eac4538a354a105bf\" returns successfully" Feb 8 23:26:02.357699 kubelet[2452]: E0208 23:26:02.357437 2452 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:58248->10.200.8.19:2379: read: connection timed out
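
kube-scheduler follows the same pattern: f407d791... exits, the kubelet's RemoveContainer clears it, and CreateContainer/StartContainer bring up 2ed03ac2... as Attempt:1 in the existing sandbox 5676cf3f.... The closing lease error, a read timeout on 10.200.8.4:58248->10.200.8.19:2379, suggests a slow or unreachable etcd at 10.200.8.19:2379 as the common cause behind both control-plane restarts. A last illustrative check (assumes the kubelet runs as kubelet.service on this host):

  # Restart history for the scheduler container:
  crictl ps -a --name kube-scheduler

  # Every failed lease renewal in this window, from the kubelet journal:
  journalctl -u kubelet --since 23:25:50 | grep -i lease
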