Feb 12 19:43:05.030676 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 19:43:05.030708 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:43:05.030722 kernel: BIOS-provided physical RAM map:
Feb 12 19:43:05.030731 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:43:05.030740 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 12 19:43:05.030750 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 12 19:43:05.030764 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 12 19:43:05.030774 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 12 19:43:05.030784 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 12 19:43:05.030794 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 12 19:43:05.030804 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 12 19:43:05.030814 kernel: printk: bootconsole [earlyser0] enabled
Feb 12 19:43:05.030824 kernel: NX (Execute Disable) protection: active
Feb 12 19:43:05.030834 kernel: efi: EFI v2.70 by Microsoft
Feb 12 19:43:05.030850 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 12 19:43:05.030861 kernel: random: crng init done
Feb 12 19:43:05.030871 kernel: SMBIOS 3.1.0 present.
Feb 12 19:43:05.030883 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:43:05.030894 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 12 19:43:05.030904 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 12 19:43:05.030915 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 12 19:43:05.030926 kernel: Hyper-V: Nested features: 0x1e0101
Feb 12 19:43:05.030939 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 12 19:43:05.030950 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 12 19:43:05.030961 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 12 19:43:05.030973 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 12 19:43:05.030985 kernel: tsc: Detected 2593.906 MHz processor
Feb 12 19:43:05.030996 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:43:05.031008 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:43:05.031020 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 12 19:43:05.031031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:43:05.031043 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 12 19:43:05.031056 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 12 19:43:05.031068 kernel: Using GB pages for direct mapping
Feb 12 19:43:05.031079 kernel: Secure boot disabled
Feb 12 19:43:05.031091 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:43:05.031102 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 12 19:43:05.031113 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031123 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031134 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:43:05.031152 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 12 19:43:05.031166 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031178 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031189 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031201 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031213 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031229 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031241 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:43:05.031252 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 12 19:43:05.031264 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 12 19:43:05.031276 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 12 19:43:05.031288 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 12 19:43:05.031300 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 12 19:43:05.031312 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 12 19:43:05.031326 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 12 19:43:05.031338 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 12 19:43:05.031350 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 12 19:43:05.031361 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 12 19:43:05.031372 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:43:05.031383 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:43:05.031395 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 12 19:43:05.031407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 12 19:43:05.031430 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 12 19:43:05.036468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 12 19:43:05.036479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 12 19:43:05.036488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 12 19:43:05.036497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 12 19:43:05.036504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 12 19:43:05.036514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 12 19:43:05.036521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 12 19:43:05.036528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 12 19:43:05.036537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 12 19:43:05.036547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 12 19:43:05.036557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 12 19:43:05.036565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 12 19:43:05.036572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 12 19:43:05.036581 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 12 19:43:05.036589 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 12 19:43:05.036599 kernel: Zone ranges:
Feb 12 19:43:05.036607 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:43:05.036613 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 12 19:43:05.036625 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:43:05.036633 kernel: Movable zone start for each node
Feb 12 19:43:05.036641 kernel: Early memory node ranges
Feb 12 19:43:05.036650 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 12 19:43:05.036657 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 12 19:43:05.036663 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 12 19:43:05.036674 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:43:05.036681 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 12 19:43:05.036691 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:43:05.036700 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 12 19:43:05.036707 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 12 19:43:05.036717 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 12 19:43:05.036724 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 12 19:43:05.036735 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:43:05.036742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:43:05.036749 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:43:05.036758 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 12 19:43:05.036766 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:43:05.036777 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 12 19:43:05.036785 kernel: Booting paravirtualized kernel on Hyper-V
Feb 12 19:43:05.036792 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:43:05.036801 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:43:05.036809 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:43:05.036817 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:43:05.036826 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:43:05.036833 kernel: Hyper-V: PV spinlocks enabled
Feb 12 19:43:05.036840 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 19:43:05.036852 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 12 19:43:05.036859 kernel: Policy zone: Normal
Feb 12 19:43:05.036870 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:43:05.036879 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:43:05.036885 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 12 19:43:05.036895 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:43:05.036903 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:43:05.036913 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 12 19:43:05.036922 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:43:05.036929 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:43:05.036946 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:43:05.036959 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:43:05.036967 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:43:05.036975 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:43:05.036985 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:43:05.036993 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:43:05.037003 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:43:05.037011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:43:05.037018 kernel: Using NULL legacy PIC
Feb 12 19:43:05.037030 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 12 19:43:05.037038 kernel: Console: colour dummy device 80x25
Feb 12 19:43:05.037048 kernel: printk: console [tty1] enabled
Feb 12 19:43:05.037055 kernel: printk: console [ttyS0] enabled
Feb 12 19:43:05.037063 kernel: printk: bootconsole [earlyser0] disabled
Feb 12 19:43:05.037074 kernel: ACPI: Core revision 20210730
Feb 12 19:43:05.037083 kernel: Failed to register legacy timer interrupt
Feb 12 19:43:05.037092 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:43:05.037100 kernel: Hyper-V: Using IPI hypercalls
Feb 12 19:43:05.037107 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 12 19:43:05.037117 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 19:43:05.037125 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 19:43:05.037135 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:43:05.037142 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:43:05.037150 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:43:05.037162 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:43:05.037171 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 19:43:05.037179 kernel: RETBleed: Vulnerable
Feb 12 19:43:05.037187 kernel: Speculative Store Bypass: Vulnerable
Feb 12 19:43:05.037195 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:43:05.037204 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:43:05.037212 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 19:43:05.037221 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:43:05.037228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:43:05.037235 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:43:05.037247 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 19:43:05.037255 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 19:43:05.037265 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 19:43:05.037272 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:43:05.037280 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 12 19:43:05.037289 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 12 19:43:05.037298 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 12 19:43:05.037307 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 12 19:43:05.037314 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:43:05.037322 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:43:05.037331 kernel: LSM: Security Framework initializing
Feb 12 19:43:05.037338 kernel: SELinux: Initializing.
Feb 12 19:43:05.037351 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:43:05.037358 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:43:05.037366 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 19:43:05.037376 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 19:43:05.037385 kernel: signal: max sigframe size: 3632
Feb 12 19:43:05.037394 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:43:05.037401 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:43:05.037409 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:43:05.037424 kernel: x86: Booting SMP configuration:
Feb 12 19:43:05.037434 kernel: .... node #0, CPUs: #1
Feb 12 19:43:05.037445 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 12 19:43:05.037454 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 19:43:05.037463 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:43:05.037472 kernel: smpboot: Max logical packages: 1
Feb 12 19:43:05.037481 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 12 19:43:05.037488 kernel: devtmpfs: initialized
Feb 12 19:43:05.037496 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:43:05.037506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 12 19:43:05.037517 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:43:05.037526 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:43:05.037535 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:43:05.037546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:43:05.037554 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:43:05.037561 kernel: audit: type=2000 audit(1707766983.023:1): state=initialized audit_enabled=0 res=1
Feb 12 19:43:05.037568 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:43:05.037575 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:43:05.037582 kernel: cpuidle: using governor menu
Feb 12 19:43:05.037592 kernel: ACPI: bus type PCI registered
Feb 12 19:43:05.037604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:43:05.037618 kernel: dca service started, version 1.12.1
Feb 12 19:43:05.037629 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:43:05.037636 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:43:05.037644 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:43:05.037656 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:43:05.037672 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:43:05.037687 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:43:05.037703 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:43:05.037714 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:43:05.037721 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:43:05.037728 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:43:05.037743 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:43:05.037759 kernel: ACPI: Interpreter enabled
Feb 12 19:43:05.037772 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:43:05.037788 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:43:05.037801 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:43:05.037810 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 12 19:43:05.037820 kernel: iommu: Default domain type: Translated
Feb 12 19:43:05.037837 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:43:05.037851 kernel: vgaarb: loaded
Feb 12 19:43:05.037861 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:43:05.037868 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:43:05.037876 kernel: PTP clock support registered
Feb 12 19:43:05.037891 kernel: Registered efivars operations
Feb 12 19:43:05.037906 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:43:05.037919 kernel: PCI: System does not support PCI
Feb 12 19:43:05.037929 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 12 19:43:05.037937 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:43:05.037950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:43:05.037965 kernel: pnp: PnP ACPI init
Feb 12 19:43:05.037977 kernel: pnp: PnP ACPI: found 3 devices
Feb 12 19:43:05.037986 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:43:05.037993 kernel: NET: Registered PF_INET protocol family
Feb 12 19:43:05.038006 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:43:05.038027 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 12 19:43:05.038041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:43:05.038052 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:43:05.038059 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 12 19:43:05.038068 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 12 19:43:05.038083 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:43:05.038098 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:43:05.038110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:43:05.038117 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:43:05.038128 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:43:05.038143 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 12 19:43:05.038159 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 12 19:43:05.038174 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:43:05.038188 kernel: Initialise system trusted keyrings
Feb 12 19:43:05.038201 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 12 19:43:05.038215 kernel: Key type asymmetric registered
Feb 12 19:43:05.038224 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:43:05.038231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:43:05.038245 kernel: io scheduler mq-deadline registered
Feb 12 19:43:05.038258 kernel: io scheduler kyber registered
Feb 12 19:43:05.038272 kernel: io scheduler bfq registered
Feb 12 19:43:05.038286 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:43:05.038300 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:43:05.038315 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:43:05.038325 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 12 19:43:05.038332 kernel: i8042: PNP: No PS/2 controller found.
Feb 12 19:43:05.038548 kernel: rtc_cmos 00:02: registered as rtc0
Feb 12 19:43:05.038653 kernel: rtc_cmos 00:02: setting system clock to 2024-02-12T19:43:04 UTC (1707766984)
Feb 12 19:43:05.038754 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 12 19:43:05.038763 kernel: fail to initialize ptp_kvm
Feb 12 19:43:05.038776 kernel: intel_pstate: CPU model not supported
Feb 12 19:43:05.038789 kernel: efifb: probing for efifb
Feb 12 19:43:05.038797 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:43:05.038805 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:43:05.038818 kernel: efifb: scrolling: redraw
Feb 12 19:43:05.038836 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:43:05.038843 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:43:05.038851 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:43:05.038865 kernel: pstore: Registered efi as persistent store backend
Feb 12 19:43:05.038879 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:43:05.038892 kernel: Segment Routing with IPv6
Feb 12 19:43:05.038900 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:43:05.038907 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:43:05.038920 kernel: Key type dns_resolver registered
Feb 12 19:43:05.038938 kernel: IPI shorthand broadcast: enabled
Feb 12 19:43:05.038950 kernel: sched_clock: Marking stable (740774200, 20542900)->(957215300, -195898200)
Feb 12 19:43:05.038957 kernel: registered taskstats version 1
Feb 12 19:43:05.038965 kernel: Loading compiled-in X.509 certificates
Feb 12 19:43:05.038980 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 19:43:05.038992 kernel: Key type .fscrypt registered
Feb 12 19:43:05.039000 kernel: Key type fscrypt-provisioning registered
Feb 12 19:43:05.039008 kernel: pstore: Using crash dump compression: deflate
Feb 12 19:43:05.039026 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:43:05.039039 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:43:05.039046 kernel: ima: No architecture policies found
Feb 12 19:43:05.039055 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:43:05.039070 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:43:05.039083 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:43:05.039094 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:43:05.039106 kernel: Run /init as init process
Feb 12 19:43:05.039117 kernel: with arguments:
Feb 12 19:43:05.039128 kernel: /init
Feb 12 19:43:05.039141 kernel: with environment:
Feb 12 19:43:05.039151 kernel: HOME=/
Feb 12 19:43:05.039161 kernel: TERM=linux
Feb 12 19:43:05.039173 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:43:05.039188 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:43:05.039201 systemd[1]: Detected virtualization microsoft.
Feb 12 19:43:05.039215 systemd[1]: Detected architecture x86-64.
Feb 12 19:43:05.039231 systemd[1]: Running in initrd.
Feb 12 19:43:05.039244 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:43:05.039258 systemd[1]: Hostname set to .
Feb 12 19:43:05.039273 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:43:05.039286 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:43:05.039301 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:43:05.039315 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:43:05.039329 systemd[1]: Reached target paths.target.
Feb 12 19:43:05.039344 systemd[1]: Reached target slices.target.
Feb 12 19:43:05.039360 systemd[1]: Reached target swap.target.
Feb 12 19:43:05.039374 systemd[1]: Reached target timers.target.
Feb 12 19:43:05.039389 systemd[1]: Listening on iscsid.socket.
Feb 12 19:43:05.039403 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:43:05.039416 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:43:05.039438 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:43:05.039453 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:43:05.039470 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:43:05.039485 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:43:05.039499 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:43:05.039514 systemd[1]: Reached target sockets.target.
Feb 12 19:43:05.039529 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:43:05.039543 systemd[1]: Finished network-cleanup.service.
Feb 12 19:43:05.039558 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:43:05.039572 systemd[1]: Starting systemd-journald.service...
Feb 12 19:43:05.039587 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:43:05.039604 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:43:05.039619 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:43:05.039633 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:43:05.039652 systemd-journald[183]: Journal started
Feb 12 19:43:05.039722 systemd-journald[183]: Runtime Journal (/run/log/journal/b44cc552b5f74f8db4ffb434f57a4592) is 8.0M, max 159.0M, 151.0M free.
Feb 12 19:43:05.021338 systemd-modules-load[184]: Inserted module 'overlay'
Feb 12 19:43:05.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.069918 kernel: audit: type=1130 audit(1707766985.048:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.069974 systemd[1]: Started systemd-journald.service.
Feb 12 19:43:05.069990 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:43:05.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.083887 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:43:05.090478 kernel: audit: type=1130 audit(1707766985.071:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.090501 kernel: Bridge firewalling registered
Feb 12 19:43:05.088235 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:43:05.090362 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 12 19:43:05.097268 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:43:05.106113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:43:05.162862 kernel: audit: type=1130 audit(1707766985.087:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.162885 kernel: audit: type=1130 audit(1707766985.094:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.162895 kernel: SCSI subsystem initialized
Feb 12 19:43:05.162910 kernel: audit: type=1130 audit(1707766985.139:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:05.129686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:43:05.134963 systemd-resolved[185]: Positive Trust Anchors:
Feb 12 19:43:05.134974 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:43:05.135025 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:43:05.138577 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 12 19:43:05.150264 systemd[1]: Started systemd-resolved.service.
Feb 12 19:43:05.155053 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:43:05.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.202639 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:43:05.241542 kernel: audit: type=1130 audit(1707766985.154:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.241569 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:43:05.241581 kernel: audit: type=1130 audit(1707766985.216:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.241594 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:43:05.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.217586 systemd[1]: Starting dracut-cmdline.service... 
Feb 12 19:43:05.253749 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:43:05.253783 dracut-cmdline[200]: dracut-dracut-053 Feb 12 19:43:05.253783 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 12 19:43:05.253783 dracut-cmdline[200]: BEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:43:05.270229 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 12 19:43:05.272054 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:43:05.290699 kernel: audit: type=1130 audit(1707766985.277:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.289045 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:43:05.300261 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:43:05.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.316443 kernel: audit: type=1130 audit(1707766985.303:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:05.340451 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:43:05.353448 kernel: iscsi: registered transport (tcp) Feb 12 19:43:05.378766 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:43:05.378840 kernel: QLogic iSCSI HBA Driver Feb 12 19:43:05.408543 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:43:05.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.414534 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:43:05.470448 kernel: raid6: avx512x4 gen() 18171 MB/s Feb 12 19:43:05.490433 kernel: raid6: avx512x4 xor() 8104 MB/s Feb 12 19:43:05.510437 kernel: raid6: avx512x2 gen() 18309 MB/s Feb 12 19:43:05.530436 kernel: raid6: avx512x2 xor() 27498 MB/s Feb 12 19:43:05.550434 kernel: raid6: avx512x1 gen() 18169 MB/s Feb 12 19:43:05.570432 kernel: raid6: avx512x1 xor() 25009 MB/s Feb 12 19:43:05.590436 kernel: raid6: avx2x4 gen() 18229 MB/s Feb 12 19:43:05.610431 kernel: raid6: avx2x4 xor() 7612 MB/s Feb 12 19:43:05.630430 kernel: raid6: avx2x2 gen() 18251 MB/s Feb 12 19:43:05.650440 kernel: raid6: avx2x2 xor() 20774 MB/s Feb 12 19:43:05.690440 kernel: raid6: avx2x1 gen() 13857 MB/s Feb 12 19:43:05.710435 kernel: raid6: avx2x1 xor() 16783 MB/s Feb 12 19:43:05.730430 kernel: raid6: sse2x4 gen() 10847 MB/s Feb 12 19:43:05.750439 kernel: raid6: sse2x4 xor() 6704 MB/s Feb 12 19:43:05.770439 kernel: raid6: sse2x2 gen() 11406 MB/s Feb 12 19:43:05.790434 kernel: raid6: sse2x2 xor() 7130 MB/s Feb 12 19:43:05.810433 kernel: raid6: sse2x1 gen() 11096 MB/s Feb 12 19:43:05.834045 kernel: raid6: sse2x1 xor() 5389 MB/s Feb 12 19:43:05.834080 kernel: raid6: using algorithm avx512x2 gen() 18309 MB/s Feb 12 19:43:05.834092 kernel: raid6: .... 
xor() 27498 MB/s, rmw enabled Feb 12 19:43:05.837680 kernel: raid6: using avx512x2 recovery algorithm Feb 12 19:43:05.856450 kernel: xor: automatically using best checksumming function avx Feb 12 19:43:05.951452 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:43:05.959738 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:43:05.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.963000 audit: BPF prog-id=7 op=LOAD Feb 12 19:43:05.963000 audit: BPF prog-id=8 op=LOAD Feb 12 19:43:05.964640 systemd[1]: Starting systemd-udevd.service... Feb 12 19:43:05.979551 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 19:43:05.986277 systemd[1]: Started systemd-udevd.service. Feb 12 19:43:05.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:05.991621 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:43:06.008030 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Feb 12 19:43:06.038754 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:43:06.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.043947 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:43:06.077822 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:43:06.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:06.124781 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:43:06.163890 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:43:06.163954 kernel: AES CTR mode by8 optimization enabled Feb 12 19:43:06.166972 kernel: hv_vmbus: Vmbus version:5.2 Feb 12 19:43:06.183782 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 12 19:43:06.193441 kernel: hv_vmbus: registering driver hv_netvsc Feb 12 19:43:06.208330 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 12 19:43:06.208381 kernel: hv_vmbus: registering driver hv_storvsc Feb 12 19:43:06.216448 kernel: scsi host1: storvsc_host_t Feb 12 19:43:06.220436 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 12 19:43:06.220482 kernel: scsi host0: storvsc_host_t Feb 12 19:43:06.228457 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 12 19:43:06.234437 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 12 19:43:06.256440 kernel: hv_vmbus: registering driver hid_hyperv Feb 12 19:43:06.256498 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 12 19:43:06.267454 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 12 19:43:06.276037 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 12 19:43:06.276316 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 19:43:06.281438 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 12 19:43:06.281643 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 12 19:43:06.281771 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 12 19:43:06.288707 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 12 19:43:06.288904 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 12 19:43:06.289033 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and 
FUA Feb 12 19:43:06.298437 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:43:06.302999 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 12 19:43:06.410127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:43:06.413035 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (442) Feb 12 19:43:06.435444 kernel: hv_netvsc 000d3ab0-4e84-000d-3ab0-4e84000d3ab0 eth0: VF slot 1 added Feb 12 19:43:06.438949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:43:06.453438 kernel: hv_vmbus: registering driver hv_pci Feb 12 19:43:06.462189 kernel: hv_pci 0fff8d09-7950-4213-ac82-7d6d58614231: PCI VMBus probing: Using version 0x10004 Feb 12 19:43:06.462401 kernel: hv_pci 0fff8d09-7950-4213-ac82-7d6d58614231: PCI host bridge to bus 7950:00 Feb 12 19:43:06.477119 kernel: pci_bus 7950:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 12 19:43:06.477395 kernel: pci_bus 7950:00: No busn resource found for root bus, will use [bus 00-ff] Feb 12 19:43:06.484511 kernel: pci 7950:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 12 19:43:06.484733 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:43:06.501275 kernel: pci 7950:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 12 19:43:06.507356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:43:06.513976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:43:06.525558 systemd[1]: Starting disk-uuid.service... 
Feb 12 19:43:06.536456 kernel: pci 7950:00:02.0: enabling Extended Tags Feb 12 19:43:06.553880 kernel: pci 7950:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7950:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 12 19:43:06.554167 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:43:06.554181 kernel: pci_bus 7950:00: busn_res: [bus 00-ff] end is updated to 00 Feb 12 19:43:06.568982 kernel: pci 7950:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 12 19:43:06.569236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:43:06.779444 kernel: mlx5_core 7950:00:02.0: firmware version: 14.30.1350 Feb 12 19:43:06.939440 kernel: mlx5_core 7950:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 12 19:43:07.082232 kernel: mlx5_core 7950:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 12 19:43:07.082507 kernel: mlx5_core 7950:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing Feb 12 19:43:07.093740 kernel: hv_netvsc 000d3ab0-4e84-000d-3ab0-4e84000d3ab0 eth0: VF registering: eth1 Feb 12 19:43:07.093916 kernel: mlx5_core 7950:00:02.0 eth1: joined to eth0 Feb 12 19:43:07.105438 kernel: mlx5_core 7950:00:02.0 enP31056s1: renamed from eth1 Feb 12 19:43:07.564439 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:43:07.564750 disk-uuid[553]: The operation has completed successfully. Feb 12 19:43:07.633166 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:43:07.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.633276 systemd[1]: Finished disk-uuid.service. 
Feb 12 19:43:07.649227 systemd[1]: Starting verity-setup.service... Feb 12 19:43:07.680447 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:43:07.765006 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:43:07.770842 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:43:07.774995 systemd[1]: Finished verity-setup.service. Feb 12 19:43:07.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.849445 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:43:07.849764 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:43:07.851651 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:43:07.886148 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:43:07.886183 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:43:07.886201 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:43:07.852416 systemd[1]: Starting ignition-setup.service... Feb 12 19:43:07.857867 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:43:07.912481 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:43:07.937531 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:43:07.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.942000 audit: BPF prog-id=9 op=LOAD Feb 12 19:43:07.943633 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:43:07.965961 systemd-networkd[809]: lo: Link UP Feb 12 19:43:07.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.965971 systemd-networkd[809]: lo: Gained carrier Feb 12 19:43:07.966932 systemd-networkd[809]: Enumeration completed Feb 12 19:43:07.967575 systemd[1]: Started systemd-networkd.service. Feb 12 19:43:07.970246 systemd[1]: Reached target network.target. Feb 12 19:43:07.971056 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:43:07.976773 systemd[1]: Starting iscsiuio.service... Feb 12 19:43:07.990364 systemd[1]: Finished ignition-setup.service. Feb 12 19:43:07.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.995356 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:43:08.002302 systemd[1]: Started iscsiuio.service. Feb 12 19:43:08.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.004952 systemd[1]: Starting iscsid.service... Feb 12 19:43:08.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.015438 iscsid[816]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:43:08.015438 iscsid[816]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 19:43:08.015438 iscsid[816]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:43:08.015438 iscsid[816]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:43:08.015438 iscsid[816]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:43:08.015438 iscsid[816]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:43:08.015438 iscsid[816]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:43:08.061837 kernel: mlx5_core 7950:00:02.0 enP31056s1: Link up Feb 12 19:43:08.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.011301 systemd[1]: Started iscsid.service. Feb 12 19:43:08.014805 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:43:08.041228 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:43:08.046880 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:43:08.047117 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:43:08.047499 systemd[1]: Reached target remote-fs.target. Feb 12 19:43:08.048765 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:43:08.077391 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:43:08.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 12 19:43:08.116445 kernel: hv_netvsc 000d3ab0-4e84-000d-3ab0-4e84000d3ab0 eth0: Data path switched to VF: enP31056s1 Feb 12 19:43:08.117619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:43:08.116781 systemd-networkd[809]: enP31056s1: Link UP Feb 12 19:43:08.116896 systemd-networkd[809]: eth0: Link UP Feb 12 19:43:08.121049 systemd-networkd[809]: eth0: Gained carrier Feb 12 19:43:08.127077 systemd-networkd[809]: enP31056s1: Gained carrier Feb 12 19:43:08.152540 systemd-networkd[809]: eth0: DHCPv4 address 10.200.8.31/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:43:08.828072 ignition[814]: Ignition 2.14.0 Feb 12 19:43:08.828087 ignition[814]: Stage: fetch-offline Feb 12 19:43:08.828157 ignition[814]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:43:08.828205 ignition[814]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:43:08.870955 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:43:08.873847 ignition[814]: parsed url from cmdline: "" Feb 12 19:43:08.873855 ignition[814]: no config URL provided Feb 12 19:43:08.873865 ignition[814]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:43:08.873886 ignition[814]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:43:08.873895 ignition[814]: failed to fetch config: resource requires networking Feb 12 19:43:08.882586 ignition[814]: Ignition finished successfully Feb 12 19:43:08.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.883481 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:43:08.887631 systemd[1]: Starting ignition-fetch.service... 
Feb 12 19:43:08.897970 ignition[835]: Ignition 2.14.0 Feb 12 19:43:08.897981 ignition[835]: Stage: fetch Feb 12 19:43:08.898119 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:43:08.898162 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:43:08.902891 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:43:08.903048 ignition[835]: parsed url from cmdline: "" Feb 12 19:43:08.903051 ignition[835]: no config URL provided Feb 12 19:43:08.903056 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:43:08.903065 ignition[835]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:43:08.903100 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 12 19:43:08.929777 ignition[835]: GET result: OK Feb 12 19:43:08.929933 ignition[835]: config has been read from IMDS userdata Feb 12 19:43:08.929986 ignition[835]: parsing config with SHA512: 74b2de495d6497e679202e01eeb4fa2af78c513ef8cc2672b483750d490d600121027a7479bf7dbd85ce904d33752cbe0f3bfd1aaa49e003cb231624d49a8b9f Feb 12 19:43:08.965909 unknown[835]: fetched base config from "system" Feb 12 19:43:08.966470 unknown[835]: fetched base config from "system" Feb 12 19:43:08.967644 ignition[835]: fetch: fetch complete Feb 12 19:43:08.966477 unknown[835]: fetched user config from "azure" Feb 12 19:43:08.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.967653 ignition[835]: fetch: fetch passed Feb 12 19:43:08.972388 systemd[1]: Finished ignition-fetch.service. 
Feb 12 19:43:08.967706 ignition[835]: Ignition finished successfully Feb 12 19:43:08.975577 systemd[1]: Starting ignition-kargs.service... Feb 12 19:43:08.987319 ignition[841]: Ignition 2.14.0 Feb 12 19:43:08.987329 ignition[841]: Stage: kargs Feb 12 19:43:08.987488 ignition[841]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:43:08.987519 ignition[841]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:43:08.994668 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:43:09.000886 ignition[841]: kargs: kargs passed Feb 12 19:43:09.000952 ignition[841]: Ignition finished successfully Feb 12 19:43:09.002818 systemd[1]: Finished ignition-kargs.service. Feb 12 19:43:09.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.009043 systemd[1]: Starting ignition-disks.service... Feb 12 19:43:09.017255 ignition[847]: Ignition 2.14.0 Feb 12 19:43:09.017265 ignition[847]: Stage: disks Feb 12 19:43:09.017408 ignition[847]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:43:09.017458 ignition[847]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:43:09.022178 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:43:09.024947 ignition[847]: disks: disks passed Feb 12 19:43:09.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.025838 systemd[1]: Finished ignition-disks.service. 
Feb 12 19:43:09.025007 ignition[847]: Ignition finished successfully Feb 12 19:43:09.028348 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:43:09.031986 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:43:09.033992 systemd[1]: Reached target local-fs.target. Feb 12 19:43:09.036091 systemd[1]: Reached target sysinit.target. Feb 12 19:43:09.040899 systemd[1]: Reached target basic.target. Feb 12 19:43:09.043843 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:43:09.070475 systemd-fsck[855]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 12 19:43:09.076960 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:43:09.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.082117 systemd[1]: Mounting sysroot.mount... Feb 12 19:43:09.100477 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:43:09.100822 systemd[1]: Mounted sysroot.mount. Feb 12 19:43:09.102889 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:43:09.113691 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:43:09.118936 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:43:09.124347 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:43:09.124393 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:43:09.133929 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:43:09.147210 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:43:09.150217 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:43:09.166818 initrd-setup-root[870]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:43:09.169548 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (865) Feb 12 19:43:09.181593 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:43:09.181663 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:43:09.181683 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:43:09.181699 initrd-setup-root[878]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:43:09.185357 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:43:09.189335 initrd-setup-root[906]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:43:09.197234 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:43:09.345376 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:43:09.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.352440 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:43:09.352465 kernel: audit: type=1130 audit(1707766989.347:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.353987 systemd[1]: Starting ignition-mount.service... Feb 12 19:43:09.366574 systemd[1]: Starting sysroot-boot.service... Feb 12 19:43:09.373063 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:43:09.373205 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 12 19:43:09.393477 ignition[933]: INFO : Ignition 2.14.0 Feb 12 19:43:09.395938 ignition[933]: INFO : Stage: mount Feb 12 19:43:09.397892 ignition[933]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:43:09.400912 systemd[1]: Finished sysroot-boot.service. Feb 12 19:43:09.404558 ignition[933]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:43:09.419671 kernel: audit: type=1130 audit(1707766989.404:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.419988 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:43:09.424891 ignition[933]: INFO : mount: mount passed Feb 12 19:43:09.428944 ignition[933]: INFO : Ignition finished successfully Feb 12 19:43:09.425753 systemd[1]: Finished ignition-mount.service. Feb 12 19:43:09.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.444442 kernel: audit: type=1130 audit(1707766989.428:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:09.545549 coreos-metadata[864]: Feb 12 19:43:09.545 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:43:09.550410 coreos-metadata[864]: Feb 12 19:43:09.550 INFO Fetch successful Feb 12 19:43:09.554633 systemd-networkd[809]: eth0: Gained IPv6LL Feb 12 19:43:09.586299 coreos-metadata[864]: Feb 12 19:43:09.586 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:43:09.604146 coreos-metadata[864]: Feb 12 19:43:09.604 INFO Fetch successful Feb 12 19:43:09.609581 coreos-metadata[864]: Feb 12 19:43:09.609 INFO wrote hostname ci-3510.3.2-a-2665495451 to /sysroot/etc/hostname Feb 12 19:43:09.611303 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:43:09.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.637442 systemd[1]: Starting ignition-files.service... Feb 12 19:43:09.655085 kernel: audit: type=1130 audit(1707766989.635:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.648242 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:43:09.668594 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (943) Feb 12 19:43:09.668654 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:43:09.668668 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:43:09.675237 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:43:09.679953 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 12 19:43:09.693844 ignition[962]: INFO : Ignition 2.14.0
Feb 12 19:43:09.693844 ignition[962]: INFO : Stage: files
Feb 12 19:43:09.697695 ignition[962]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:43:09.697695 ignition[962]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:43:09.706202 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:43:09.706202 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:43:09.711860 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:43:09.711860 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:43:09.719906 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:43:09.723328 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:43:09.726701 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:43:09.726701 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:43:09.726701 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 19:43:09.723771 unknown[962]: wrote ssh authorized keys file for user: core
Feb 12 19:43:10.114929 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:43:10.231257 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:43:10.236172 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:43:10.236172 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:43:10.889621 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:43:11.349847 ignition[962]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 19:43:11.357693 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:43:11.357693 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:43:11.357693 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:43:11.357693 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:43:11.357693 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 19:43:11.914437 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 19:43:12.543933 ignition[962]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 19:43:12.551458 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:43:12.551458 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:43:12.560154 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 19:43:13.140127 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:43:13.618214 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:43:13.623400 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:43:13.623400 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:43:13.830004 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:43:14.068701 ignition[962]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 19:43:14.075936 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:43:14.075936 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:43:14.075936 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:43:14.202812 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:43:14.769149 ignition[962]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 19:43:14.776572 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:43:14.776572 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:43:14.776572 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:43:14.776572 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:43:14.776572 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 19:43:14.899695 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 12 19:43:15.094285 ignition[962]: DEBUG : files: createFilesystemsFiles: createFiles: op(b): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 19:43:15.101725 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:43:15.101725 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:43:15.112310 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:43:15.116310 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:43:15.120587 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:43:15.124907 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:43:15.129146 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:43:15.133414 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:43:15.133414 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:43:15.674127 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:43:15.680148 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:43:15.680148 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:43:15.680148 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:43:15.704972 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2503775944"
Feb 12 19:43:15.704972 ignition[962]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2503775944": device or resource busy
Feb 12 19:43:15.704972 ignition[962]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2503775944", trying btrfs: device or resource busy
Feb 12 19:43:15.704972 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2503775944"
Feb 12 19:43:15.728318 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (967)
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2503775944"
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem2503775944"
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem2503775944"
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2022016502"
Feb 12 19:43:15.728350 ignition[962]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2022016502": device or resource busy
Feb 12 19:43:15.728350 ignition[962]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2022016502", trying btrfs: device or resource busy
Feb 12 19:43:15.728350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2022016502"
Feb 12 19:43:15.795518 kernel: audit: type=1130 audit(1707766995.748:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.795639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2022016502"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem2022016502"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem2022016502"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(19): [started] processing unit "waagent.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(19): [finished] processing unit "waagent.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1a): [started] processing unit "nvidia.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1a): [finished] processing unit "nvidia.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1b): [started] processing unit "containerd.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1b): op(1c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1b): op(1c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1b): [finished] processing unit "containerd.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1d): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1d): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1f): [started] processing unit "prepare-critools.service"
Feb 12 19:43:15.795639 ignition[962]: INFO : files: op(1f): op(20): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:43:15.912508 kernel: audit: type=1130 audit(1707766995.843:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.912539 kernel: audit: type=1130 audit(1707766995.858:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.912551 kernel: audit: type=1131 audit(1707766995.858:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.912561 kernel: audit: type=1130 audit(1707766995.912:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.734030 systemd[1]: mnt-oem2022016502.mount: Deactivated successfully.
Feb 12 19:43:15.939333 kernel: audit: type=1131 audit(1707766995.912:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(1f): [finished] processing unit "prepare-critools.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(21): [started] processing unit "prepare-helm.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(21): [finished] processing unit "prepare-helm.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(23): [started] setting preset to enabled for "waagent.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(23): [finished] setting preset to enabled for "waagent.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(24): [started] setting preset to enabled for "nvidia.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(24): [finished] setting preset to enabled for "nvidia.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(25): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(26): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(27): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: op(27): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:43:15.939596 ignition[962]: INFO : files: files passed
Feb 12 19:43:15.939596 ignition[962]: INFO : Ignition finished successfully
Feb 12 19:43:15.741882 systemd[1]: Finished ignition-files.service.
Feb 12 19:43:16.012186 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:43:15.749495 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:43:15.770349 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:43:15.832036 systemd[1]: Starting ignition-quench.service...
Feb 12 19:43:16.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.839083 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:43:16.043885 kernel: audit: type=1130 audit(1707766996.024:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:15.844251 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:43:15.844352 systemd[1]: Finished ignition-quench.service.
Feb 12 19:43:15.858962 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:43:15.892256 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:43:15.910028 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:43:15.910137 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:43:15.912980 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:43:15.935094 systemd[1]: Reached target initrd.target.
Feb 12 19:43:15.939282 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:43:15.940370 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:43:16.022752 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:43:16.026207 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:43:16.074191 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:43:16.076490 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:43:16.080139 systemd[1]: Stopped target timers.target.
Feb 12 19:43:16.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.082296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:43:16.102482 kernel: audit: type=1131 audit(1707766996.085:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.082456 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:43:16.097660 systemd[1]: Stopped target initrd.target.
Feb 12 19:43:16.102641 systemd[1]: Stopped target basic.target.
Feb 12 19:43:16.106327 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:43:16.110133 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:43:16.114898 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:43:16.119439 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:43:16.123273 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:43:16.128211 systemd[1]: Stopped target sysinit.target.
Feb 12 19:43:16.132248 systemd[1]: Stopped target local-fs.target.
Feb 12 19:43:16.136010 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:43:16.139616 systemd[1]: Stopped target swap.target.
Feb 12 19:43:16.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.143031 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:43:16.163322 kernel: audit: type=1131 audit(1707766996.146:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.143181 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:43:16.157583 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:43:16.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.163396 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:43:16.183019 kernel: audit: type=1131 audit(1707766996.166:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.163580 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:43:16.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.178337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:43:16.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.178527 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:43:16.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.182986 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:43:16.183119 systemd[1]: Stopped ignition-files.service.
Feb 12 19:43:16.186830 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 12 19:43:16.186973 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 12 19:43:16.194267 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:43:16.207332 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:43:16.218222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:43:16.220586 ignition[1001]: INFO : Ignition 2.14.0
Feb 12 19:43:16.220586 ignition[1001]: INFO : Stage: umount
Feb 12 19:43:16.219314 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:43:16.227300 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:43:16.227300 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:43:16.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.237264 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:43:16.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.237445 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:43:16.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.247919 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:43:16.248648 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:43:16.248748 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:43:16.260484 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:43:16.266702 ignition[1001]: INFO : umount: umount passed
Feb 12 19:43:16.268732 ignition[1001]: INFO : Ignition finished successfully
Feb 12 19:43:16.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.268430 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:43:16.268518 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:43:16.276892 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:43:16.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.276996 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:43:16.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.281084 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:43:16.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.281137 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:43:16.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.284577 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:43:16.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.284627 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:43:16.288777 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 19:43:16.288829 systemd[1]: Stopped ignition-fetch.service.
Feb 12 19:43:16.290649 systemd[1]: Stopped target network.target.
Feb 12 19:43:16.294003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:43:16.294064 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:43:16.296044 systemd[1]: Stopped target paths.target.
Feb 12 19:43:16.297689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:43:16.304454 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:43:16.307854 systemd[1]: Stopped target slices.target.
Feb 12 19:43:16.309599 systemd[1]: Stopped target sockets.target.
Feb 12 19:43:16.313031 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:43:16.313070 systemd[1]: Closed iscsid.socket.
Feb 12 19:43:16.320727 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:43:16.320771 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:43:16.336666 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:43:16.336749 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:43:16.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.342515 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:43:16.344916 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:43:16.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.349343 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:43:16.353438 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:43:16.354469 systemd-networkd[809]: eth0: DHCPv6 lease lost
Feb 12 19:43:16.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.355688 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:43:16.357669 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:43:16.365947 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:43:16.368722 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:43:16.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.372000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:43:16.372000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:43:16.373159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:43:16.373205 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:43:16.380746 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:43:16.385845 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:43:16.385922 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:43:16.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.392428 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:43:16.394638 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:43:16.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.398254 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:43:16.398313 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:43:16.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.404761 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:43:16.409470 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:43:16.412871 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:43:16.415372 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:43:16.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.420050 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:43:16.420131 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:43:16.422439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:43:16.424813 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:43:16.428812 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:43:16.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.431263 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:43:16.437091 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:43:16.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.437154 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:43:16.442931 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:43:16.444608 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:43:16.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.451478 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:43:16.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.453687 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 19:43:16.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.453752 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 19:43:16.456175 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:43:16.456226 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:43:16.458518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:43:16.458560 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:43:16.463007 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:43:16.463105 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:43:16.497438 kernel: hv_netvsc 000d3ab0-4e84-000d-3ab0-4e84000d3ab0 eth0: Data path switched from VF: enP31056s1
Feb 12 19:43:16.515563 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:43:16.515658 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:43:16.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:16.521952 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:43:16.527144 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:43:16.537082 systemd[1]: Switching root.
Feb 12 19:43:16.539000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:43:16.539000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:43:16.541000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:43:16.541000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:43:16.541000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:43:16.560767 iscsid[816]: iscsid shutting down.
Feb 12 19:43:16.562793 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:43:16.562888 systemd-journald[183]: Journal stopped
Feb 12 19:43:22.160768 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:43:22.160806 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:43:22.160822 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:43:22.160831 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:43:22.160842 kernel: SELinux: policy capability open_perms=1
Feb 12 19:43:22.160857 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:43:22.160876 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:43:22.160894 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:43:22.160909 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:43:22.160923 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:43:22.160933 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:43:22.160945 systemd[1]: Successfully loaded SELinux policy in 121.865ms.
Feb 12 19:43:22.160963 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.239ms.
Feb 12 19:43:22.160979 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:43:22.160998 systemd[1]: Detected virtualization microsoft.
Feb 12 19:43:22.161010 systemd[1]: Detected architecture x86-64.
Feb 12 19:43:22.161022 systemd[1]: Detected first boot.
Feb 12 19:43:22.161037 systemd[1]: Hostname set to .
Feb 12 19:43:22.161049 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:43:22.161065 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:43:22.161075 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:43:22.161087 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:43:22.161099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:43:22.161110 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:43:22.161120 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:43:22.161129 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:43:22.161141 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:43:22.161151 systemd[1]: Created slice system-getty.slice.
Feb 12 19:43:22.161161 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:43:22.161170 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:43:22.161179 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:43:22.161188 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:43:22.161199 systemd[1]: Created slice user.slice.
Feb 12 19:43:22.161211 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:43:22.161221 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:43:22.161232 systemd[1]: Set up automount boot.automount.
Feb 12 19:43:22.161241 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:43:22.161252 systemd[1]: Reached target integritysetup.target.
Feb 12 19:43:22.161262 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:43:22.161272 systemd[1]: Reached target remote-fs.target.
Feb 12 19:43:22.161287 systemd[1]: Reached target slices.target.
Feb 12 19:43:22.161300 systemd[1]: Reached target swap.target.
Feb 12 19:43:22.161311 systemd[1]: Reached target torcx.target.
Feb 12 19:43:22.161328 systemd[1]: Reached target veritysetup.target.
Feb 12 19:43:22.161340 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:43:22.161349 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:43:22.161358 kernel: kauditd_printk_skb: 49 callbacks suppressed
Feb 12 19:43:22.161367 kernel: audit: type=1400 audit(1707767001.857:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:43:22.161378 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:43:22.161391 kernel: audit: type=1335 audit(1707767001.857:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 19:43:22.161401 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:43:22.161413 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:43:22.162742 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:43:22.162760 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:43:22.162773 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:43:22.162790 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:43:22.162805 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:43:22.162817 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:43:22.162831 systemd[1]: Mounting media.mount...
Feb 12 19:43:22.162844 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:43:22.162856 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:43:22.162869 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:43:22.162882 systemd[1]: Mounting tmp.mount...
Feb 12 19:43:22.162894 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:43:22.162905 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:43:22.162917 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:43:22.162931 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:43:22.162944 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:43:22.162956 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:43:22.162968 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:43:22.162981 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:43:22.162996 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:43:22.163008 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:43:22.163022 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 12 19:43:22.163037 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 12 19:43:22.163051 systemd[1]: Starting systemd-journald.service...
Feb 12 19:43:22.163065 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:43:22.163078 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:43:22.163092 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:43:22.163107 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:43:22.163125 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:43:22.163139 kernel: fuse: init (API version 7.34)
Feb 12 19:43:22.163152 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:43:22.163166 kernel: audit: type=1305 audit(1707767002.145:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:43:22.163180 kernel: loop: module loaded
Feb 12 19:43:22.163206 systemd-journald[1166]: Journal started
Feb 12 19:43:22.163282 systemd-journald[1166]: Runtime Journal (/run/log/journal/0e7b1e7d345940668bce49ecc30df2e8) is 8.0M, max 159.0M, 151.0M free.
Feb 12 19:43:21.857000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:43:21.857000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 19:43:22.145000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:43:22.195451 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:43:22.195517 kernel: audit: type=1300 audit(1707767002.145:92): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd18648000 a2=4000 a3=7ffd1864809c items=0 ppid=1 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.195543 systemd[1]: Started systemd-journald.service.
Feb 12 19:43:22.145000 audit[1166]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd18648000 a2=4000 a3=7ffd1864809c items=0 ppid=1 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:22.195159 systemd[1]: Mounted media.mount.
Feb 12 19:43:22.198031 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:43:22.145000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:43:22.201901 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:43:22.204291 systemd[1]: Mounted tmp.mount.
Feb 12 19:43:22.210919 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:43:22.238463 kernel: audit: type=1327 audit(1707767002.145:92): proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:43:22.238554 kernel: audit: type=1130 audit(1707767002.193:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.238579 kernel: audit: type=1130 audit(1707767002.224:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.225774 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:43:22.226003 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:43:22.241107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:43:22.241344 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:43:22.243994 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:43:22.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.245595 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:43:22.262437 kernel: audit: type=1130 audit(1707767002.240:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.262296 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:43:22.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.265033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:43:22.265201 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:43:22.290269 kernel: audit: type=1131 audit(1707767002.240:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.290328 kernel: audit: type=1130 audit(1707767002.243:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.290530 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 19:43:22.290762 systemd[1]: Finished modprobe@fuse.service.
Feb 12 19:43:22.295311 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:43:22.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.295568 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:43:22.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.298232 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:43:22.301337 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:43:22.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.304534 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:43:22.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.307245 systemd[1]: Reached target network-pre.target.
Feb 12 19:43:22.311184 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:43:22.314953 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:43:22.321828 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:43:22.323870 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:43:22.327726 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:43:22.334581 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:43:22.338902 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:43:22.341518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:43:22.345385 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:43:22.349216 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:43:22.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.369921 systemd-journald[1166]: Time spent on flushing to /var/log/journal/0e7b1e7d345940668bce49ecc30df2e8 is 39.454ms for 1117 entries.
Feb 12 19:43:22.369921 systemd-journald[1166]: System Journal (/var/log/journal/0e7b1e7d345940668bce49ecc30df2e8) is 8.0M, max 2.6G, 2.6G free.
Feb 12 19:43:22.438387 systemd-journald[1166]: Received client request to flush runtime journal.
Feb 12 19:43:22.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.355794 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:43:22.361437 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:43:22.441350 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 19:43:22.363810 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:43:22.367608 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:43:22.387947 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:43:22.390451 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:43:22.409234 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:43:22.439737 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:43:22.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:22.577104 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:43:22.581163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:43:22.741000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:43:22.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:23.002600 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:43:23.007147 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:43:23.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:23.027821 systemd-udevd[1219]: Using default interface naming scheme 'v252'.
Feb 12 19:43:23.098060 systemd[1]: Started systemd-udevd.service.
Feb 12 19:43:23.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:23.107054 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:43:23.135374 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:43:23.149795 systemd[1]: Found device dev-ttyS0.device.
Feb 12 19:43:23.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:23.212244 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:43:23.244447 kernel: hv_vmbus: registering driver hyperv_fb
Feb 12 19:43:23.256852 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 12 19:43:23.256961 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 12 19:43:23.263164 kernel: Console: switching to colour dummy device 80x25
Feb 12 19:43:23.273787 kernel: hv_utils: Registering HyperV Utility Driver
Feb 12 19:43:23.273910 kernel: hv_vmbus: registering driver hv_utils
Feb 12 19:43:23.273968 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:43:23.284838 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 19:43:23.284934 kernel: hv_utils: Heartbeat IC version 3.0
Feb 12 19:43:23.284964 kernel: hv_utils: Shutdown IC version 3.2
Feb 12 19:43:23.284988 kernel: hv_utils: TimeSync IC version 4.0
Feb 12 19:43:23.248000 audit[1226]: AVC avc: denied { confidentiality } for pid=1226 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 19:43:24.169982 kernel: hv_vmbus: registering driver hv_balloon
Feb 12 19:43:24.170072 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 12 19:43:23.248000 audit[1226]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5624dc182690 a1=f884 a2=7f6e070bfbc5 a3=5 items=12 ppid=1219 pid=1226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:23.248000 audit: CWD cwd="/"
Feb 12 19:43:23.248000 audit: PATH item=0 name=(null) inode=237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=1 name=(null) inode=15034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=2 name=(null) inode=15034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=3 name=(null) inode=15035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=4 name=(null) inode=15034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=5 name=(null) inode=15036 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=6 name=(null) inode=15034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=7 name=(null) inode=15037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=8 name=(null) inode=15034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=9 name=(null) inode=15038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=10 name=(null) inode=15034 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PATH item=11 name=(null) inode=15039 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:43:23.248000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 19:43:24.212233 systemd-networkd[1235]: lo: Link UP
Feb 12 19:43:24.212244 systemd-networkd[1235]: lo: Gained carrier
Feb 12 19:43:24.212877 systemd-networkd[1235]: Enumeration completed
Feb 12 19:43:24.213030 systemd[1]: Started systemd-networkd.service.
Feb 12 19:43:24.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:24.217286 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:43:24.223120 systemd-networkd[1235]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:43:24.285742 kernel: mlx5_core 7950:00:02.0 enP31056s1: Link up
Feb 12 19:43:24.337229 kernel: hv_netvsc 000d3ab0-4e84-000d-3ab0-4e84000d3ab0 eth0: Data path switched to VF: enP31056s1
Feb 12 19:43:24.345572 systemd-networkd[1235]: enP31056s1: Link UP
Feb 12 19:43:24.346268 systemd-networkd[1235]: eth0: Link UP
Feb 12 19:43:24.346356 systemd-networkd[1235]: eth0: Gained carrier
Feb 12 19:43:24.359185 systemd-networkd[1235]: enP31056s1: Gained carrier
Feb 12 19:43:24.377868 systemd-networkd[1235]: eth0: DHCPv4 address 10.200.8.31/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 12 19:43:24.406727 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1240)
Feb 12 19:43:24.445250 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 12 19:43:24.462731 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 12 19:43:24.496246 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:43:24.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:24.500931 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:43:24.581538 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:43:24.608009 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:43:24.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:24.610765 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:43:24.614960 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:43:24.621863 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:43:24.667078 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:43:24.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:24.669395 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:43:24.671448 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:43:24.671482 systemd[1]: Reached target local-fs.target.
Feb 12 19:43:24.673310 systemd[1]: Reached target machines.target.
Feb 12 19:43:24.676960 systemd[1]: Starting ldconfig.service...
Feb 12 19:43:24.679494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:43:24.679590 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:43:24.681520 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:43:24.685077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:43:24.689325 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:43:24.691959 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:43:24.692054 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:43:24.693667 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 19:43:24.704249 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1302 (bootctl)
Feb 12 19:43:24.705782 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:43:24.797551 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 19:43:25.350073 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 19:43:25.493005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:43:25.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:25.498251 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 19:43:25.973988 systemd-networkd[1235]: eth0: Gained IPv6LL
Feb 12 19:43:25.979723 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:43:25.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.260989 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:43:26.261764 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:43:26.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.349698 systemd-fsck[1311]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:43:26.349698 systemd-fsck[1311]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 12 19:43:26.352019 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:43:26.356653 systemd[1]: Mounting boot.mount...
Feb 12 19:43:26.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.396312 systemd[1]: Mounted boot.mount.
Feb 12 19:43:26.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.413794 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:43:26.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.493849 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:43:26.498125 systemd[1]: Starting audit-rules.service...
Feb 12 19:43:26.501850 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:43:26.509864 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:43:26.514847 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:43:26.519634 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:43:26.527553 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:43:26.533264 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:43:26.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.536422 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:43:26.550000 audit[1333]: SYSTEM_BOOT pid=1333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.553932 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:43:26.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.588670 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:43:26.655756 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:43:26.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:26.658716 systemd[1]: Reached target time-set.target.
Feb 12 19:43:26.667696 systemd-resolved[1329]: Positive Trust Anchors:
Feb 12 19:43:26.667729 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:43:26.667772 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:43:26.669654 augenrules[1348]: No rules
Feb 12 19:43:26.668000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:43:26.668000 audit[1348]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9edd6040 a2=420 a3=0 items=0 ppid=1323 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:43:26.668000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:43:26.670516 systemd[1]: Finished audit-rules.service.
Feb 12 19:43:26.700327 systemd-resolved[1329]: Using system hostname 'ci-3510.3.2-a-2665495451'.
Feb 12 19:43:26.702072 systemd[1]: Started systemd-resolved.service.
Feb 12 19:43:26.702167 systemd-timesyncd[1330]: Contacted time server 85.91.1.180:123 (0.flatcar.pool.ntp.org).
Feb 12 19:43:26.702211 systemd-timesyncd[1330]: Initial clock synchronization to Mon 2024-02-12 19:43:26.702302 UTC.
Feb 12 19:43:26.705337 systemd[1]: Reached target network.target.
Feb 12 19:43:26.707782 systemd[1]: Reached target network-online.target.
Feb 12 19:43:26.710185 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:43:29.160901 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:43:29.174509 systemd[1]: Finished ldconfig.service.
Feb 12 19:43:29.179160 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:43:29.188893 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:43:29.191399 systemd[1]: Reached target sysinit.target.
Feb 12 19:43:29.193494 systemd[1]: Started motdgen.path.
Feb 12 19:43:29.195245 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:43:29.197922 systemd[1]: Started logrotate.timer.
Feb 12 19:43:29.200067 systemd[1]: Started mdadm.timer.
Feb 12 19:43:29.201654 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:43:29.203884 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:43:29.204045 systemd[1]: Reached target paths.target.
Feb 12 19:43:29.205933 systemd[1]: Reached target timers.target.
Feb 12 19:43:29.211052 systemd[1]: Listening on dbus.socket.
Feb 12 19:43:29.214233 systemd[1]: Starting docker.socket...
Feb 12 19:43:29.221870 systemd[1]: Listening on sshd.socket.
Feb 12 19:43:29.223890 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:43:29.224358 systemd[1]: Listening on docker.socket.
Feb 12 19:43:29.226197 systemd[1]: Reached target sockets.target.
Feb 12 19:43:29.228026 systemd[1]: Reached target basic.target.
Feb 12 19:43:29.229888 systemd[1]: System is tainted: cgroupsv1
Feb 12 19:43:29.229949 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:43:29.229977 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:43:29.231104 systemd[1]: Starting containerd.service...
Feb 12 19:43:29.234460 systemd[1]: Starting dbus.service...
Feb 12 19:43:29.237863 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:43:29.241620 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:43:29.246279 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:43:29.247888 systemd[1]: Starting motdgen.service...
Feb 12 19:43:29.252033 systemd[1]: Started nvidia.service.
Feb 12 19:43:29.255798 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:43:29.259489 systemd[1]: Starting prepare-critools.service...
Feb 12 19:43:29.263202 systemd[1]: Starting prepare-helm.service...
Feb 12 19:43:29.268114 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:43:29.282638 jq[1363]: false
Feb 12 19:43:29.279054 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:43:29.286085 systemd[1]: Starting systemd-logind.service...
Feb 12 19:43:29.290360 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:43:29.290466 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:43:29.344647 jq[1386]: true
Feb 12 19:43:29.292926 systemd[1]: Starting update-engine.service...
Feb 12 19:43:29.296439 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:43:29.303141 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:43:29.303462 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:43:29.322686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda1
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda2
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda3
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found usr
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda4
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda6
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda7
Feb 12 19:43:29.356544 extend-filesystems[1364]: Found sda9
Feb 12 19:43:29.356544 extend-filesystems[1364]: Checking size of /dev/sda9
Feb 12 19:43:29.323023 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:43:29.364344 dbus-daemon[1361]: [system] SELinux support is enabled
Feb 12 19:43:29.347282 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:43:29.405921 jq[1393]: true
Feb 12 19:43:29.347580 systemd[1]: Finished motdgen.service.
Feb 12 19:43:29.365871 systemd[1]: Started dbus.service.
Feb 12 19:43:29.372367 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:43:29.372450 systemd[1]: Reached target system-config.target.
Feb 12 19:43:29.376772 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:43:29.376798 systemd[1]: Reached target user-config.target.
Feb 12 19:43:29.413121 extend-filesystems[1364]: Old size kept for /dev/sda9
Feb 12 19:43:29.437072 extend-filesystems[1364]: Found sr0
Feb 12 19:43:29.414821 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:43:29.415059 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:43:29.492288 env[1398]: time="2024-02-12T19:43:29.492227191Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:43:29.502900 bash[1428]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:43:29.503851 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:43:29.523731 tar[1388]: ./
Feb 12 19:43:29.523731 tar[1388]: ./macvlan
Feb 12 19:43:29.533495 tar[1389]: crictl
Feb 12 19:43:29.534283 tar[1390]: linux-amd64/helm
Feb 12 19:43:29.581973 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 19:43:29.602246 env[1398]: time="2024-02-12T19:43:29.602190030Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:43:29.605971 env[1398]: time="2024-02-12T19:43:29.605939979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:43:29.609792 env[1398]: time="2024-02-12T19:43:29.609749029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:43:29.609988 env[1398]: time="2024-02-12T19:43:29.609970732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:43:29.610509 env[1398]: time="2024-02-12T19:43:29.610479838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:43:29.610617 env[1398]: time="2024-02-12T19:43:29.610602940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:43:29.610769 env[1398]: time="2024-02-12T19:43:29.610748642Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:43:29.610848 env[1398]: time="2024-02-12T19:43:29.610834343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:43:29.611051 env[1398]: time="2024-02-12T19:43:29.610983345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:43:29.611772 env[1398]: time="2024-02-12T19:43:29.611747255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:43:29.612543 env[1398]: time="2024-02-12T19:43:29.612510965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:43:29.612831 systemd-logind[1384]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 19:43:29.613760 env[1398]: time="2024-02-12T19:43:29.613737081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:43:29.613926 env[1398]: time="2024-02-12T19:43:29.613907683Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:43:29.615922 systemd-logind[1384]: New seat seat0.
Feb 12 19:43:29.618157 systemd[1]: Started systemd-logind.service.
Feb 12 19:43:29.621550 env[1398]: time="2024-02-12T19:43:29.621424382Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639341516Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639415817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639435017Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639491218Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639513718Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639596119Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639618920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639639120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639668020Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639687021Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639729121Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639748921Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.639911823Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:43:29.641421 env[1398]: time="2024-02-12T19:43:29.640005425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640612733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640661333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640683834Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640768635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640848036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640875636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640892036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640907836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640924437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640954237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.640983737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.641006538Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.641184240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.641203840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642011 env[1398]: time="2024-02-12T19:43:29.641233441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.642522 env[1398]: time="2024-02-12T19:43:29.641263541Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:43:29.642522 env[1398]: time="2024-02-12T19:43:29.641283741Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:43:29.642522 env[1398]: time="2024-02-12T19:43:29.641297542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:43:29.642522 env[1398]: time="2024-02-12T19:43:29.641318542Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:43:29.642522 env[1398]: time="2024-02-12T19:43:29.641368643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.643014064Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.643126166Z" level=info msg="Connect containerd service"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.643167566Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.643951876Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.644088978Z" level=info msg="Start subscribing containerd event"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.644138679Z" level=info msg="Start recovering state"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.644201680Z" level=info msg="Start event monitor"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.644215580Z" level=info msg="Start snapshots syncer"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.644226780Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:43:29.644523 env[1398]: time="2024-02-12T19:43:29.644236780Z" level=info msg="Start streaming server"
Feb 12 19:43:29.654695 env[1398]: time="2024-02-12T19:43:29.644827088Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:43:29.654695 env[1398]: time="2024-02-12T19:43:29.644914589Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:43:29.669318 systemd[1]: Started containerd.service.
Feb 12 19:43:29.672190 update_engine[1385]: I0212 19:43:29.670799 1385 main.cc:92] Flatcar Update Engine starting
Feb 12 19:43:29.686255 systemd[1]: Started update-engine.service.
Feb 12 19:43:29.694165 update_engine[1385]: I0212 19:43:29.686332 1385 update_check_scheduler.cc:74] Next update check in 5m46s
Feb 12 19:43:29.691459 systemd[1]: Started locksmithd.service.
Feb 12 19:43:29.698830 env[1398]: time="2024-02-12T19:43:29.698791694Z" level=info msg="containerd successfully booted in 0.219372s"
Feb 12 19:43:29.703866 tar[1388]: ./static
Feb 12 19:43:29.797278 tar[1388]: ./vlan
Feb 12 19:43:29.902206 tar[1388]: ./portmap
Feb 12 19:43:29.979960 tar[1388]: ./host-local
Feb 12 19:43:30.051536 tar[1388]: ./vrf
Feb 12 19:43:30.127721 tar[1388]: ./bridge
Feb 12 19:43:30.217352 tar[1388]: ./tuning
Feb 12 19:43:30.286388 tar[1388]: ./firewall
Feb 12 19:43:30.378904 tar[1388]: ./host-device
Feb 12 19:43:30.458476 tar[1388]: ./sbr
Feb 12 19:43:30.535907 tar[1388]: ./loopback
Feb 12 19:43:30.604029 tar[1388]: ./dhcp
Feb 12 19:43:30.655427 tar[1390]: linux-amd64/LICENSE
Feb 12 19:43:30.662401 tar[1390]: linux-amd64/README.md
Feb 12 19:43:30.669043 systemd[1]: Finished prepare-helm.service.
Feb 12 19:43:30.693304 sshd_keygen[1408]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:43:30.707601 systemd[1]: Finished prepare-critools.service.
Feb 12 19:43:30.740343 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:43:30.744843 systemd[1]: Starting issuegen.service...
Feb 12 19:43:30.749945 systemd[1]: Started waagent.service.
Feb 12 19:43:30.752864 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:43:30.753151 systemd[1]: Finished issuegen.service.
Feb 12 19:43:30.762810 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:43:30.772035 tar[1388]: ./ptp
Feb 12 19:43:30.775459 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:43:30.785324 systemd[1]: Started getty@tty1.service.
Feb 12 19:43:30.789309 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 19:43:30.793343 systemd[1]: Reached target getty.target.
Feb 12 19:43:30.833939 tar[1388]: ./ipvlan
Feb 12 19:43:30.867890 tar[1388]: ./bandwidth
Feb 12 19:43:30.923929 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:43:30.927133 systemd[1]: Reached target multi-user.target.
Feb 12 19:43:30.931685 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:43:30.941492 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:43:30.941730 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:43:30.946604 systemd[1]: Startup finished in 643ms (firmware) + 7.711s (loader) + 13.480s (kernel) + 12.784s (userspace) = 34.620s.
Feb 12 19:43:31.081873 login[1511]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 12 19:43:31.082448 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 12 19:43:31.153195 systemd[1]: Created slice user-500.slice.
Feb 12 19:43:31.154966 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:43:31.165023 systemd-logind[1384]: New session 1 of user core.
Feb 12 19:43:31.169537 systemd-logind[1384]: New session 2 of user core.
Feb 12 19:43:31.173900 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:43:31.177562 systemd[1]: Starting user@500.service...
Feb 12 19:43:31.187855 (systemd)[1530]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:31.210574 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:43:31.303040 systemd[1530]: Queued start job for default target default.target. Feb 12 19:43:31.303347 systemd[1530]: Reached target paths.target. Feb 12 19:43:31.303369 systemd[1530]: Reached target sockets.target. Feb 12 19:43:31.303387 systemd[1530]: Reached target timers.target. Feb 12 19:43:31.303402 systemd[1530]: Reached target basic.target. Feb 12 19:43:31.303461 systemd[1530]: Reached target default.target. Feb 12 19:43:31.303494 systemd[1530]: Startup finished in 109ms. Feb 12 19:43:31.303570 systemd[1]: Started user@500.service. Feb 12 19:43:31.304960 systemd[1]: Started session-1.scope. Feb 12 19:43:31.306200 systemd[1]: Started session-2.scope. Feb 12 19:43:32.877981 waagent[1503]: 2024-02-12T19:43:32.877870Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:43:32.882559 waagent[1503]: 2024-02-12T19:43:32.882464Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:43:32.885111 waagent[1503]: 2024-02-12T19:43:32.885029Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:43:32.887571 waagent[1503]: 2024-02-12T19:43:32.887488Z INFO Daemon Daemon Run daemon Feb 12 19:43:32.890216 waagent[1503]: 2024-02-12T19:43:32.889825Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:43:32.902893 waagent[1503]: 2024-02-12T19:43:32.902766Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 12 19:43:32.910095 waagent[1503]: 2024-02-12T19:43:32.909969Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.911599Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.912365Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.913748Z INFO Daemon Daemon Activate resource disk Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.914446Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.922370Z INFO Daemon Daemon Found device: None Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.923244Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.924227Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.926463Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.927224Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.937143Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.940141Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.941514Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:43:32.952050 waagent[1503]: 2024-02-12T19:43:32.942272Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:43:32.988392 waagent[1503]: 2024-02-12T19:43:32.988213Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:43:33.023272 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:43:33.032203 waagent[1503]: 2024-02-12T19:43:33.032069Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:43:33.035483 waagent[1503]: 2024-02-12T19:43:33.035399Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:43:33.038697 waagent[1503]: 2024-02-12T19:43:33.038623Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 12 19:43:33.041811 waagent[1503]: 2024-02-12T19:43:33.041741Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:43:33.044603 waagent[1503]: 2024-02-12T19:43:33.044529Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:43:33.047284 waagent[1503]: 2024-02-12T19:43:33.047215Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:43:33.081458 waagent[1503]: 2024-02-12T19:43:33.081382Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:43:33.088700 waagent[1503]: 2024-02-12T19:43:33.083182Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:43:33.088700 waagent[1503]: 2024-02-12T19:43:33.083902Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:43:33.263864 waagent[1503]: 2024-02-12T19:43:33.263638Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:43:33.276746 waagent[1503]: 2024-02-12T19:43:33.276661Z INFO Daemon Daemon Forcing an update of the goal state.. 
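The goal-state traffic above goes to the Azure WireServer at 168.63.129.16, using the wire protocol version the daemon negotiates (2012-11-30 in this log). A minimal sketch of how such a goal-state request could be assembled; the `/machine/?comp=goalstate` path and `x-ms-version` header name are assumptions modeled on the public WALinuxAgent sources, since the log itself only shows the endpoint and version:

```python
import urllib.request

WIRESERVER = "168.63.129.16"      # "Wire server endpoint" from the log above
PROTOCOL_VERSION = "2012-11-30"   # "Wire protocol version" negotiated above

def build_goal_state_request(endpoint: str = WIRESERVER,
                             version: str = PROTOCOL_VERSION):
    """Return (url, headers) for a WireServer goal-state fetch.

    Path and header are hypothetical reconstructions, not taken from this log.
    """
    url = f"http://{endpoint}/machine/?comp=goalstate"
    headers = {"x-ms-version": version}
    return url, headers

if __name__ == "__main__":
    url, headers = build_goal_state_request()
    req = urllib.request.Request(url, headers=headers)
    # urllib.request.urlopen(req)  # would perform the actual fetch on-VM
    print(url)
```

The response is the XML goal state whose incarnation, certificates, and eTag values appear in the subsequent "Fetch goal state completed" entries.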
Feb 12 19:43:33.279567 waagent[1503]: 2024-02-12T19:43:33.279503Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:43:33.351819 waagent[1503]: 2024-02-12T19:43:33.351662Z INFO Daemon Daemon Found private key matching thumbprint 968AF7D34C8666843574C27BDEA51509744B33D8 Feb 12 19:43:33.355806 waagent[1503]: 2024-02-12T19:43:33.355726Z INFO Daemon Daemon Certificate with thumbprint 072AB5C0C6488C73A352633D06A5F32704C30332 has no matching private key. Feb 12 19:43:33.360131 waagent[1503]: 2024-02-12T19:43:33.360059Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:43:33.385336 waagent[1503]: 2024-02-12T19:43:33.385264Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: cfc023fa-e4fc-4a18-85c5-2ec91505f0c3 New eTag: 2154492762990159908] Feb 12 19:43:33.390563 waagent[1503]: 2024-02-12T19:43:33.390489Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:43:33.403231 waagent[1503]: 2024-02-12T19:43:33.403161Z INFO Daemon Daemon Starting provisioning Feb 12 19:43:33.405688 waagent[1503]: 2024-02-12T19:43:33.405613Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:43:33.408295 waagent[1503]: 2024-02-12T19:43:33.408226Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-2665495451] Feb 12 19:43:33.413864 waagent[1503]: 2024-02-12T19:43:33.413757Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-2665495451] Feb 12 19:43:33.417304 waagent[1503]: 2024-02-12T19:43:33.417229Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:43:33.420406 waagent[1503]: 2024-02-12T19:43:33.420341Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:43:33.433474 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:43:33.433799 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:43:33.433871 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:43:33.434108 systemd[1]: Stopping systemd-networkd.service... 
Feb 12 19:43:33.439752 systemd-networkd[1235]: eth0: DHCPv6 lease lost Feb 12 19:43:33.441251 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:43:33.441556 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:43:33.444452 systemd[1]: Starting systemd-networkd.service... Feb 12 19:43:33.481360 systemd-networkd[1576]: enP31056s1: Link UP Feb 12 19:43:33.481370 systemd-networkd[1576]: enP31056s1: Gained carrier Feb 12 19:43:33.482759 systemd-networkd[1576]: eth0: Link UP Feb 12 19:43:33.482768 systemd-networkd[1576]: eth0: Gained carrier Feb 12 19:43:33.483194 systemd-networkd[1576]: lo: Link UP Feb 12 19:43:33.483203 systemd-networkd[1576]: lo: Gained carrier Feb 12 19:43:33.483514 systemd-networkd[1576]: eth0: Gained IPv6LL Feb 12 19:43:33.483804 systemd-networkd[1576]: Enumeration completed Feb 12 19:43:33.483958 systemd[1]: Started systemd-networkd.service. Feb 12 19:43:33.486634 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:43:33.487096 systemd-networkd[1576]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:43:33.490140 waagent[1503]: 2024-02-12T19:43:33.489744Z INFO Daemon Daemon Create user account if not exists Feb 12 19:43:33.491796 waagent[1503]: 2024-02-12T19:43:33.491727Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:43:33.492588 waagent[1503]: 2024-02-12T19:43:33.492537Z INFO Daemon Daemon Configure sudoer Feb 12 19:43:33.494131 waagent[1503]: 2024-02-12T19:43:33.494075Z INFO Daemon Daemon Configure sshd Feb 12 19:43:33.495082 waagent[1503]: 2024-02-12T19:43:33.495032Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:43:33.527853 systemd-networkd[1576]: eth0: DHCPv4 address 10.200.8.31/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:43:33.531153 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 12 19:43:34.767341 waagent[1503]: 2024-02-12T19:43:34.767227Z INFO Daemon Daemon Provisioning complete Feb 12 19:43:34.791905 waagent[1503]: 2024-02-12T19:43:34.791821Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:43:34.795540 waagent[1503]: 2024-02-12T19:43:34.795438Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:43:34.801462 waagent[1503]: 2024-02-12T19:43:34.801356Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:43:35.076685 waagent[1586]: 2024-02-12T19:43:35.076503Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:43:35.077462 waagent[1586]: 2024-02-12T19:43:35.077391Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:43:35.077611 waagent[1586]: 2024-02-12T19:43:35.077559Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:43:35.089036 waagent[1586]: 2024-02-12T19:43:35.088949Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:43:35.089234 waagent[1586]: 2024-02-12T19:43:35.089175Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:43:35.155082 waagent[1586]: 2024-02-12T19:43:35.154943Z INFO ExtHandler ExtHandler Found private key matching thumbprint 968AF7D34C8666843574C27BDEA51509744B33D8 Feb 12 19:43:35.155330 waagent[1586]: 2024-02-12T19:43:35.155270Z INFO ExtHandler ExtHandler Certificate with thumbprint 072AB5C0C6488C73A352633D06A5F32704C30332 has no matching private key. 
Feb 12 19:43:35.155575 waagent[1586]: 2024-02-12T19:43:35.155525Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:43:35.170494 waagent[1586]: 2024-02-12T19:43:35.170426Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 62614404-4e1f-4704-bf94-2e301c39b98b New eTag: 2154492762990159908] Feb 12 19:43:35.171120 waagent[1586]: 2024-02-12T19:43:35.171059Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:43:35.220537 waagent[1586]: 2024-02-12T19:43:35.220392Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:43:35.230415 waagent[1586]: 2024-02-12T19:43:35.230322Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1586 Feb 12 19:43:35.235259 waagent[1586]: 2024-02-12T19:43:35.235177Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:43:35.237066 waagent[1586]: 2024-02-12T19:43:35.236988Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:43:35.263149 waagent[1586]: 2024-02-12T19:43:35.263086Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:43:35.263570 waagent[1586]: 2024-02-12T19:43:35.263504Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:43:35.271774 waagent[1586]: 2024-02-12T19:43:35.271697Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 12 19:43:35.272279 waagent[1586]: 2024-02-12T19:43:35.272219Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:43:35.273378 waagent[1586]: 2024-02-12T19:43:35.273310Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:43:35.274658 waagent[1586]: 2024-02-12T19:43:35.274598Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:43:35.275267 waagent[1586]: 2024-02-12T19:43:35.275210Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:43:35.275791 waagent[1586]: 2024-02-12T19:43:35.275728Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:43:35.276014 waagent[1586]: 2024-02-12T19:43:35.275958Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:43:35.276454 waagent[1586]: 2024-02-12T19:43:35.276400Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:43:35.276659 waagent[1586]: 2024-02-12T19:43:35.276607Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:43:35.277562 waagent[1586]: 2024-02-12T19:43:35.277507Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:43:35.277680 waagent[1586]: 2024-02-12T19:43:35.277617Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:43:35.277916 waagent[1586]: 2024-02-12T19:43:35.277867Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:43:35.278902 waagent[1586]: 2024-02-12T19:43:35.278847Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:43:35.279501 waagent[1586]: 2024-02-12T19:43:35.279442Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:43:35.280890 waagent[1586]: 2024-02-12T19:43:35.280835Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:43:35.283353 waagent[1586]: 2024-02-12T19:43:35.283294Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:43:35.283759 waagent[1586]: 2024-02-12T19:43:35.283680Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:43:35.283759 waagent[1586]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:43:35.283759 waagent[1586]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:43:35.283759 waagent[1586]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:43:35.283759 waagent[1586]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:43:35.283759 waagent[1586]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:43:35.283759 waagent[1586]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:43:35.284057 waagent[1586]: 2024-02-12T19:43:35.283860Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:43:35.284057 waagent[1586]: 2024-02-12T19:43:35.284003Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:43:35.295090 waagent[1586]: 2024-02-12T19:43:35.295011Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:43:35.296850 waagent[1586]: 2024-02-12T19:43:35.296797Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:43:35.297721 waagent[1586]: 2024-02-12T19:43:35.297658Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:43:35.313670 waagent[1586]: 2024-02-12T19:43:35.313594Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1576' Feb 12 19:43:35.345106 waagent[1586]: 2024-02-12T19:43:35.343474Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:43:35.345106 waagent[1586]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:43:35.345106 waagent[1586]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:43:35.345106 waagent[1586]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b0:4e:84 brd ff:ff:ff:ff:ff:ff Feb 12 19:43:35.345106 waagent[1586]: 3: enP31056s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b0:4e:84 brd ff:ff:ff:ff:ff:ff\ altname enP31056p0s2 Feb 12 19:43:35.345106 waagent[1586]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:43:35.345106 waagent[1586]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:43:35.345106 waagent[1586]: 2: eth0 inet 10.200.8.31/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:43:35.345106 waagent[1586]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:43:35.345106 waagent[1586]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:43:35.345106 waagent[1586]: 2: eth0 inet6 fe80::20d:3aff:feb0:4e84/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:43:35.352797 waagent[1586]: 2024-02-12T19:43:35.352681Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
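The routing-table dumps the MonitorHandler logs are raw `/proc/net/route` rows, where each address is a little-endian 32-bit value printed as hex. A small helper (hypothetical, not part of waagent) decodes them; the entries below are taken from the table dumped above:

```python
import socket
import struct

def hex_to_ip(h: str) -> str:
    """Decode a little-endian hex address as found in /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

print(hex_to_ip("0108C80A"))  # default-route gateway -> 10.200.8.1
print(hex_to_ip("0008C80A"))  # on-link subnet        -> 10.200.8.0
print(hex_to_ip("10813FA8"))  # host route            -> 168.63.129.16 (wireserver)
print(hex_to_ip("FEA9FEA9"))  # host route            -> 169.254.169.254 (IMDS)
```

Decoded this way, the table matches the DHCP lease logged earlier (eth0 at 10.200.8.31/24, gateway 10.200.8.1) plus host routes to the wireserver and the instance metadata endpoint.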
Feb 12 19:43:35.522250 waagent[1586]: 2024-02-12T19:43:35.522109Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 12 19:43:35.525502 waagent[1586]: 2024-02-12T19:43:35.525396Z INFO EnvHandler ExtHandler Firewall rules: Feb 12 19:43:35.525502 waagent[1586]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:43:35.525502 waagent[1586]: pkts bytes target prot opt in out source destination Feb 12 19:43:35.525502 waagent[1586]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:43:35.525502 waagent[1586]: pkts bytes target prot opt in out source destination Feb 12 19:43:35.525502 waagent[1586]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:43:35.525502 waagent[1586]: pkts bytes target prot opt in out source destination Feb 12 19:43:35.525502 waagent[1586]: 3 856 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:43:35.525502 waagent[1586]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:43:35.526894 waagent[1586]: 2024-02-12T19:43:35.526837Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:43:35.676107 waagent[1586]: 2024-02-12T19:43:35.675930Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:43:35.805407 waagent[1503]: 2024-02-12T19:43:35.805224Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:43:35.812307 waagent[1503]: 2024-02-12T19:43:35.812225Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:43:36.809960 waagent[1627]: 2024-02-12T19:43:36.809840Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:43:36.810700 waagent[1627]: 2024-02-12T19:43:36.810631Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:43:36.810869 waagent[1627]: 
2024-02-12T19:43:36.810812Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:43:36.820557 waagent[1627]: 2024-02-12T19:43:36.820432Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:43:36.821002 waagent[1627]: 2024-02-12T19:43:36.820937Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:43:36.821180 waagent[1627]: 2024-02-12T19:43:36.821127Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:43:36.833862 waagent[1627]: 2024-02-12T19:43:36.833766Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:43:36.842791 waagent[1627]: 2024-02-12T19:43:36.842697Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:43:36.843851 waagent[1627]: 2024-02-12T19:43:36.843786Z INFO ExtHandler Feb 12 19:43:36.844016 waagent[1627]: 2024-02-12T19:43:36.843963Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d4a00053-162c-4452-8532-24e8440f2bd0 eTag: 2154492762990159908 source: Fabric] Feb 12 19:43:36.844737 waagent[1627]: 2024-02-12T19:43:36.844668Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:43:36.845849 waagent[1627]: 2024-02-12T19:43:36.845791Z INFO ExtHandler Feb 12 19:43:36.845987 waagent[1627]: 2024-02-12T19:43:36.845939Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:43:36.852915 waagent[1627]: 2024-02-12T19:43:36.852857Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:43:36.853388 waagent[1627]: 2024-02-12T19:43:36.853335Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:43:36.875459 waagent[1627]: 2024-02-12T19:43:36.875366Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 12 19:43:36.945305 waagent[1627]: 2024-02-12T19:43:36.945156Z INFO ExtHandler Downloaded certificate {'thumbprint': '968AF7D34C8666843574C27BDEA51509744B33D8', 'hasPrivateKey': True} Feb 12 19:43:36.946355 waagent[1627]: 2024-02-12T19:43:36.946281Z INFO ExtHandler Downloaded certificate {'thumbprint': '072AB5C0C6488C73A352633D06A5F32704C30332', 'hasPrivateKey': False} Feb 12 19:43:36.947347 waagent[1627]: 2024-02-12T19:43:36.947285Z INFO ExtHandler Fetch goal state completed Feb 12 19:43:36.973744 waagent[1627]: 2024-02-12T19:43:36.973607Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1627 Feb 12 19:43:36.977285 waagent[1627]: 2024-02-12T19:43:36.977200Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:43:36.978771 waagent[1627]: 2024-02-12T19:43:36.978687Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:43:36.983974 waagent[1627]: 2024-02-12T19:43:36.983911Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:43:36.984364 waagent[1627]: 2024-02-12T19:43:36.984303Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:43:36.992885 waagent[1627]: 2024-02-12T19:43:36.992822Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:43:36.993448 waagent[1627]: 2024-02-12T19:43:36.993384Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:43:37.006917 waagent[1627]: 2024-02-12T19:43:37.006779Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. 
Feb 12 19:43:37.010043 waagent[1627]: 2024-02-12T19:43:37.009922Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 12 19:43:37.014949 waagent[1627]: 2024-02-12T19:43:37.014879Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:43:37.016447 waagent[1627]: 2024-02-12T19:43:37.016382Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:43:37.016897 waagent[1627]: 2024-02-12T19:43:37.016838Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:43:37.017054 waagent[1627]: 2024-02-12T19:43:37.017004Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:43:37.017588 waagent[1627]: 2024-02-12T19:43:37.017529Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:43:37.018292 waagent[1627]: 2024-02-12T19:43:37.018233Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 12 19:43:37.018540 waagent[1627]: 2024-02-12T19:43:37.018482Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:43:37.018665 waagent[1627]: 2024-02-12T19:43:37.018615Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:43:37.018665 waagent[1627]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:43:37.018665 waagent[1627]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:43:37.018665 waagent[1627]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:43:37.018665 waagent[1627]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:43:37.018665 waagent[1627]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:43:37.018665 waagent[1627]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:43:37.019137 waagent[1627]: 2024-02-12T19:43:37.019085Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:43:37.019513 waagent[1627]: 2024-02-12T19:43:37.019453Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:43:37.019856 waagent[1627]: 2024-02-12T19:43:37.019802Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:43:37.022681 waagent[1627]: 2024-02-12T19:43:37.022559Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:43:37.022922 waagent[1627]: 2024-02-12T19:43:37.022861Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:43:37.023082 waagent[1627]: 2024-02-12T19:43:37.023025Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:43:37.025354 waagent[1627]: 2024-02-12T19:43:37.025291Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:43:37.025471 waagent[1627]: 2024-02-12T19:43:37.025411Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:43:37.028085 waagent[1627]: 2024-02-12T19:43:37.027827Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:43:37.045219 waagent[1627]: 2024-02-12T19:43:37.045128Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:43:37.045219 waagent[1627]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:43:37.045219 waagent[1627]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:43:37.045219 waagent[1627]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b0:4e:84 brd ff:ff:ff:ff:ff:ff Feb 12 19:43:37.045219 waagent[1627]: 3: enP31056s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b0:4e:84 brd ff:ff:ff:ff:ff:ff\ altname enP31056p0s2 Feb 12 19:43:37.045219 waagent[1627]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:43:37.045219 waagent[1627]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:43:37.045219 waagent[1627]: 2: eth0 inet 10.200.8.31/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:43:37.045219 waagent[1627]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:43:37.045219 waagent[1627]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:43:37.045219 waagent[1627]: 2: eth0 inet6 fe80::20d:3aff:feb0:4e84/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:43:37.057290 waagent[1627]: 2024-02-12T19:43:37.057179Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:43:37.058939 waagent[1627]: 2024-02-12T19:43:37.058871Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:43:37.112555 waagent[1627]: 2024-02-12T19:43:37.112436Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 
12 19:43:37.112555 waagent[1627]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:43:37.112555 waagent[1627]: pkts bytes target prot opt in out source destination Feb 12 19:43:37.112555 waagent[1627]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:43:37.112555 waagent[1627]: pkts bytes target prot opt in out source destination Feb 12 19:43:37.112555 waagent[1627]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:43:37.112555 waagent[1627]: pkts bytes target prot opt in out source destination Feb 12 19:43:37.112555 waagent[1627]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:43:37.112555 waagent[1627]: 131 15406 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:43:37.112555 waagent[1627]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:43:37.154778 waagent[1627]: 2024-02-12T19:43:37.154692Z INFO ExtHandler ExtHandler Feb 12 19:43:37.154942 waagent[1627]: 2024-02-12T19:43:37.154875Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4d87776a-b734-4f40-941f-ead6b0372917 correlation 3671e1bb-a129-45d2-baaa-d22b5a1dd387 created: 2024-02-12T19:42:46.062019Z] Feb 12 19:43:37.155757 waagent[1627]: 2024-02-12T19:43:37.155681Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:43:37.157466 waagent[1627]: 2024-02-12T19:43:37.157408Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 12 19:43:37.177289 waagent[1627]: 2024-02-12T19:43:37.177212Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
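The OUTPUT-chain rules printed above are the agent's wireserver protection: permit non-root DNS TCP to 168.63.129.16 (the `dpt:53` rule added just before), permit traffic owned by uid 0, and drop any other new or invalid connection to that address. A sketch of how those three rules could be expressed as iptables invocations; this only builds the command lines (applying them needs root), and the exact flags are an assumption reconstructed from the counters table in the log:

```python
import subprocess

WIRESERVER = "168.63.129.16"

def wireserver_rules(endpoint: str = WIRESERVER):
    """Build iptables commands mirroring the logged OUTPUT-chain rules."""
    return [
        # tcp dpt:53 ACCEPT -- non-root DNS TCP queries to the wireserver
        ["iptables", "-A", "OUTPUT", "-d", endpoint, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        # owner UID match 0 ACCEPT -- agent (root) traffic
        ["iptables", "-A", "OUTPUT", "-d", endpoint, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # ctstate INVALID,NEW DROP -- everything else to the wireserver
        ["iptables", "-A", "OUTPUT", "-d", endpoint, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

def apply_rules():
    for cmd in wireserver_rules():
        subprocess.run(cmd, check=True)  # requires root; not invoked here
```

Rule order matters: the two ACCEPT rules must precede the DROP, which is why the log shows nonzero packet counters only on the ACCEPT entries.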
Feb 12 19:43:37.187562 waagent[1627]: 2024-02-12T19:43:37.187473Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8D315DDB-AB41-4943-B4F0-F3420E4543E2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:44:12.287951 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 12 19:44:14.685893 update_engine[1385]: I0212 19:44:14.685814 1385 update_attempter.cc:509] Updating boot flags... Feb 12 19:44:23.231483 systemd[1]: Created slice system-sshd.slice. Feb 12 19:44:23.233127 systemd[1]: Started sshd@0-10.200.8.31:22-10.200.12.6:53688.service. Feb 12 19:44:23.901409 sshd[1707]: Accepted publickey for core from 10.200.12.6 port 53688 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:44:23.902877 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:23.906765 systemd-logind[1384]: New session 3 of user core. Feb 12 19:44:23.907882 systemd[1]: Started session-3.scope. Feb 12 19:44:24.445697 systemd[1]: Started sshd@1-10.200.8.31:22-10.200.12.6:53704.service. Feb 12 19:44:25.065870 sshd[1715]: Accepted publickey for core from 10.200.12.6 port 53704 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:44:25.067555 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:25.072357 systemd[1]: Started session-4.scope. Feb 12 19:44:25.073101 systemd-logind[1384]: New session 4 of user core. Feb 12 19:44:25.503451 sshd[1715]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:25.506356 systemd[1]: sshd@1-10.200.8.31:22-10.200.12.6:53704.service: Deactivated successfully. Feb 12 19:44:25.507956 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:44:25.508755 systemd-logind[1384]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:44:25.509888 systemd-logind[1384]: Removed session 4. 
Feb 12 19:44:25.605354 systemd[1]: Started sshd@2-10.200.8.31:22-10.200.12.6:53718.service. Feb 12 19:44:26.239307 sshd[1722]: Accepted publickey for core from 10.200.12.6 port 53718 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:44:26.240698 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:26.245439 systemd[1]: Started session-5.scope. Feb 12 19:44:26.245686 systemd-logind[1384]: New session 5 of user core. Feb 12 19:44:26.673680 sshd[1722]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:26.677262 systemd[1]: sshd@2-10.200.8.31:22-10.200.12.6:53718.service: Deactivated successfully. Feb 12 19:44:26.678481 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:44:26.680223 systemd-logind[1384]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:44:26.681645 systemd-logind[1384]: Removed session 5. Feb 12 19:44:26.777477 systemd[1]: Started sshd@3-10.200.8.31:22-10.200.12.6:53732.service. Feb 12 19:44:27.396194 sshd[1729]: Accepted publickey for core from 10.200.12.6 port 53732 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:44:27.397899 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:27.402922 systemd[1]: Started session-6.scope. Feb 12 19:44:27.403182 systemd-logind[1384]: New session 6 of user core. Feb 12 19:44:27.844573 sshd[1729]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:27.847751 systemd[1]: sshd@3-10.200.8.31:22-10.200.12.6:53732.service: Deactivated successfully. Feb 12 19:44:27.849018 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:44:27.849029 systemd-logind[1384]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:44:27.850165 systemd-logind[1384]: Removed session 6. Feb 12 19:44:27.948271 systemd[1]: Started sshd@4-10.200.8.31:22-10.200.12.6:46732.service. 
Feb 12 19:44:28.577391 sshd[1736]: Accepted publickey for core from 10.200.12.6 port 46732 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:44:28.578841 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:28.583495 systemd[1]: Started session-7.scope. Feb 12 19:44:28.583789 systemd-logind[1384]: New session 7 of user core. Feb 12 19:44:29.031756 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:44:29.032018 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:44:29.710334 systemd[1]: Starting docker.service... Feb 12 19:44:29.750599 env[1755]: time="2024-02-12T19:44:29.750532636Z" level=info msg="Starting up" Feb 12 19:44:29.751852 env[1755]: time="2024-02-12T19:44:29.751819236Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:44:29.751852 env[1755]: time="2024-02-12T19:44:29.751842236Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:44:29.752016 env[1755]: time="2024-02-12T19:44:29.751864536Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:44:29.752016 env[1755]: time="2024-02-12T19:44:29.751878936Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:44:29.753515 env[1755]: time="2024-02-12T19:44:29.753482136Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:44:29.753515 env[1755]: time="2024-02-12T19:44:29.753500636Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:44:29.753664 env[1755]: time="2024-02-12T19:44:29.753517636Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:44:29.753664 env[1755]: time="2024-02-12T19:44:29.753529336Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:44:29.761010 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1401675881-merged.mount: Deactivated successfully. Feb 12 19:44:29.935538 env[1755]: time="2024-02-12T19:44:29.935495686Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 19:44:29.935538 env[1755]: time="2024-02-12T19:44:29.935520786Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 19:44:29.935850 env[1755]: time="2024-02-12T19:44:29.935740786Z" level=info msg="Loading containers: start." Feb 12 19:44:30.040788 kernel: Initializing XFRM netlink socket Feb 12 19:44:30.092839 env[1755]: time="2024-02-12T19:44:30.092798127Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:44:30.159774 systemd-networkd[1576]: docker0: Link UP Feb 12 19:44:30.178458 env[1755]: time="2024-02-12T19:44:30.178414749Z" level=info msg="Loading containers: done." Feb 12 19:44:30.197664 env[1755]: time="2024-02-12T19:44:30.197608754Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:44:30.197887 env[1755]: time="2024-02-12T19:44:30.197854054Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:44:30.197993 env[1755]: time="2024-02-12T19:44:30.197970654Z" level=info msg="Daemon has completed initialization" Feb 12 19:44:30.233680 systemd[1]: Started docker.service. Feb 12 19:44:30.243762 env[1755]: time="2024-02-12T19:44:30.243695366Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:44:30.260675 systemd[1]: Reloading. 
Feb 12 19:44:30.338244 /usr/lib/systemd/system-generators/torcx-generator[1885]: time="2024-02-12T19:44:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:44:30.340144 /usr/lib/systemd/system-generators/torcx-generator[1885]: time="2024-02-12T19:44:30Z" level=info msg="torcx already run" Feb 12 19:44:30.431611 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:44:30.431632 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:44:30.449809 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:44:30.530853 systemd[1]: Started kubelet.service. Feb 12 19:44:30.606017 kubelet[1952]: E0212 19:44:30.605945 1952 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:44:30.607878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:44:30.608100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:44:37.075336 env[1398]: time="2024-02-12T19:44:37.075273398Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:44:37.733823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64818772.mount: Deactivated successfully. 
Feb 12 19:44:39.953581 env[1398]: time="2024-02-12T19:44:39.953524430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:39.964256 env[1398]: time="2024-02-12T19:44:39.964206043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:39.968316 env[1398]: time="2024-02-12T19:44:39.968273062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:39.973293 env[1398]: time="2024-02-12T19:44:39.973253108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:39.973987 env[1398]: time="2024-02-12T19:44:39.973949328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 19:44:39.984151 env[1398]: time="2024-02-12T19:44:39.984105426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:44:40.681056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:44:40.681324 systemd[1]: Stopped kubelet.service. Feb 12 19:44:40.683846 systemd[1]: Started kubelet.service. 
Feb 12 19:44:40.738626 kubelet[1977]: E0212 19:44:40.738569 1977 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:44:40.742606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:44:40.742831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:44:42.023608 env[1398]: time="2024-02-12T19:44:42.023551363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:42.055220 env[1398]: time="2024-02-12T19:44:42.055162116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:42.062714 env[1398]: time="2024-02-12T19:44:42.062646018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:42.067127 env[1398]: time="2024-02-12T19:44:42.067079538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:42.067968 env[1398]: time="2024-02-12T19:44:42.067937361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 19:44:42.078185 env[1398]: time="2024-02-12T19:44:42.078147637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:44:43.264723 env[1398]: 
time="2024-02-12T19:44:43.264656276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:43.270782 env[1398]: time="2024-02-12T19:44:43.270740835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:43.275400 env[1398]: time="2024-02-12T19:44:43.275364957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:43.280808 env[1398]: time="2024-02-12T19:44:43.280772599Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:43.281426 env[1398]: time="2024-02-12T19:44:43.281392615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 19:44:43.291168 env[1398]: time="2024-02-12T19:44:43.291135071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:44:44.509550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828731550.mount: Deactivated successfully. 
Feb 12 19:44:44.985413 env[1398]: time="2024-02-12T19:44:44.985356607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:44.993598 env[1398]: time="2024-02-12T19:44:44.993548116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:44.997318 env[1398]: time="2024-02-12T19:44:44.997278212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.001063 env[1398]: time="2024-02-12T19:44:45.001024108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.001414 env[1398]: time="2024-02-12T19:44:45.001379016Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:44:45.011568 env[1398]: time="2024-02-12T19:44:45.011533669Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:44:45.515589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894392569.mount: Deactivated successfully. 
Feb 12 19:44:45.554500 env[1398]: time="2024-02-12T19:44:45.554446193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.563658 env[1398]: time="2024-02-12T19:44:45.563606122Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.567634 env[1398]: time="2024-02-12T19:44:45.567598221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.576004 env[1398]: time="2024-02-12T19:44:45.575966030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:45.576428 env[1398]: time="2024-02-12T19:44:45.576397140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:44:45.586259 env[1398]: time="2024-02-12T19:44:45.586223085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:44:46.480596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912648236.mount: Deactivated successfully. 
Feb 12 19:44:50.633327 env[1398]: time="2024-02-12T19:44:50.633266060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:50.645294 env[1398]: time="2024-02-12T19:44:50.645231221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:50.651054 env[1398]: time="2024-02-12T19:44:50.651008647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:50.658785 env[1398]: time="2024-02-12T19:44:50.658746816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:50.659192 env[1398]: time="2024-02-12T19:44:50.659162926Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 19:44:50.669152 env[1398]: time="2024-02-12T19:44:50.669114343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:44:50.930832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:44:50.931086 systemd[1]: Stopped kubelet.service. Feb 12 19:44:50.932949 systemd[1]: Started kubelet.service. 
Feb 12 19:44:50.984620 kubelet[2009]: E0212 19:44:50.984555 2009 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:44:50.986457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:44:50.986677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:44:51.342695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870487991.mount: Deactivated successfully. Feb 12 19:44:51.937519 env[1398]: time="2024-02-12T19:44:51.937460136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:51.946432 env[1398]: time="2024-02-12T19:44:51.946369026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:51.951681 env[1398]: time="2024-02-12T19:44:51.951622538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:51.957148 env[1398]: time="2024-02-12T19:44:51.957089254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:51.957803 env[1398]: time="2024-02-12T19:44:51.957762969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 19:44:55.137616 systemd[1]: Stopped kubelet.service. Feb 12 19:44:55.153369 systemd[1]: Reloading. 
Feb 12 19:44:55.236268 /usr/lib/systemd/system-generators/torcx-generator[2095]: time="2024-02-12T19:44:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:44:55.236308 /usr/lib/systemd/system-generators/torcx-generator[2095]: time="2024-02-12T19:44:55Z" level=info msg="torcx already run" Feb 12 19:44:55.330773 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:44:55.330795 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:44:55.349079 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:44:55.438513 systemd[1]: Started kubelet.service. Feb 12 19:44:55.489958 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:44:55.490350 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:44:55.490508 kubelet[2163]: I0212 19:44:55.490479 2163 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:44:55.492076 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 12 19:44:55.492178 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:44:56.030325 kubelet[2163]: I0212 19:44:56.030286 2163 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:44:56.030325 kubelet[2163]: I0212 19:44:56.030316 2163 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:44:56.030595 kubelet[2163]: I0212 19:44:56.030576 2163 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:44:56.033561 kubelet[2163]: E0212 19:44:56.033532 2163 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.31:6443: connect: connection refused Feb 12 19:44:56.033785 kubelet[2163]: I0212 19:44:56.033768 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:44:56.036540 kubelet[2163]: I0212 19:44:56.036511 2163 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:44:56.036925 kubelet[2163]: I0212 19:44:56.036902 2163 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:44:56.037014 kubelet[2163]: I0212 19:44:56.036978 2163 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:44:56.037014 kubelet[2163]: I0212 19:44:56.037006 2163 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:44:56.037192 kubelet[2163]: I0212 19:44:56.037023 2163 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:44:56.037192 kubelet[2163]: I0212 19:44:56.037139 2163 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 19:44:56.039919 kubelet[2163]: I0212 19:44:56.039898 2163 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:44:56.039919 kubelet[2163]: I0212 19:44:56.039921 2163 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:44:56.040139 kubelet[2163]: I0212 19:44:56.039946 2163 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:44:56.040139 kubelet[2163]: I0212 19:44:56.039963 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:44:56.041062 kubelet[2163]: I0212 19:44:56.041048 2163 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:44:56.041434 kubelet[2163]: W0212 19:44:56.041418 2163 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:44:56.042034 kubelet[2163]: I0212 19:44:56.042017 2163 server.go:1186] "Started kubelet" Feb 12 19:44:56.042279 kubelet[2163]: W0212 19:44:56.042239 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2665495451&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused Feb 12 19:44:56.042812 kubelet[2163]: E0212 19:44:56.042793 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2665495451&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused Feb 12 19:44:56.046259 kubelet[2163]: W0212 19:44:56.045937 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: 
connection refused Feb 12 19:44:56.046259 kubelet[2163]: E0212 19:44:56.045983 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused Feb 12 19:44:56.046259 kubelet[2163]: E0212 19:44:56.046043 2163 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b335202eaefe2a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 41995818, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 41995818, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.31:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.31:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:44:56.051497 kubelet[2163]: E0212 19:44:56.051479 2163 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory 
cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:44:56.051622 kubelet[2163]: E0212 19:44:56.051614 2163 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:44:56.052504 kubelet[2163]: I0212 19:44:56.052492 2163 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:44:56.054161 kubelet[2163]: I0212 19:44:56.054144 2163 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:44:56.054244 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 19:44:56.054336 kubelet[2163]: I0212 19:44:56.054318 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:44:56.057606 kubelet[2163]: I0212 19:44:56.057585 2163 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:44:56.058328 kubelet[2163]: W0212 19:44:56.058288 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.058328 kubelet[2163]: E0212 19:44:56.058331 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.059441 kubelet[2163]: E0212 19:44:56.058387 2163 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2665495451?timeout=10s": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.059441 kubelet[2163]: I0212 19:44:56.058596 2163 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:44:56.154587 kubelet[2163]: I0212 19:44:56.154563 2163 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:44:56.154845 kubelet[2163]: I0212 19:44:56.154832 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:44:56.154959 kubelet[2163]: I0212 19:44:56.154951 2163 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:44:56.158653 kubelet[2163]: I0212 19:44:56.158630 2163 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.159019 kubelet[2163]: E0212 19:44:56.158991 2163 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.258847 kubelet[2163]: E0212 19:44:56.258799 2163 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2665495451?timeout=10s": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.359147 kubelet[2163]: I0212 19:44:56.359048 2163 policy_none.go:49] "None policy: Start"
Feb 12 19:44:56.362309 kubelet[2163]: I0212 19:44:56.362283 2163 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:44:56.362418 kubelet[2163]: I0212 19:44:56.362313 2163 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:44:56.362825 kubelet[2163]: I0212 19:44:56.362801 2163 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.363160 kubelet[2163]: E0212 19:44:56.363142 2163 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.375087 kubelet[2163]: I0212 19:44:56.375060 2163 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:44:56.377244 kubelet[2163]: I0212 19:44:56.377217 2163 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:44:56.377453 kubelet[2163]: I0212 19:44:56.377434 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:44:56.380560 kubelet[2163]: E0212 19:44:56.380538 2163 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-2665495451\" not found"
Feb 12 19:44:56.432060 kubelet[2163]: I0212 19:44:56.432028 2163 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:44:56.432060 kubelet[2163]: I0212 19:44:56.432061 2163 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:44:56.432288 kubelet[2163]: I0212 19:44:56.432090 2163 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:44:56.432288 kubelet[2163]: E0212 19:44:56.432153 2163 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 19:44:56.433501 kubelet[2163]: W0212 19:44:56.433474 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.433685 kubelet[2163]: E0212 19:44:56.433672 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.532455 kubelet[2163]: I0212 19:44:56.532409 2163 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:44:56.534157 kubelet[2163]: I0212 19:44:56.534138 2163 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:44:56.535597 kubelet[2163]: I0212 19:44:56.535575 2163 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:44:56.537886 kubelet[2163]: I0212 19:44:56.537862 2163 status_manager.go:698] "Failed to get status for pod" podUID=98be0c838f73a2dff0660c880a5a41f6 pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451" err="Get \"https://10.200.8.31:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-2665495451\": dial tcp 10.200.8.31:6443: connect: connection refused"
Feb 12 19:44:56.541767 kubelet[2163]: I0212 19:44:56.541743 2163 status_manager.go:698] "Failed to get status for pod" podUID=32f4d462dc00b56029ba9ef332844ec3 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451" err="Get \"https://10.200.8.31:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-2665495451\": dial tcp 10.200.8.31:6443: connect: connection refused"
Feb 12 19:44:56.543517 kubelet[2163]: I0212 19:44:56.543495 2163 status_manager.go:698] "Failed to get status for pod" podUID=686ac3ff44d5173182f64eb3a28de187 pod="kube-system/kube-scheduler-ci-3510.3.2-a-2665495451" err="Get \"https://10.200.8.31:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-2665495451\": dial tcp 10.200.8.31:6443: connect: connection refused"
Feb 12 19:44:56.562026 kubelet[2163]: I0212 19:44:56.561981 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562026 kubelet[2163]: I0212 19:44:56.562032 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562259 kubelet[2163]: I0212 19:44:56.562064 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562259 kubelet[2163]: I0212 19:44:56.562090 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98be0c838f73a2dff0660c880a5a41f6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2665495451\" (UID: \"98be0c838f73a2dff0660c880a5a41f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562259 kubelet[2163]: I0212 19:44:56.562119 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98be0c838f73a2dff0660c880a5a41f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-2665495451\" (UID: \"98be0c838f73a2dff0660c880a5a41f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562259 kubelet[2163]: I0212 19:44:56.562143 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562259 kubelet[2163]: I0212 19:44:56.562169 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562416 kubelet[2163]: I0212 19:44:56.562197 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/686ac3ff44d5173182f64eb3a28de187-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-2665495451\" (UID: \"686ac3ff44d5173182f64eb3a28de187\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.562416 kubelet[2163]: I0212 19:44:56.562224 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98be0c838f73a2dff0660c880a5a41f6-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2665495451\" (UID: \"98be0c838f73a2dff0660c880a5a41f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.659764 kubelet[2163]: E0212 19:44:56.659621 2163 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2665495451?timeout=10s": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:56.764944 kubelet[2163]: I0212 19:44:56.764913 2163 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.765295 kubelet[2163]: E0212 19:44:56.765275 2163 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:56.842581 env[1398]: time="2024-02-12T19:44:56.842384538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-2665495451,Uid:98be0c838f73a2dff0660c880a5a41f6,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:56.842581 env[1398]: time="2024-02-12T19:44:56.842401038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-2665495451,Uid:32f4d462dc00b56029ba9ef332844ec3,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:56.844245 env[1398]: time="2024-02-12T19:44:56.844188572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-2665495451,Uid:686ac3ff44d5173182f64eb3a28de187,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:57.055280 kubelet[2163]: W0212 19:44:57.055221 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.055280 kubelet[2163]: E0212 19:44:57.055287 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.197460 kubelet[2163]: W0212 19:44:57.197405 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2665495451&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.197460 kubelet[2163]: E0212 19:44:57.197469 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2665495451&limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.364594 kubelet[2163]: W0212 19:44:57.364481 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.364594 kubelet[2163]: E0212 19:44:57.364535 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.388718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871080585.mount: Deactivated successfully.
Feb 12 19:44:57.397013 kubelet[2163]: W0212 19:44:57.396961 2163 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.397013 kubelet[2163]: E0212 19:44:57.397019 2163 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.425055 env[1398]: time="2024-02-12T19:44:57.424999576Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.432955 env[1398]: time="2024-02-12T19:44:57.432905321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.460597 kubelet[2163]: E0212 19:44:57.460552 2163 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2665495451?timeout=10s": dial tcp 10.200.8.31:6443: connect: connection refused
Feb 12 19:44:57.475015 env[1398]: time="2024-02-12T19:44:57.474960191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.478997 env[1398]: time="2024-02-12T19:44:57.478955364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.485041 env[1398]: time="2024-02-12T19:44:57.484997675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.492212 env[1398]: time="2024-02-12T19:44:57.492164606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.504433 env[1398]: time="2024-02-12T19:44:57.504389730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.511261 env[1398]: time="2024-02-12T19:44:57.511216555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.521104 env[1398]: time="2024-02-12T19:44:57.521056735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.527178 env[1398]: time="2024-02-12T19:44:57.527069945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.534678 env[1398]: time="2024-02-12T19:44:57.534628083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.540663 env[1398]: time="2024-02-12T19:44:57.540611493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:57.567805 kubelet[2163]: I0212 19:44:57.567700 2163 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:57.568242 kubelet[2163]: E0212 19:44:57.568077 2163 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.31:6443/api/v1/nodes\": dial tcp 10.200.8.31:6443: connect: connection refused" node="ci-3510.3.2-a-2665495451"
Feb 12 19:44:57.649600 env[1398]: time="2024-02-12T19:44:57.648786773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:57.649600 env[1398]: time="2024-02-12T19:44:57.648884175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:57.649600 env[1398]: time="2024-02-12T19:44:57.648918075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:57.649600 env[1398]: time="2024-02-12T19:44:57.649102879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc6424f1d1501f08d0b25fe421a4fb1cdebfda3e0dc99e9f25ed3f5cdea13a82 pid=2240 runtime=io.containerd.runc.v2
Feb 12 19:44:57.674299 env[1398]: time="2024-02-12T19:44:57.674203738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:57.674486 env[1398]: time="2024-02-12T19:44:57.674316240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:57.674486 env[1398]: time="2024-02-12T19:44:57.674345241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:57.674593 env[1398]: time="2024-02-12T19:44:57.674515044Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a168307116f465a87c8607d062e3d98c9e47ca8c8351285737da2745acc5c4af pid=2270 runtime=io.containerd.runc.v2
Feb 12 19:44:57.675134 env[1398]: time="2024-02-12T19:44:57.674886151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:57.675134 env[1398]: time="2024-02-12T19:44:57.674943252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:57.675134 env[1398]: time="2024-02-12T19:44:57.674959352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:57.675512 env[1398]: time="2024-02-12T19:44:57.675446661Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d32a85294d5e92fbeee8766ea7746e164b9985c7b437c362c5ca1a4b6a685c52 pid=2263 runtime=io.containerd.runc.v2
Feb 12 19:44:57.744001 env[1398]: time="2024-02-12T19:44:57.743557408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-2665495451,Uid:98be0c838f73a2dff0660c880a5a41f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc6424f1d1501f08d0b25fe421a4fb1cdebfda3e0dc99e9f25ed3f5cdea13a82\""
Feb 12 19:44:57.748585 env[1398]: time="2024-02-12T19:44:57.748536299Z" level=info msg="CreateContainer within sandbox \"cc6424f1d1501f08d0b25fe421a4fb1cdebfda3e0dc99e9f25ed3f5cdea13a82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:44:57.782599 env[1398]: time="2024-02-12T19:44:57.778256143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-2665495451,Uid:32f4d462dc00b56029ba9ef332844ec3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d32a85294d5e92fbeee8766ea7746e164b9985c7b437c362c5ca1a4b6a685c52\""
Feb 12 19:44:57.782599 env[1398]: time="2024-02-12T19:44:57.781361900Z" level=info msg="CreateContainer within sandbox \"d32a85294d5e92fbeee8766ea7746e164b9985c7b437c362c5ca1a4b6a685c52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:44:57.799687 env[1398]: time="2024-02-12T19:44:57.799626034Z" level=info msg="CreateContainer within sandbox \"cc6424f1d1501f08d0b25fe421a4fb1cdebfda3e0dc99e9f25ed3f5cdea13a82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9f632c16da5651e03e8615afd5007b058c0e423718fcb86c6c7933c99a87011a\""
Feb 12 19:44:57.801003 env[1398]: time="2024-02-12T19:44:57.800975459Z" level=info msg="StartContainer for \"9f632c16da5651e03e8615afd5007b058c0e423718fcb86c6c7933c99a87011a\""
Feb 12 19:44:57.806156 env[1398]: time="2024-02-12T19:44:57.806111653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-2665495451,Uid:686ac3ff44d5173182f64eb3a28de187,Namespace:kube-system,Attempt:0,} returns sandbox id \"a168307116f465a87c8607d062e3d98c9e47ca8c8351285737da2745acc5c4af\""
Feb 12 19:44:57.808858 env[1398]: time="2024-02-12T19:44:57.808822503Z" level=info msg="CreateContainer within sandbox \"a168307116f465a87c8607d062e3d98c9e47ca8c8351285737da2745acc5c4af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:44:57.850862 env[1398]: time="2024-02-12T19:44:57.850805271Z" level=info msg="CreateContainer within sandbox \"d32a85294d5e92fbeee8766ea7746e164b9985c7b437c362c5ca1a4b6a685c52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491\""
Feb 12 19:44:57.851460 env[1398]: time="2024-02-12T19:44:57.851430783Z" level=info msg="StartContainer for \"8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491\""
Feb 12 19:44:57.880800 env[1398]: time="2024-02-12T19:44:57.879581098Z" level=info msg="CreateContainer within sandbox \"a168307116f465a87c8607d062e3d98c9e47ca8c8351285737da2745acc5c4af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c\""
Feb 12 19:44:57.881867 env[1398]: time="2024-02-12T19:44:57.881825239Z" level=info msg="StartContainer for \"193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c\""
Feb 12 19:44:57.891862 env[1398]: time="2024-02-12T19:44:57.891809522Z" level=info msg="StartContainer for \"9f632c16da5651e03e8615afd5007b058c0e423718fcb86c6c7933c99a87011a\" returns successfully"
Feb 12 19:44:57.997447 env[1398]: time="2024-02-12T19:44:57.997374554Z" level=info msg="StartContainer for \"8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491\" returns successfully"
Feb 12 19:44:58.079944 env[1398]: time="2024-02-12T19:44:58.079882630Z" level=info msg="StartContainer for \"193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c\" returns successfully"
Feb 12 19:44:59.170501 kubelet[2163]: I0212 19:44:59.170478 2163 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:45:00.448902 kubelet[2163]: I0212 19:45:00.448864 2163 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:45:00.973965 kubelet[2163]: E0212 19:45:00.973854 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b335202eaefe2a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 41995818, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 41995818, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:01.027316 kubelet[2163]: E0212 19:45:01.027209 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b335202f41956f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 51602799, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 51602799, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:01.048097 kubelet[2163]: I0212 19:45:01.048048 2163 apiserver.go:52] "Watching apiserver"
Feb 12 19:45:01.058281 kubelet[2163]: I0212 19:45:01.058239 2163 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:45:01.086544 kubelet[2163]: I0212 19:45:01.086489 2163 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:45:01.094678 kubelet[2163]: E0212 19:45:01.094574 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a1f2a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153874218, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153874218, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:01.148825 kubelet[2163]: E0212 19:45:01.148722 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a4252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153883218, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153883218, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:01.202504 kubelet[2163]: E0212 19:45:01.202408 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a5896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153888918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153888918, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:01.259572 kubelet[2163]: E0212 19:45:01.259318 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a1f2a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153874218, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 158597106, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:01.320326 kubelet[2163]: E0212 19:45:01.320207 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a4252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153883218, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 158602707, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:01.376761 kubelet[2163]: E0212 19:45:01.376632 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a5896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153888918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 158606407, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:01.573847 kubelet[2163]: E0212 19:45:01.573650 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a1f2a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153874218, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 362768938, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:01.971734 kubelet[2163]: E0212 19:45:01.971605 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a4252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153883218, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 362773738, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:02.368166 kubelet[2163]: E0212 19:45:02.367965 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b33520355a5896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-2665495451 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 153888918, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 362778638, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:45:02.558570 kubelet[2163]: E0212 19:45:02.558467 2163 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2665495451.17b3352042cabafd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2665495451", UID:"ci-3510.3.2-a-2665495451", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 379357949, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 44, 56, 379357949, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 19:45:04.085638 systemd[1]: Reloading.
Feb 12 19:45:04.185583 /usr/lib/systemd/system-generators/torcx-generator[2496]: time="2024-02-12T19:45:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:45:04.185621 /usr/lib/systemd/system-generators/torcx-generator[2496]: time="2024-02-12T19:45:04Z" level=info msg="torcx already run"
Feb 12 19:45:04.278903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:45:04.278923 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:45:04.297308 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:45:04.440872 systemd[1]: Stopping kubelet.service...
Feb 12 19:45:04.458200 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 19:45:04.458647 systemd[1]: Stopped kubelet.service.
Feb 12 19:45:04.462194 systemd[1]: Started kubelet.service.
Feb 12 19:45:04.585443 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:45:04.585443 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:45:04.585443 kubelet[2567]: I0212 19:45:04.585385 2567 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:45:04.587672 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:45:04.587672 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:45:04.591069 kubelet[2567]: I0212 19:45:04.591023 2567 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 19:45:04.591069 kubelet[2567]: I0212 19:45:04.591053 2567 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:45:04.591464 kubelet[2567]: I0212 19:45:04.591444 2567 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 19:45:04.596633 kubelet[2567]: I0212 19:45:04.596607 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 19:45:04.598953 kubelet[2567]: I0212 19:45:04.598910 2567 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:45:04.603818 kubelet[2567]: I0212 19:45:04.603790 2567 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:45:04.604300 kubelet[2567]: I0212 19:45:04.604281 2567 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:45:04.604379 kubelet[2567]: I0212 19:45:04.604365 2567 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:45:04.604495 kubelet[2567]: I0212 19:45:04.604392 2567 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:45:04.604495 kubelet[2567]: I0212 19:45:04.604408 2567 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 19:45:04.604495 kubelet[2567]: I0212 19:45:04.604449 2567 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:45:04.607127 kubelet[2567]: I0212 19:45:04.607106 2567 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 19:45:04.607127 kubelet[2567]: I0212 19:45:04.607131 2567 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:45:04.607260 kubelet[2567]: I0212 19:45:04.607152 2567 kubelet.go:297] "Adding apiserver pod source"
Feb 12 19:45:04.607260 kubelet[2567]: I0212 19:45:04.607166 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:45:04.609126 kubelet[2567]: I0212 19:45:04.609108 2567 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:45:04.609789 kubelet[2567]: I0212 19:45:04.609773 2567 server.go:1186] "Started kubelet"
Feb 12 19:45:04.619723 kubelet[2567]: I0212 19:45:04.619689 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:45:04.623226 kubelet[2567]: I0212 19:45:04.623206 2567 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:45:04.624139 kubelet[2567]: I0212 19:45:04.624122 2567 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:45:04.628174 kubelet[2567]: I0212 19:45:04.628154 2567 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:45:04.631660 kubelet[2567]: I0212 19:45:04.631636 2567 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:45:04.636217 kubelet[2567]: E0212 19:45:04.635843 2567 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:45:04.636217 kubelet[2567]: E0212 19:45:04.635870 2567 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:45:04.677577 kubelet[2567]: I0212 19:45:04.677545 2567 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:45:04.704523 kubelet[2567]: I0212 19:45:04.704417 2567 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:45:04.704523 kubelet[2567]: I0212 19:45:04.704446 2567 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:45:04.704523 kubelet[2567]: I0212 19:45:04.704468 2567 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:45:04.706016 kubelet[2567]: E0212 19:45:04.705988 2567 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:45:04.732537 sudo[2617]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 19:45:04.733129 sudo[2617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 19:45:04.739895 kubelet[2567]: I0212 19:45:04.739324 2567 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.755764 kubelet[2567]: I0212 19:45:04.755732 2567 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.756039 kubelet[2567]: I0212 19:45:04.756025 2567 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.781986 kubelet[2567]: I0212 19:45:04.781954 2567 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:45:04.791277 kubelet[2567]: I0212 19:45:04.791231 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:45:04.791510 kubelet[2567]: I0212 19:45:04.791499 2567 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:45:04.791817 kubelet[2567]: I0212 19:45:04.791792 2567 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 19:45:04.791933 kubelet[2567]: I0212 19:45:04.791925 2567 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 19:45:04.791999 kubelet[2567]: I0212 19:45:04.791992 2567 policy_none.go:49] "None policy: Start"
Feb 12 19:45:04.798152 kubelet[2567]: I0212 19:45:04.798113 2567 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:45:04.798152 kubelet[2567]: I0212 19:45:04.798151 2567 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:45:04.798417 kubelet[2567]: I0212 19:45:04.798398 2567 state_mem.go:75] "Updated machine memory state"
Feb 12 19:45:04.799662 kubelet[2567]: I0212 19:45:04.799631 2567 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:45:04.805696 kubelet[2567]: I0212 19:45:04.802353 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:45:04.808656 kubelet[2567]: I0212 19:45:04.808635 2567 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:04.808897 kubelet[2567]: I0212 19:45:04.808884 2567 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:04.809014 kubelet[2567]: I0212 19:45:04.809004 2567 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:04.825090 kubelet[2567]: E0212 19:45:04.825063 2567 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-2665495451\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835338 kubelet[2567]: I0212 19:45:04.834264 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98be0c838f73a2dff0660c880a5a41f6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2665495451\" (UID: \"98be0c838f73a2dff0660c880a5a41f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835338 kubelet[2567]: I0212 19:45:04.834333 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98be0c838f73a2dff0660c880a5a41f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-2665495451\" (UID: \"98be0c838f73a2dff0660c880a5a41f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835338 kubelet[2567]: I0212 19:45:04.834386 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835338 kubelet[2567]: I0212 19:45:04.834420 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835338 kubelet[2567]: I0212 19:45:04.834475 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835802 kubelet[2567]: I0212 19:45:04.834507 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/686ac3ff44d5173182f64eb3a28de187-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-2665495451\" (UID: \"686ac3ff44d5173182f64eb3a28de187\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835802 kubelet[2567]: I0212 19:45:04.834548 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98be0c838f73a2dff0660c880a5a41f6-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2665495451\" (UID: \"98be0c838f73a2dff0660c880a5a41f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835802 kubelet[2567]: I0212 19:45:04.834579 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:45:04.835802 kubelet[2567]: I0212 19:45:04.834622 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32f4d462dc00b56029ba9ef332844ec3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-2665495451\" (UID: \"32f4d462dc00b56029ba9ef332844ec3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:45:05.338786 sudo[2617]: pam_unix(sudo:session): session closed for user root
Feb 12 19:45:05.614478 kubelet[2567]: I0212 19:45:05.613817 2567 apiserver.go:52] "Watching apiserver"
Feb 12 19:45:05.632324 kubelet[2567]: I0212 19:45:05.632279 2567 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 19:45:05.638448 kubelet[2567]: I0212 19:45:05.638407 2567 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:45:06.017382 kubelet[2567]: E0212 19:45:06.017344 2567 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-2665495451\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451"
Feb 12 19:45:06.217717 kubelet[2567]: E0212 19:45:06.217652 2567 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-2665495451\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451"
Feb 12 19:45:06.417983 kubelet[2567]: E0212 19:45:06.417870 2567 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-2665495451\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-2665495451"
Feb 12 19:45:06.627961 kubelet[2567]: I0212 19:45:06.627923 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2665495451" podStartSLOduration=2.627860228 pod.CreationTimestamp="2024-02-12 19:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:06.626923914 +0000 UTC m=+2.150662272" watchObservedRunningTime="2024-02-12 19:45:06.627860228 +0000 UTC m=+2.151598586"
Feb 12 19:45:07.317295 sudo[1740]: pam_unix(sudo:session): session closed for user root
Feb 12 19:45:07.412630 kubelet[2567]: I0212 19:45:07.412595 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2665495451" podStartSLOduration=6.412555878 pod.CreationTimestamp="2024-02-12 19:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:07.014736536 +0000 UTC m=+2.538474994" watchObservedRunningTime="2024-02-12 19:45:07.412555878 +0000 UTC m=+2.936294236"
Feb 12 19:45:07.417342 sshd[1736]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:07.420224 systemd[1]: sshd@4-10.200.8.31:22-10.200.12.6:46732.service: Deactivated successfully.
Feb 12 19:45:07.421126 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 19:45:07.422354 systemd-logind[1384]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:45:07.423349 systemd-logind[1384]: Removed session 7.
Feb 12 19:45:09.410452 kubelet[2567]: I0212 19:45:09.410416 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-2665495451" podStartSLOduration=5.410360725 pod.CreationTimestamp="2024-02-12 19:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:07.412904583 +0000 UTC m=+2.936642941" watchObservedRunningTime="2024-02-12 19:45:09.410360725 +0000 UTC m=+4.934099183"
Feb 12 19:45:17.255030 kubelet[2567]: I0212 19:45:17.255005 2567 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 19:45:17.256068 env[1398]: time="2024-02-12T19:45:17.256030494Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:45:17.256734 kubelet[2567]: I0212 19:45:17.256718 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 19:45:17.509885 kubelet[2567]: I0212 19:45:17.509765 2567 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:17.557834 kubelet[2567]: I0212 19:45:17.557800 2567 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:45:17.579399 kubelet[2567]: W0212 19:45:17.579361 2567 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-2665495451" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2665495451' and this object
Feb 12 19:45:17.579399 kubelet[2567]: E0212 19:45:17.579411 2567 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-2665495451" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2665495451' and this object
Feb 12 19:45:17.579677 kubelet[2567]: W0212 19:45:17.579506 2567 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-2665495451" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2665495451' and this object
Feb 12 19:45:17.579677 kubelet[2567]: E0212 19:45:17.579519 2567 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-2665495451" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2665495451' and this object
Feb 12 19:45:17.579677 kubelet[2567]: W0212 19:45:17.579558 2567 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-2665495451" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2665495451' and this object
Feb 12 19:45:17.579677 kubelet[2567]: E0212 19:45:17.579569 2567 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-2665495451" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2665495451' and this object
Feb 12 19:45:17.608642 kubelet[2567]: I0212 19:45:17.608605 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-lib-modules\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.608855 kubelet[2567]: I0212 19:45:17.608662 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/822e1966-c2d6-46d2-a483-9e774a6be580-clustermesh-secrets\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.608855 kubelet[2567]: I0212 19:45:17.608691 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llqjx\" (UniqueName: \"kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-kube-api-access-llqjx\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.608855 kubelet[2567]: I0212 19:45:17.608728 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-cgroup\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.608855 kubelet[2567]: I0212 19:45:17.608755 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-kernel\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.608855 kubelet[2567]: I0212 19:45:17.608778 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ebb48567-f005-45da-8a5a-d3259e951ddc-kube-proxy\") pod \"kube-proxy-tlfqt\" (UID: \"ebb48567-f005-45da-8a5a-d3259e951ddc\") " pod="kube-system/kube-proxy-tlfqt"
Feb 12 19:45:17.609138 kubelet[2567]: I0212 19:45:17.608804 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebb48567-f005-45da-8a5a-d3259e951ddc-xtables-lock\") pod \"kube-proxy-tlfqt\" (UID: \"ebb48567-f005-45da-8a5a-d3259e951ddc\") " pod="kube-system/kube-proxy-tlfqt"
Feb 12 19:45:17.609138 kubelet[2567]: I0212 19:45:17.608827 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-bpf-maps\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.611236 kubelet[2567]: I0212 19:45:17.611211 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-net\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.611544 kubelet[2567]: I0212 19:45:17.611436 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-hubble-tls\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.612146 kubelet[2567]: I0212 19:45:17.612125 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-run\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.613834 kubelet[2567]: I0212 19:45:17.613815 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cni-path\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.614148 kubelet[2567]: I0212 19:45:17.614128 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-xtables-lock\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.614243 kubelet[2567]: I0212 19:45:17.614198 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebb48567-f005-45da-8a5a-d3259e951ddc-lib-modules\") pod \"kube-proxy-tlfqt\" (UID: \"ebb48567-f005-45da-8a5a-d3259e951ddc\") " pod="kube-system/kube-proxy-tlfqt"
Feb 12 19:45:17.614296 kubelet[2567]: I0212 19:45:17.614244 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-hostproc\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.614296 kubelet[2567]: I0212 19:45:17.614282 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-config-path\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.614432 kubelet[2567]: I0212 19:45:17.614411 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctb69\" (UniqueName: \"kubernetes.io/projected/ebb48567-f005-45da-8a5a-d3259e951ddc-kube-api-access-ctb69\") pod \"kube-proxy-tlfqt\" (UID: \"ebb48567-f005-45da-8a5a-d3259e951ddc\") " pod="kube-system/kube-proxy-tlfqt"
Feb 12 19:45:17.614503 kubelet[2567]: I0212 19:45:17.614470 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-etc-cni-netd\") pod \"cilium-xrn7p\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") " pod="kube-system/cilium-xrn7p"
Feb 12 19:45:17.723902 kubelet[2567]: E0212 19:45:17.723867 2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 12 19:45:17.723902 kubelet[2567]: E0212 19:45:17.723896 2567 projected.go:198] Error preparing data for projected volume kube-api-access-ctb69 for pod kube-system/kube-proxy-tlfqt: configmap "kube-root-ca.crt" not found
Feb 12 19:45:17.724144 kubelet[2567]: E0212 19:45:17.723972 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebb48567-f005-45da-8a5a-d3259e951ddc-kube-api-access-ctb69 podName:ebb48567-f005-45da-8a5a-d3259e951ddc nodeName:}" failed. No retries permitted until 2024-02-12 19:45:18.223950427 +0000 UTC m=+13.747688885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ctb69" (UniqueName: "kubernetes.io/projected/ebb48567-f005-45da-8a5a-d3259e951ddc-kube-api-access-ctb69") pod "kube-proxy-tlfqt" (UID: "ebb48567-f005-45da-8a5a-d3259e951ddc") : configmap "kube-root-ca.crt" not found
Feb 12 19:45:17.724943 kubelet[2567]: E0212 19:45:17.724920 2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 12 19:45:17.724943 kubelet[2567]: E0212 19:45:17.724944 2567 projected.go:198] Error preparing data for projected volume kube-api-access-llqjx for pod kube-system/cilium-xrn7p: configmap "kube-root-ca.crt" not found
Feb 12 19:45:17.725095 kubelet[2567]: E0212 19:45:17.724994 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-kube-api-access-llqjx podName:822e1966-c2d6-46d2-a483-9e774a6be580 nodeName:}" failed. No retries permitted until 2024-02-12 19:45:18.224969739 +0000 UTC m=+13.748708097 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-llqjx" (UniqueName: "kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-kube-api-access-llqjx") pod "cilium-xrn7p" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580") : configmap "kube-root-ca.crt" not found Feb 12 19:45:18.285557 kubelet[2567]: I0212 19:45:18.285522 2567 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:18.321099 kubelet[2567]: I0212 19:45:18.321057 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5rwc\" (UniqueName: \"kubernetes.io/projected/5bdee7ad-e4a2-4685-8608-60cec7b7b943-kube-api-access-f5rwc\") pod \"cilium-operator-f59cbd8c6-khwwb\" (UID: \"5bdee7ad-e4a2-4685-8608-60cec7b7b943\") " pod="kube-system/cilium-operator-f59cbd8c6-khwwb" Feb 12 19:45:18.321320 kubelet[2567]: I0212 19:45:18.321176 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bdee7ad-e4a2-4685-8608-60cec7b7b943-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-khwwb\" (UID: \"5bdee7ad-e4a2-4685-8608-60cec7b7b943\") " pod="kube-system/cilium-operator-f59cbd8c6-khwwb" Feb 12 19:45:18.415411 env[1398]: time="2024-02-12T19:45:18.415362959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlfqt,Uid:ebb48567-f005-45da-8a5a-d3259e951ddc,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:18.469103 env[1398]: time="2024-02-12T19:45:18.469036670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:18.469268 env[1398]: time="2024-02-12T19:45:18.469114371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:18.469268 env[1398]: time="2024-02-12T19:45:18.469143671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:18.469486 env[1398]: time="2024-02-12T19:45:18.469438374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/588ed476f396f6227c6ca2ae850e99ae89ccb4916b120fd086744dc7795c9dfe pid=2669 runtime=io.containerd.runc.v2 Feb 12 19:45:18.510509 env[1398]: time="2024-02-12T19:45:18.510456341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlfqt,Uid:ebb48567-f005-45da-8a5a-d3259e951ddc,Namespace:kube-system,Attempt:0,} returns sandbox id \"588ed476f396f6227c6ca2ae850e99ae89ccb4916b120fd086744dc7795c9dfe\"" Feb 12 19:45:18.513259 env[1398]: time="2024-02-12T19:45:18.513219672Z" level=info msg="CreateContainer within sandbox \"588ed476f396f6227c6ca2ae850e99ae89ccb4916b120fd086744dc7795c9dfe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:45:18.571808 env[1398]: time="2024-02-12T19:45:18.571679537Z" level=info msg="CreateContainer within sandbox \"588ed476f396f6227c6ca2ae850e99ae89ccb4916b120fd086744dc7795c9dfe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46d7c62dee442d04da7961f2356e54fa15305be7610a0b70b4e27351ec2e2c24\"" Feb 12 19:45:18.574392 env[1398]: time="2024-02-12T19:45:18.572932552Z" level=info msg="StartContainer for \"46d7c62dee442d04da7961f2356e54fa15305be7610a0b70b4e27351ec2e2c24\"" Feb 12 19:45:18.634183 env[1398]: time="2024-02-12T19:45:18.634121348Z" level=info msg="StartContainer for \"46d7c62dee442d04da7961f2356e54fa15305be7610a0b70b4e27351ec2e2c24\" returns successfully" Feb 12 19:45:18.717164 kubelet[2567]: E0212 19:45:18.717128 2567 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for 
the condition Feb 12 19:45:18.717368 kubelet[2567]: E0212 19:45:18.717231 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822e1966-c2d6-46d2-a483-9e774a6be580-clustermesh-secrets podName:822e1966-c2d6-46d2-a483-9e774a6be580 nodeName:}" failed. No retries permitted until 2024-02-12 19:45:19.217209893 +0000 UTC m=+14.740948351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/822e1966-c2d6-46d2-a483-9e774a6be580-clustermesh-secrets") pod "cilium-xrn7p" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580") : failed to sync secret cache: timed out waiting for the condition Feb 12 19:45:18.717483 kubelet[2567]: E0212 19:45:18.717127 2567 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:45:18.717535 kubelet[2567]: E0212 19:45:18.717503 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-config-path podName:822e1966-c2d6-46d2-a483-9e774a6be580 nodeName:}" failed. No retries permitted until 2024-02-12 19:45:19.217489496 +0000 UTC m=+14.741227854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-config-path") pod "cilium-xrn7p" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:45:19.117549 kubelet[2567]: I0212 19:45:19.117506 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tlfqt" podStartSLOduration=2.11746342 pod.CreationTimestamp="2024-02-12 19:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:19.116986715 +0000 UTC m=+14.640725073" watchObservedRunningTime="2024-02-12 19:45:19.11746342 +0000 UTC m=+14.641201778" Feb 12 19:45:19.362542 env[1398]: time="2024-02-12T19:45:19.362489951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xrn7p,Uid:822e1966-c2d6-46d2-a483-9e774a6be580,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:19.413105 env[1398]: time="2024-02-12T19:45:19.412955814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:19.413105 env[1398]: time="2024-02-12T19:45:19.412995014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:19.413105 env[1398]: time="2024-02-12T19:45:19.413010014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:19.413618 env[1398]: time="2024-02-12T19:45:19.413569121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857 pid=2853 runtime=io.containerd.runc.v2 Feb 12 19:45:19.457290 env[1398]: time="2024-02-12T19:45:19.457234907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xrn7p,Uid:822e1966-c2d6-46d2-a483-9e774a6be580,Namespace:kube-system,Attempt:0,} returns sandbox id \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\"" Feb 12 19:45:19.459789 env[1398]: time="2024-02-12T19:45:19.459756236Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:45:19.491296 env[1398]: time="2024-02-12T19:45:19.491255987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-khwwb,Uid:5bdee7ad-e4a2-4685-8608-60cec7b7b943,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:19.541182 env[1398]: time="2024-02-12T19:45:19.541117443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:19.541398 env[1398]: time="2024-02-12T19:45:19.541159243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:19.541398 env[1398]: time="2024-02-12T19:45:19.541172943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:19.541398 env[1398]: time="2024-02-12T19:45:19.541303245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f pid=2899 runtime=io.containerd.runc.v2 Feb 12 19:45:19.593541 env[1398]: time="2024-02-12T19:45:19.593495826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-khwwb,Uid:5bdee7ad-e4a2-4685-8608-60cec7b7b943,Namespace:kube-system,Attempt:0,} returns sandbox id \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\"" Feb 12 19:45:25.108332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861174368.mount: Deactivated successfully. Feb 12 19:45:27.894855 env[1398]: time="2024-02-12T19:45:27.894797622Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.902699 env[1398]: time="2024-02-12T19:45:27.902592597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.909604 env[1398]: time="2024-02-12T19:45:27.909560764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:27.910278 env[1398]: time="2024-02-12T19:45:27.910234070Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 
19:45:27.912145 env[1398]: time="2024-02-12T19:45:27.912109988Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:45:27.913559 env[1398]: time="2024-02-12T19:45:27.913132898Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:45:27.959198 env[1398]: time="2024-02-12T19:45:27.959141737Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\"" Feb 12 19:45:27.960736 env[1398]: time="2024-02-12T19:45:27.959937945Z" level=info msg="StartContainer for \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\"" Feb 12 19:45:28.029744 env[1398]: time="2024-02-12T19:45:28.028387395Z" level=info msg="StartContainer for \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\" returns successfully" Feb 12 19:45:28.946108 systemd[1]: run-containerd-runc-k8s.io-4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35-runc.WKrJEH.mount: Deactivated successfully. Feb 12 19:45:28.946300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35-rootfs.mount: Deactivated successfully. 
Feb 12 19:45:32.308586 env[1398]: time="2024-02-12T19:45:32.308525901Z" level=info msg="shim disconnected" id=4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35 Feb 12 19:45:32.308586 env[1398]: time="2024-02-12T19:45:32.308585502Z" level=warning msg="cleaning up after shim disconnected" id=4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35 namespace=k8s.io Feb 12 19:45:32.309212 env[1398]: time="2024-02-12T19:45:32.308597802Z" level=info msg="cleaning up dead shim" Feb 12 19:45:32.316437 env[1398]: time="2024-02-12T19:45:32.316394070Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2981 runtime=io.containerd.runc.v2\n" Feb 12 19:45:32.811843 env[1398]: time="2024-02-12T19:45:32.811788206Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:45:32.843380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885399002.mount: Deactivated successfully. Feb 12 19:45:32.865983 env[1398]: time="2024-02-12T19:45:32.865847079Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\"" Feb 12 19:45:32.869504 env[1398]: time="2024-02-12T19:45:32.869457511Z" level=info msg="StartContainer for \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\"" Feb 12 19:45:32.948955 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:45:32.949333 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:45:32.949522 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:45:32.952838 systemd[1]: Starting systemd-sysctl.service... 
Feb 12 19:45:32.957348 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:45:32.970343 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:45:33.009501 env[1398]: time="2024-02-12T19:45:33.009450935Z" level=info msg="StartContainer for \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\" returns successfully" Feb 12 19:45:33.050864 env[1398]: time="2024-02-12T19:45:33.050612589Z" level=info msg="shim disconnected" id=d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c Feb 12 19:45:33.050864 env[1398]: time="2024-02-12T19:45:33.050665789Z" level=warning msg="cleaning up after shim disconnected" id=d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c namespace=k8s.io Feb 12 19:45:33.050864 env[1398]: time="2024-02-12T19:45:33.050681890Z" level=info msg="cleaning up dead shim" Feb 12 19:45:33.058492 env[1398]: time="2024-02-12T19:45:33.058440156Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\n" Feb 12 19:45:33.788745 env[1398]: time="2024-02-12T19:45:33.784053901Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:45:33.839511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c-rootfs.mount: Deactivated successfully. 
Feb 12 19:45:33.895553 env[1398]: time="2024-02-12T19:45:33.895495060Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\"" Feb 12 19:45:33.897188 env[1398]: time="2024-02-12T19:45:33.896087365Z" level=info msg="StartContainer for \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\"" Feb 12 19:45:33.907351 env[1398]: time="2024-02-12T19:45:33.907315162Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.923068 env[1398]: time="2024-02-12T19:45:33.923031397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.935758 env[1398]: time="2024-02-12T19:45:33.935680406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:33.936538 env[1398]: time="2024-02-12T19:45:33.936488113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 19:45:33.942911 env[1398]: time="2024-02-12T19:45:33.942872668Z" level=info msg="CreateContainer within sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 
19:45:33.988840 env[1398]: time="2024-02-12T19:45:33.988789163Z" level=info msg="CreateContainer within sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\"" Feb 12 19:45:33.991075 env[1398]: time="2024-02-12T19:45:33.989525369Z" level=info msg="StartContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\"" Feb 12 19:45:33.991376 env[1398]: time="2024-02-12T19:45:33.991347485Z" level=info msg="StartContainer for \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\" returns successfully" Feb 12 19:45:34.458569 env[1398]: time="2024-02-12T19:45:34.458520840Z" level=info msg="shim disconnected" id=373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09 Feb 12 19:45:34.458827 env[1398]: time="2024-02-12T19:45:34.458790443Z" level=warning msg="cleaning up after shim disconnected" id=373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09 namespace=k8s.io Feb 12 19:45:34.458827 env[1398]: time="2024-02-12T19:45:34.458811843Z" level=info msg="cleaning up dead shim" Feb 12 19:45:34.472236 env[1398]: time="2024-02-12T19:45:34.472193956Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3129 runtime=io.containerd.runc.v2\n" Feb 12 19:45:34.490675 env[1398]: time="2024-02-12T19:45:34.490620512Z" level=info msg="StartContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" returns successfully" Feb 12 19:45:34.788346 env[1398]: time="2024-02-12T19:45:34.787872328Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:45:34.844218 systemd[1]: 
run-containerd-runc-k8s.io-373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09-runc.ctaTqq.mount: Deactivated successfully. Feb 12 19:45:34.844403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09-rootfs.mount: Deactivated successfully. Feb 12 19:45:34.869187 env[1398]: time="2024-02-12T19:45:34.869027015Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\"" Feb 12 19:45:34.872969 env[1398]: time="2024-02-12T19:45:34.870259725Z" level=info msg="StartContainer for \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\"" Feb 12 19:45:35.056036 env[1398]: time="2024-02-12T19:45:35.055941689Z" level=info msg="StartContainer for \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\" returns successfully" Feb 12 19:45:35.088819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9-rootfs.mount: Deactivated successfully. 
Feb 12 19:45:35.103403 env[1398]: time="2024-02-12T19:45:35.103352584Z" level=info msg="shim disconnected" id=d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9 Feb 12 19:45:35.103661 env[1398]: time="2024-02-12T19:45:35.103638286Z" level=warning msg="cleaning up after shim disconnected" id=d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9 namespace=k8s.io Feb 12 19:45:35.103783 env[1398]: time="2024-02-12T19:45:35.103765287Z" level=info msg="cleaning up dead shim" Feb 12 19:45:35.129801 env[1398]: time="2024-02-12T19:45:35.129746904Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3198 runtime=io.containerd.runc.v2\n" Feb 12 19:45:35.803742 env[1398]: time="2024-02-12T19:45:35.803620215Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:45:35.816532 kubelet[2567]: I0212 19:45:35.816500 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-khwwb" podStartSLOduration=-9.22337201903834e+09 pod.CreationTimestamp="2024-02-12 19:45:18 +0000 UTC" firstStartedPulling="2024-02-12 19:45:19.594617039 +0000 UTC m=+15.118355497" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:34.879976307 +0000 UTC m=+30.403714665" watchObservedRunningTime="2024-02-12 19:45:35.816436421 +0000 UTC m=+31.340174879" Feb 12 19:45:35.853175 env[1398]: time="2024-02-12T19:45:35.853032926Z" level=info msg="CreateContainer within sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\"" Feb 12 19:45:35.855765 env[1398]: time="2024-02-12T19:45:35.855255845Z" level=info msg="StartContainer for 
\"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\"" Feb 12 19:45:35.922038 env[1398]: time="2024-02-12T19:45:35.921978100Z" level=info msg="StartContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" returns successfully" Feb 12 19:45:35.947819 systemd[1]: run-containerd-runc-k8s.io-2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472-runc.8srv47.mount: Deactivated successfully. Feb 12 19:45:36.102014 kubelet[2567]: I0212 19:45:36.101917 2567 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:45:36.128042 kubelet[2567]: I0212 19:45:36.127997 2567 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:36.133251 kubelet[2567]: I0212 19:45:36.133220 2567 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:36.155002 kubelet[2567]: I0212 19:45:36.154970 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnnd\" (UniqueName: \"kubernetes.io/projected/65e3a002-0ad6-4143-97a3-b5111387a40e-kube-api-access-zqnnd\") pod \"coredns-787d4945fb-j4cbs\" (UID: \"65e3a002-0ad6-4143-97a3-b5111387a40e\") " pod="kube-system/coredns-787d4945fb-j4cbs" Feb 12 19:45:36.155281 kubelet[2567]: I0212 19:45:36.155267 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65e3a002-0ad6-4143-97a3-b5111387a40e-config-volume\") pod \"coredns-787d4945fb-j4cbs\" (UID: \"65e3a002-0ad6-4143-97a3-b5111387a40e\") " pod="kube-system/coredns-787d4945fb-j4cbs" Feb 12 19:45:36.155400 kubelet[2567]: I0212 19:45:36.155390 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/680f93d5-16fb-45be-b309-809700c1b476-config-volume\") pod \"coredns-787d4945fb-smwgk\" (UID: \"680f93d5-16fb-45be-b309-809700c1b476\") " 
pod="kube-system/coredns-787d4945fb-smwgk" Feb 12 19:45:36.155498 kubelet[2567]: I0212 19:45:36.155490 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ws9f\" (UniqueName: \"kubernetes.io/projected/680f93d5-16fb-45be-b309-809700c1b476-kube-api-access-2ws9f\") pod \"coredns-787d4945fb-smwgk\" (UID: \"680f93d5-16fb-45be-b309-809700c1b476\") " pod="kube-system/coredns-787d4945fb-smwgk" Feb 12 19:45:36.431530 env[1398]: time="2024-02-12T19:45:36.431482085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j4cbs,Uid:65e3a002-0ad6-4143-97a3-b5111387a40e,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:36.436756 env[1398]: time="2024-02-12T19:45:36.436673628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-smwgk,Uid:680f93d5-16fb-45be-b309-809700c1b476,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:38.340298 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:45:38.340474 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:45:38.347182 systemd-networkd[1576]: cilium_host: Link UP Feb 12 19:45:38.348660 systemd-networkd[1576]: cilium_net: Link UP Feb 12 19:45:38.348918 systemd-networkd[1576]: cilium_net: Gained carrier Feb 12 19:45:38.349146 systemd-networkd[1576]: cilium_host: Gained carrier Feb 12 19:45:38.356582 systemd-networkd[1576]: cilium_host: Gained IPv6LL Feb 12 19:45:38.466290 systemd-networkd[1576]: cilium_vxlan: Link UP Feb 12 19:45:38.466299 systemd-networkd[1576]: cilium_vxlan: Gained carrier Feb 12 19:45:38.735740 kernel: NET: Registered PF_ALG protocol family Feb 12 19:45:39.221835 systemd-networkd[1576]: cilium_net: Gained IPv6LL Feb 12 19:45:39.514312 systemd-networkd[1576]: lxc_health: Link UP Feb 12 19:45:39.523025 systemd-networkd[1576]: lxc_health: Gained carrier Feb 12 19:45:39.523745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 
19:45:39.989855 systemd-networkd[1576]: cilium_vxlan: Gained IPv6LL
Feb 12 19:45:40.041362 systemd-networkd[1576]: lxca580a0f22472: Link UP
Feb 12 19:45:40.046793 kernel: eth0: renamed from tmpc6650
Feb 12 19:45:40.055897 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca580a0f22472: link becomes ready
Feb 12 19:45:40.055132 systemd-networkd[1576]: lxca580a0f22472: Gained carrier
Feb 12 19:45:40.065187 systemd-networkd[1576]: lxc391c1601a4be: Link UP
Feb 12 19:45:40.072868 kernel: eth0: renamed from tmp83e8e
Feb 12 19:45:40.099485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc391c1601a4be: link becomes ready
Feb 12 19:45:40.100198 systemd-networkd[1576]: lxc391c1601a4be: Gained carrier
Feb 12 19:45:40.885887 systemd-networkd[1576]: lxc_health: Gained IPv6LL
Feb 12 19:45:41.384228 kubelet[2567]: I0212 19:45:41.384188 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xrn7p" podStartSLOduration=-9.22337201247063e+09 pod.CreationTimestamp="2024-02-12 19:45:17 +0000 UTC" firstStartedPulling="2024-02-12 19:45:19.458962827 +0000 UTC m=+14.982701185" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:36.818627357 +0000 UTC m=+32.342365715" watchObservedRunningTime="2024-02-12 19:45:41.384144881 +0000 UTC m=+36.907883239"
Feb 12 19:45:41.461860 systemd-networkd[1576]: lxca580a0f22472: Gained IPv6LL
Feb 12 19:45:41.653882 systemd-networkd[1576]: lxc391c1601a4be: Gained IPv6LL
Feb 12 19:45:43.874449 env[1398]: time="2024-02-12T19:45:43.874312975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:43.874449 env[1398]: time="2024-02-12T19:45:43.874359875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:43.874449 env[1398]: time="2024-02-12T19:45:43.874374475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:43.877267 env[1398]: time="2024-02-12T19:45:43.875258682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6650a0badbeb5dca5c7141c45cc6b2056f36083f29940f5a3bb03b6f619f1ca pid=3743 runtime=io.containerd.runc.v2
Feb 12 19:45:43.917275 env[1398]: time="2024-02-12T19:45:43.917178291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:43.917275 env[1398]: time="2024-02-12T19:45:43.917230591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:43.917541 env[1398]: time="2024-02-12T19:45:43.917259391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:43.917541 env[1398]: time="2024-02-12T19:45:43.917450193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83e8e91b2a7e5e62227b826f1a5337867eb1cf8c77ff1a4e1728b36b55a9ebf8 pid=3765 runtime=io.containerd.runc.v2
Feb 12 19:45:43.994550 env[1398]: time="2024-02-12T19:45:43.994499061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j4cbs,Uid:65e3a002-0ad6-4143-97a3-b5111387a40e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6650a0badbeb5dca5c7141c45cc6b2056f36083f29940f5a3bb03b6f619f1ca\""
Feb 12 19:45:43.999570 env[1398]: time="2024-02-12T19:45:43.999528598Z" level=info msg="CreateContainer within sandbox \"c6650a0badbeb5dca5c7141c45cc6b2056f36083f29940f5a3bb03b6f619f1ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:45:44.044074 env[1398]: time="2024-02-12T19:45:44.044023021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-smwgk,Uid:680f93d5-16fb-45be-b309-809700c1b476,Namespace:kube-system,Attempt:0,} returns sandbox id \"83e8e91b2a7e5e62227b826f1a5337867eb1cf8c77ff1a4e1728b36b55a9ebf8\""
Feb 12 19:45:44.053577 env[1398]: time="2024-02-12T19:45:44.053532690Z" level=info msg="CreateContainer within sandbox \"83e8e91b2a7e5e62227b826f1a5337867eb1cf8c77ff1a4e1728b36b55a9ebf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:45:44.062733 env[1398]: time="2024-02-12T19:45:44.058152424Z" level=info msg="CreateContainer within sandbox \"c6650a0badbeb5dca5c7141c45cc6b2056f36083f29940f5a3bb03b6f619f1ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7f43547bc9811854976ee0b9b0b5aa300f36175698c026d5e224f519abddbb8\""
Feb 12 19:45:44.062733 env[1398]: time="2024-02-12T19:45:44.058831429Z" level=info msg="StartContainer for \"a7f43547bc9811854976ee0b9b0b5aa300f36175698c026d5e224f519abddbb8\""
Feb 12 19:45:44.096514 env[1398]: time="2024-02-12T19:45:44.096455602Z" level=info msg="CreateContainer within sandbox \"83e8e91b2a7e5e62227b826f1a5337867eb1cf8c77ff1a4e1728b36b55a9ebf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1dae68610bdb9a32c5ef9e2b98b166f97fd9f7823c51e435b1aa518f715d5313\""
Feb 12 19:45:44.099945 env[1398]: time="2024-02-12T19:45:44.099897827Z" level=info msg="StartContainer for \"1dae68610bdb9a32c5ef9e2b98b166f97fd9f7823c51e435b1aa518f715d5313\""
Feb 12 19:45:44.165231 env[1398]: time="2024-02-12T19:45:44.163420389Z" level=info msg="StartContainer for \"a7f43547bc9811854976ee0b9b0b5aa300f36175698c026d5e224f519abddbb8\" returns successfully"
Feb 12 19:45:44.224730 env[1398]: time="2024-02-12T19:45:44.219804298Z" level=info msg="StartContainer for \"1dae68610bdb9a32c5ef9e2b98b166f97fd9f7823c51e435b1aa518f715d5313\" returns successfully"
Feb 12 19:45:44.850647 kubelet[2567]: I0212 19:45:44.850619 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-smwgk" podStartSLOduration=26.850563282 pod.CreationTimestamp="2024-02-12 19:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:44.848562668 +0000 UTC m=+40.372301126" watchObservedRunningTime="2024-02-12 19:45:44.850563282 +0000 UTC m=+40.374301640"
Feb 12 19:45:44.885303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870129890.mount: Deactivated successfully.
Feb 12 19:45:44.916200 kubelet[2567]: I0212 19:45:44.916165 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-j4cbs" podStartSLOduration=26.916113659 pod.CreationTimestamp="2024-02-12 19:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:44.90528808 +0000 UTC m=+40.429026438" watchObservedRunningTime="2024-02-12 19:45:44.916113659 +0000 UTC m=+40.439852017"
Feb 12 19:48:12.898133 systemd[1]: Started sshd@5-10.200.8.31:22-10.200.12.6:52424.service.
Feb 12 19:48:13.513997 sshd[3962]: Accepted publickey for core from 10.200.12.6 port 52424 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:13.515559 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:13.519757 systemd-logind[1384]: New session 8 of user core.
Feb 12 19:48:13.520803 systemd[1]: Started session-8.scope.
Feb 12 19:48:14.230212 sshd[3962]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:14.233177 systemd[1]: sshd@5-10.200.8.31:22-10.200.12.6:52424.service: Deactivated successfully.
Feb 12 19:48:14.235170 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 19:48:14.236109 systemd-logind[1384]: Session 8 logged out. Waiting for processes to exit.
Feb 12 19:48:14.237328 systemd-logind[1384]: Removed session 8.
Feb 12 19:48:19.335439 systemd[1]: Started sshd@6-10.200.8.31:22-10.200.12.6:45920.service.
Feb 12 19:48:19.957527 sshd[4000]: Accepted publickey for core from 10.200.12.6 port 45920 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:19.959125 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:19.963623 systemd-logind[1384]: New session 9 of user core.
Feb 12 19:48:19.964240 systemd[1]: Started session-9.scope.
Feb 12 19:48:20.448024 sshd[4000]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:20.451544 systemd[1]: sshd@6-10.200.8.31:22-10.200.12.6:45920.service: Deactivated successfully.
Feb 12 19:48:20.454282 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 19:48:20.455316 systemd-logind[1384]: Session 9 logged out. Waiting for processes to exit.
Feb 12 19:48:20.457070 systemd-logind[1384]: Removed session 9.
Feb 12 19:48:25.560166 systemd[1]: Started sshd@7-10.200.8.31:22-10.200.12.6:45936.service.
Feb 12 19:48:26.180101 sshd[4015]: Accepted publickey for core from 10.200.12.6 port 45936 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:26.181903 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:26.188796 systemd[1]: Started session-10.scope.
Feb 12 19:48:26.190464 systemd-logind[1384]: New session 10 of user core.
Feb 12 19:48:26.672161 sshd[4015]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:26.675058 systemd[1]: sshd@7-10.200.8.31:22-10.200.12.6:45936.service: Deactivated successfully.
Feb 12 19:48:26.677436 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 19:48:26.677945 systemd-logind[1384]: Session 10 logged out. Waiting for processes to exit.
Feb 12 19:48:26.679443 systemd-logind[1384]: Removed session 10.
Feb 12 19:48:31.775262 systemd[1]: Started sshd@8-10.200.8.31:22-10.200.12.6:49776.service.
Feb 12 19:48:32.396811 sshd[4032]: Accepted publickey for core from 10.200.12.6 port 49776 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:32.398279 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:32.403451 systemd-logind[1384]: New session 11 of user core.
Feb 12 19:48:32.404424 systemd[1]: Started session-11.scope.
Feb 12 19:48:32.898694 sshd[4032]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:32.902003 systemd[1]: sshd@8-10.200.8.31:22-10.200.12.6:49776.service: Deactivated successfully.
Feb 12 19:48:32.902846 systemd-logind[1384]: Session 11 logged out. Waiting for processes to exit.
Feb 12 19:48:32.903051 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:48:32.904116 systemd-logind[1384]: Removed session 11.
Feb 12 19:48:38.001523 systemd[1]: Started sshd@9-10.200.8.31:22-10.200.12.6:53306.service.
Feb 12 19:48:38.616677 sshd[4050]: Accepted publickey for core from 10.200.12.6 port 53306 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:38.618131 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:38.623026 systemd-logind[1384]: New session 12 of user core.
Feb 12 19:48:38.623689 systemd[1]: Started session-12.scope.
Feb 12 19:48:39.115237 sshd[4050]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:39.118873 systemd[1]: sshd@9-10.200.8.31:22-10.200.12.6:53306.service: Deactivated successfully.
Feb 12 19:48:39.121100 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 19:48:39.121836 systemd-logind[1384]: Session 12 logged out. Waiting for processes to exit.
Feb 12 19:48:39.123340 systemd-logind[1384]: Removed session 12.
Feb 12 19:48:39.220538 systemd[1]: Started sshd@10-10.200.8.31:22-10.200.12.6:53318.service.
Feb 12 19:48:39.838618 sshd[4064]: Accepted publickey for core from 10.200.12.6 port 53318 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:39.840736 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:39.847103 systemd[1]: Started session-13.scope.
Feb 12 19:48:39.848762 systemd-logind[1384]: New session 13 of user core.
Feb 12 19:48:41.038506 sshd[4064]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:41.042021 systemd[1]: sshd@10-10.200.8.31:22-10.200.12.6:53318.service: Deactivated successfully.
Feb 12 19:48:41.043398 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 19:48:41.043430 systemd-logind[1384]: Session 13 logged out. Waiting for processes to exit.
Feb 12 19:48:41.045160 systemd-logind[1384]: Removed session 13.
Feb 12 19:48:41.142046 systemd[1]: Started sshd@11-10.200.8.31:22-10.200.12.6:53324.service.
Feb 12 19:48:41.770260 sshd[4075]: Accepted publickey for core from 10.200.12.6 port 53324 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:41.771796 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:41.777028 systemd[1]: Started session-14.scope.
Feb 12 19:48:41.777775 systemd-logind[1384]: New session 14 of user core.
Feb 12 19:48:42.260545 sshd[4075]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:42.263986 systemd-logind[1384]: Session 14 logged out. Waiting for processes to exit.
Feb 12 19:48:42.264170 systemd[1]: sshd@11-10.200.8.31:22-10.200.12.6:53324.service: Deactivated successfully.
Feb 12 19:48:42.265368 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 19:48:42.267075 systemd-logind[1384]: Removed session 14.
Feb 12 19:48:47.362993 systemd[1]: Started sshd@12-10.200.8.31:22-10.200.12.6:46920.service.
Feb 12 19:48:47.991596 sshd[4088]: Accepted publickey for core from 10.200.12.6 port 46920 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:47.993076 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:47.998200 systemd[1]: Started session-15.scope.
Feb 12 19:48:47.998454 systemd-logind[1384]: New session 15 of user core.
Feb 12 19:48:48.486586 sshd[4088]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:48.489842 systemd[1]: sshd@12-10.200.8.31:22-10.200.12.6:46920.service: Deactivated successfully.
Feb 12 19:48:48.492091 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 19:48:48.492996 systemd-logind[1384]: Session 15 logged out. Waiting for processes to exit.
Feb 12 19:48:48.494210 systemd-logind[1384]: Removed session 15.
Feb 12 19:48:53.593739 systemd[1]: Started sshd@13-10.200.8.31:22-10.200.12.6:46934.service.
Feb 12 19:48:54.221429 sshd[4103]: Accepted publickey for core from 10.200.12.6 port 46934 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:54.223038 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:54.228081 systemd[1]: Started session-16.scope.
Feb 12 19:48:54.228541 systemd-logind[1384]: New session 16 of user core.
Feb 12 19:48:54.718796 sshd[4103]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:54.721726 systemd[1]: sshd@13-10.200.8.31:22-10.200.12.6:46934.service: Deactivated successfully.
Feb 12 19:48:54.723112 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 19:48:54.723129 systemd-logind[1384]: Session 16 logged out. Waiting for processes to exit.
Feb 12 19:48:54.724472 systemd-logind[1384]: Removed session 16.
Feb 12 19:48:54.822806 systemd[1]: Started sshd@14-10.200.8.31:22-10.200.12.6:46942.service.
Feb 12 19:48:55.447873 sshd[4116]: Accepted publickey for core from 10.200.12.6 port 46942 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:55.449441 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:55.454790 systemd[1]: Started session-17.scope.
Feb 12 19:48:55.455715 systemd-logind[1384]: New session 17 of user core.
Feb 12 19:48:56.014584 sshd[4116]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:56.018047 systemd[1]: sshd@14-10.200.8.31:22-10.200.12.6:46942.service: Deactivated successfully.
Feb 12 19:48:56.019928 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 19:48:56.020555 systemd-logind[1384]: Session 17 logged out. Waiting for processes to exit.
Feb 12 19:48:56.021567 systemd-logind[1384]: Removed session 17.
Feb 12 19:48:56.132136 systemd[1]: Started sshd@15-10.200.8.31:22-10.200.12.6:46950.service.
Feb 12 19:48:56.749629 sshd[4127]: Accepted publickey for core from 10.200.12.6 port 46950 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:56.751167 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:56.756012 systemd[1]: Started session-18.scope.
Feb 12 19:48:56.756568 systemd-logind[1384]: New session 18 of user core.
Feb 12 19:48:58.254028 sshd[4127]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:58.257154 systemd[1]: sshd@15-10.200.8.31:22-10.200.12.6:46950.service: Deactivated successfully.
Feb 12 19:48:58.258350 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 19:48:58.259923 systemd-logind[1384]: Session 18 logged out. Waiting for processes to exit.
Feb 12 19:48:58.261096 systemd-logind[1384]: Removed session 18.
Feb 12 19:48:58.355590 systemd[1]: Started sshd@16-10.200.8.31:22-10.200.12.6:53820.service.
Feb 12 19:48:58.984740 sshd[4193]: Accepted publickey for core from 10.200.12.6 port 53820 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:58.985655 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:58.990752 systemd[1]: Started session-19.scope.
Feb 12 19:48:58.991665 systemd-logind[1384]: New session 19 of user core.
Feb 12 19:48:59.595116 sshd[4193]: pam_unix(sshd:session): session closed for user core
Feb 12 19:48:59.598062 systemd[1]: sshd@16-10.200.8.31:22-10.200.12.6:53820.service: Deactivated successfully.
Feb 12 19:48:59.599438 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 19:48:59.599464 systemd-logind[1384]: Session 19 logged out. Waiting for processes to exit.
Feb 12 19:48:59.601181 systemd-logind[1384]: Removed session 19.
Feb 12 19:48:59.699056 systemd[1]: Started sshd@17-10.200.8.31:22-10.200.12.6:53830.service.
Feb 12 19:49:00.318243 sshd[4204]: Accepted publickey for core from 10.200.12.6 port 53830 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:00.319780 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:00.325083 systemd[1]: Started session-20.scope.
Feb 12 19:49:00.325547 systemd-logind[1384]: New session 20 of user core.
Feb 12 19:49:00.814124 sshd[4204]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:00.817621 systemd[1]: sshd@17-10.200.8.31:22-10.200.12.6:53830.service: Deactivated successfully.
Feb 12 19:49:00.819285 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 19:49:00.819954 systemd-logind[1384]: Session 20 logged out. Waiting for processes to exit.
Feb 12 19:49:00.820975 systemd-logind[1384]: Removed session 20.
Feb 12 19:49:05.920137 systemd[1]: Started sshd@18-10.200.8.31:22-10.200.12.6:53836.service.
Feb 12 19:49:06.566766 sshd[4247]: Accepted publickey for core from 10.200.12.6 port 53836 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:06.568216 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:06.572647 systemd-logind[1384]: New session 21 of user core.
Feb 12 19:49:06.573300 systemd[1]: Started session-21.scope.
Feb 12 19:49:07.069520 sshd[4247]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:07.072389 systemd[1]: sshd@18-10.200.8.31:22-10.200.12.6:53836.service: Deactivated successfully.
Feb 12 19:49:07.074310 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 19:49:07.074966 systemd-logind[1384]: Session 21 logged out. Waiting for processes to exit.
Feb 12 19:49:07.075947 systemd-logind[1384]: Removed session 21.
Feb 12 19:49:12.172686 systemd[1]: Started sshd@19-10.200.8.31:22-10.200.12.6:45184.service.
Feb 12 19:49:12.788128 sshd[4260]: Accepted publickey for core from 10.200.12.6 port 45184 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:12.789874 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:12.794935 systemd-logind[1384]: New session 22 of user core.
Feb 12 19:49:12.795633 systemd[1]: Started session-22.scope.
Feb 12 19:49:13.282467 sshd[4260]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:13.285263 systemd[1]: sshd@19-10.200.8.31:22-10.200.12.6:45184.service: Deactivated successfully.
Feb 12 19:49:13.286843 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 19:49:13.287527 systemd-logind[1384]: Session 22 logged out. Waiting for processes to exit.
Feb 12 19:49:13.288666 systemd-logind[1384]: Removed session 22.
Feb 12 19:49:15.684924 update_engine[1385]: I0212 19:49:15.684842 1385 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 12 19:49:15.684924 update_engine[1385]: I0212 19:49:15.684906 1385 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 12 19:49:15.685682 update_engine[1385]: I0212 19:49:15.685074 1385 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 12 19:49:15.685796 update_engine[1385]: I0212 19:49:15.685700 1385 omaha_request_params.cc:62] Current group set to lts
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686008 1385 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686023 1385 update_attempter.cc:643] Scheduling an action processor start.
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686044 1385 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686087 1385 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686168 1385 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686175 1385 omaha_request_action.cc:271] Request:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]:
Feb 12 19:49:15.686493 update_engine[1385]: I0212 19:49:15.686183 1385 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 12 19:49:15.687386 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 12 19:49:15.688224 update_engine[1385]: I0212 19:49:15.687903 1385 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 12 19:49:15.688224 update_engine[1385]: I0212 19:49:15.688182 1385 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 12 19:49:15.717180 update_engine[1385]: E0212 19:49:15.717105 1385 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 12 19:49:15.717360 update_engine[1385]: I0212 19:49:15.717283 1385 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 12 19:49:18.386807 systemd[1]: Started sshd@20-10.200.8.31:22-10.200.12.6:44470.service.
Feb 12 19:49:19.006513 sshd[4272]: Accepted publickey for core from 10.200.12.6 port 44470 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:19.007969 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:19.011804 systemd-logind[1384]: New session 23 of user core.
Feb 12 19:49:19.013040 systemd[1]: Started session-23.scope.
Feb 12 19:49:19.494810 sshd[4272]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:19.497950 systemd[1]: sshd@20-10.200.8.31:22-10.200.12.6:44470.service: Deactivated successfully.
Feb 12 19:49:19.498817 systemd-logind[1384]: Session 23 logged out. Waiting for processes to exit.
Feb 12 19:49:19.498950 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 19:49:19.500023 systemd-logind[1384]: Removed session 23.
Feb 12 19:49:19.600320 systemd[1]: Started sshd@21-10.200.8.31:22-10.200.12.6:44486.service.
Feb 12 19:49:20.232464 sshd[4287]: Accepted publickey for core from 10.200.12.6 port 44486 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:20.234954 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:20.240348 systemd[1]: Started session-24.scope.
Feb 12 19:49:20.241622 systemd-logind[1384]: New session 24 of user core.
Feb 12 19:49:21.925769 env[1398]: time="2024-02-12T19:49:21.925679293Z" level=info msg="StopContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" with timeout 30 (s)"
Feb 12 19:49:21.929786 env[1398]: time="2024-02-12T19:49:21.929743723Z" level=info msg="Stop container \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" with signal terminated"
Feb 12 19:49:21.943826 systemd[1]: run-containerd-runc-k8s.io-2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472-runc.P0X667.mount: Deactivated successfully.
Feb 12 19:49:21.973101 env[1398]: time="2024-02-12T19:49:21.973018249Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:49:21.981679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185-rootfs.mount: Deactivated successfully.
Feb 12 19:49:21.985348 env[1398]: time="2024-02-12T19:49:21.985303642Z" level=info msg="StopContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" with timeout 1 (s)"
Feb 12 19:49:21.985611 env[1398]: time="2024-02-12T19:49:21.985580544Z" level=info msg="Stop container \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" with signal terminated"
Feb 12 19:49:21.993139 systemd-networkd[1576]: lxc_health: Link DOWN
Feb 12 19:49:21.993150 systemd-networkd[1576]: lxc_health: Lost carrier
Feb 12 19:49:22.036521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472-rootfs.mount: Deactivated successfully.
Feb 12 19:49:22.038037 env[1398]: time="2024-02-12T19:49:22.037988637Z" level=info msg="shim disconnected" id=2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472
Feb 12 19:49:22.038319 env[1398]: time="2024-02-12T19:49:22.038295139Z" level=warning msg="cleaning up after shim disconnected" id=2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472 namespace=k8s.io
Feb 12 19:49:22.038451 env[1398]: time="2024-02-12T19:49:22.038434141Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:22.038529 env[1398]: time="2024-02-12T19:49:22.038272539Z" level=info msg="shim disconnected" id=06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185
Feb 12 19:49:22.038581 env[1398]: time="2024-02-12T19:49:22.038554141Z" level=warning msg="cleaning up after shim disconnected" id=06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185 namespace=k8s.io
Feb 12 19:49:22.038581 env[1398]: time="2024-02-12T19:49:22.038569942Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:22.050095 env[1398]: time="2024-02-12T19:49:22.050042328Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4356 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:22.051382 env[1398]: time="2024-02-12T19:49:22.051343737Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4355 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:22.056560 env[1398]: time="2024-02-12T19:49:22.056522276Z" level=info msg="StopContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" returns successfully"
Feb 12 19:49:22.057430 env[1398]: time="2024-02-12T19:49:22.057395183Z" level=info msg="StopPodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\""
Feb 12 19:49:22.057629 env[1398]: time="2024-02-12T19:49:22.057604284Z" level=info msg="Container to stop \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:22.057729 env[1398]: time="2024-02-12T19:49:22.057693985Z" level=info msg="Container to stop \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:22.057812 env[1398]: time="2024-02-12T19:49:22.057790286Z" level=info msg="Container to stop \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:22.057885 env[1398]: time="2024-02-12T19:49:22.057810786Z" level=info msg="Container to stop \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:22.057885 env[1398]: time="2024-02-12T19:49:22.057829386Z" level=info msg="Container to stop \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:22.058004 env[1398]: time="2024-02-12T19:49:22.057519684Z" level=info msg="StopContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" returns successfully"
Feb 12 19:49:22.058446 env[1398]: time="2024-02-12T19:49:22.058421290Z" level=info msg="StopPodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\""
Feb 12 19:49:22.058605 env[1398]: time="2024-02-12T19:49:22.058583292Z" level=info msg="Container to stop \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:22.060603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f-shm.mount: Deactivated successfully.
Feb 12 19:49:22.060815 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857-shm.mount: Deactivated successfully.
Feb 12 19:49:22.114915 env[1398]: time="2024-02-12T19:49:22.114858514Z" level=info msg="shim disconnected" id=217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857
Feb 12 19:49:22.115144 env[1398]: time="2024-02-12T19:49:22.114918214Z" level=warning msg="cleaning up after shim disconnected" id=217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857 namespace=k8s.io
Feb 12 19:49:22.115144 env[1398]: time="2024-02-12T19:49:22.114931115Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:22.115527 env[1398]: time="2024-02-12T19:49:22.115487419Z" level=info msg="shim disconnected" id=92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f
Feb 12 19:49:22.115623 env[1398]: time="2024-02-12T19:49:22.115525919Z" level=warning msg="cleaning up after shim disconnected" id=92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f namespace=k8s.io
Feb 12 19:49:22.115623 env[1398]: time="2024-02-12T19:49:22.115536819Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:22.128217 env[1398]: time="2024-02-12T19:49:22.128165214Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4421 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:22.128662 env[1398]: time="2024-02-12T19:49:22.128625517Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4422 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:22.129016 env[1398]: time="2024-02-12T19:49:22.128981320Z" level=info msg="TearDown network for sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" successfully"
Feb 12 19:49:22.129090 env[1398]: time="2024-02-12T19:49:22.129018120Z" level=info msg="StopPodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" returns successfully"
Feb 12 19:49:22.129155 env[1398]: time="2024-02-12T19:49:22.128995920Z" level=info msg="TearDown network for sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" successfully"
Feb 12 19:49:22.129203 env[1398]: time="2024-02-12T19:49:22.129158521Z" level=info msg="StopPodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" returns successfully"
Feb 12 19:49:22.248992 kubelet[2567]: I0212 19:49:22.248877 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-cgroup\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.248992 kubelet[2567]: I0212 19:49:22.248950 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-hostproc\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.249674 kubelet[2567]: I0212 19:49:22.249010 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-run\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.249674 kubelet[2567]: I0212 19:49:22.249065 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/822e1966-c2d6-46d2-a483-9e774a6be580-clustermesh-secrets\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.249674 kubelet[2567]: I0212 19:49:22.249098 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cni-path\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.249674 kubelet[2567]: I0212 19:49:22.249133 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-lib-modules\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.249674 kubelet[2567]: I0212 19:49:22.249171 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-kernel\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.249674 kubelet[2567]: I0212 19:49:22.249201 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-bpf-maps\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250067 kubelet[2567]: I0212 19:49:22.249236 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5rwc\" (UniqueName: \"kubernetes.io/projected/5bdee7ad-e4a2-4685-8608-60cec7b7b943-kube-api-access-f5rwc\") pod \"5bdee7ad-e4a2-4685-8608-60cec7b7b943\" (UID: \"5bdee7ad-e4a2-4685-8608-60cec7b7b943\") "
Feb 12 19:49:22.250067 kubelet[2567]: I0212 19:49:22.249273 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bdee7ad-e4a2-4685-8608-60cec7b7b943-cilium-config-path\") pod \"5bdee7ad-e4a2-4685-8608-60cec7b7b943\" (UID: \"5bdee7ad-e4a2-4685-8608-60cec7b7b943\") "
Feb 12 19:49:22.250067 kubelet[2567]: I0212 19:49:22.249307 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-net\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250067 kubelet[2567]: I0212 19:49:22.249362 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-hubble-tls\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250067 kubelet[2567]: I0212 19:49:22.249398 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-xtables-lock\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250067 kubelet[2567]: I0212 19:49:22.249432 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-config-path\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250393 kubelet[2567]: I0212 19:49:22.249461 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-etc-cni-netd\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250393 kubelet[2567]: I0212 19:49:22.249496 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llqjx\" (UniqueName: \"kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-kube-api-access-llqjx\") pod \"822e1966-c2d6-46d2-a483-9e774a6be580\" (UID: \"822e1966-c2d6-46d2-a483-9e774a6be580\") "
Feb 12 19:49:22.250507 kubelet[2567]: I0212 19:49:22.250411 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:22.252728 kubelet[2567]: I0212 19:49:22.250592 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:22.252728 kubelet[2567]: I0212 19:49:22.250642 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-hostproc" (OuterVolumeSpecName: "hostproc") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.252728 kubelet[2567]: W0212 19:49:22.250804 2567 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5bdee7ad-e4a2-4685-8608-60cec7b7b943/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:49:22.252964 kubelet[2567]: I0212 19:49:22.252826 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cni-path" (OuterVolumeSpecName: "cni-path") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.252964 kubelet[2567]: I0212 19:49:22.252878 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.252964 kubelet[2567]: I0212 19:49:22.252902 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.252964 kubelet[2567]: I0212 19:49:22.252926 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.253878 kubelet[2567]: I0212 19:49:22.253849 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bdee7ad-e4a2-4685-8608-60cec7b7b943-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5bdee7ad-e4a2-4685-8608-60cec7b7b943" (UID: "5bdee7ad-e4a2-4685-8608-60cec7b7b943"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:49:22.254266 kubelet[2567]: I0212 19:49:22.253995 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.254266 kubelet[2567]: I0212 19:49:22.254015 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.254410 kubelet[2567]: W0212 19:49:22.254384 2567 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/822e1966-c2d6-46d2-a483-9e774a6be580/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:49:22.255979 kubelet[2567]: I0212 19:49:22.255949 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:22.256927 kubelet[2567]: I0212 19:49:22.256893 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:49:22.258836 kubelet[2567]: I0212 19:49:22.258798 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-kube-api-access-llqjx" (OuterVolumeSpecName: "kube-api-access-llqjx") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "kube-api-access-llqjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:22.258934 kubelet[2567]: I0212 19:49:22.258911 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/822e1966-c2d6-46d2-a483-9e774a6be580-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:22.260762 kubelet[2567]: I0212 19:49:22.260737 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bdee7ad-e4a2-4685-8608-60cec7b7b943-kube-api-access-f5rwc" (OuterVolumeSpecName: "kube-api-access-f5rwc") pod "5bdee7ad-e4a2-4685-8608-60cec7b7b943" (UID: "5bdee7ad-e4a2-4685-8608-60cec7b7b943"). InnerVolumeSpecName "kube-api-access-f5rwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:22.261687 kubelet[2567]: I0212 19:49:22.261650 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "822e1966-c2d6-46d2-a483-9e774a6be580" (UID: "822e1966-c2d6-46d2-a483-9e774a6be580"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:22.286893 kubelet[2567]: I0212 19:49:22.286864 2567 scope.go:115] "RemoveContainer" containerID="06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185" Feb 12 19:49:22.290445 env[1398]: time="2024-02-12T19:49:22.290059329Z" level=info msg="RemoveContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\"" Feb 12 19:49:22.303622 env[1398]: time="2024-02-12T19:49:22.303570330Z" level=info msg="RemoveContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" returns successfully" Feb 12 19:49:22.303991 kubelet[2567]: I0212 19:49:22.303969 2567 scope.go:115] "RemoveContainer" containerID="06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185" Feb 12 19:49:22.304340 env[1398]: time="2024-02-12T19:49:22.304252235Z" level=error msg="ContainerStatus for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\": not found" Feb 12 19:49:22.304527 kubelet[2567]: E0212 19:49:22.304510 2567 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\": not found" containerID="06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185" Feb 12 19:49:22.304613 kubelet[2567]: I0212 
19:49:22.304552 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185} err="failed to get container status \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\": rpc error: code = NotFound desc = an error occurred when try to find container \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\": not found" Feb 12 19:49:22.304613 kubelet[2567]: I0212 19:49:22.304567 2567 scope.go:115] "RemoveContainer" containerID="2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472" Feb 12 19:49:22.305972 env[1398]: time="2024-02-12T19:49:22.305667046Z" level=info msg="RemoveContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\"" Feb 12 19:49:22.317402 env[1398]: time="2024-02-12T19:49:22.317318133Z" level=info msg="RemoveContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" returns successfully" Feb 12 19:49:22.317848 kubelet[2567]: I0212 19:49:22.317827 2567 scope.go:115] "RemoveContainer" containerID="d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9" Feb 12 19:49:22.322949 env[1398]: time="2024-02-12T19:49:22.322593073Z" level=info msg="RemoveContainer for \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\"" Feb 12 19:49:22.335236 env[1398]: time="2024-02-12T19:49:22.334749864Z" level=info msg="RemoveContainer for \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\" returns successfully" Feb 12 19:49:22.337016 kubelet[2567]: I0212 19:49:22.335918 2567 scope.go:115] "RemoveContainer" containerID="373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09" Feb 12 19:49:22.337681 env[1398]: time="2024-02-12T19:49:22.337204582Z" level=info msg="RemoveContainer for \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\"" Feb 12 19:49:22.349927 kubelet[2567]: I0212 19:49:22.349891 2567 reconciler_common.go:295] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-lib-modules\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.349927 kubelet[2567]: I0212 19:49:22.349927 2567 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.349927 kubelet[2567]: I0212 19:49:22.349941 2567 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-bpf-maps\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.349956 2567 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-f5rwc\" (UniqueName: \"kubernetes.io/projected/5bdee7ad-e4a2-4685-8608-60cec7b7b943-kube-api-access-f5rwc\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.349970 2567 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-host-proc-sys-net\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.349982 2567 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-hubble-tls\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.349993 2567 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-xtables-lock\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.350008 2567 
reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bdee7ad-e4a2-4685-8608-60cec7b7b943-cilium-config-path\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.350021 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-config-path\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.350035 2567 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-etc-cni-netd\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350185 kubelet[2567]: I0212 19:49:22.350050 2567 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-llqjx\" (UniqueName: \"kubernetes.io/projected/822e1966-c2d6-46d2-a483-9e774a6be580-kube-api-access-llqjx\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350399 kubelet[2567]: I0212 19:49:22.350064 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-cgroup\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350399 kubelet[2567]: I0212 19:49:22.350076 2567 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-hostproc\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350399 kubelet[2567]: I0212 19:49:22.350088 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cilium-run\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350399 kubelet[2567]: I0212 
19:49:22.350101 2567 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/822e1966-c2d6-46d2-a483-9e774a6be580-clustermesh-secrets\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.350399 kubelet[2567]: I0212 19:49:22.350114 2567 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/822e1966-c2d6-46d2-a483-9e774a6be580-cni-path\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:22.352416 env[1398]: time="2024-02-12T19:49:22.352368196Z" level=info msg="RemoveContainer for \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\" returns successfully" Feb 12 19:49:22.352647 kubelet[2567]: I0212 19:49:22.352617 2567 scope.go:115] "RemoveContainer" containerID="d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c" Feb 12 19:49:22.353784 env[1398]: time="2024-02-12T19:49:22.353752706Z" level=info msg="RemoveContainer for \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\"" Feb 12 19:49:22.362016 env[1398]: time="2024-02-12T19:49:22.361981568Z" level=info msg="RemoveContainer for \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\" returns successfully" Feb 12 19:49:22.362218 kubelet[2567]: I0212 19:49:22.362175 2567 scope.go:115] "RemoveContainer" containerID="4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35" Feb 12 19:49:22.363209 env[1398]: time="2024-02-12T19:49:22.363181377Z" level=info msg="RemoveContainer for \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\"" Feb 12 19:49:22.372288 env[1398]: time="2024-02-12T19:49:22.372256745Z" level=info msg="RemoveContainer for \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\" returns successfully" Feb 12 19:49:22.372459 kubelet[2567]: I0212 19:49:22.372423 2567 scope.go:115] "RemoveContainer" 
containerID="2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472" Feb 12 19:49:22.372687 env[1398]: time="2024-02-12T19:49:22.372635548Z" level=error msg="ContainerStatus for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\": not found" Feb 12 19:49:22.372933 kubelet[2567]: E0212 19:49:22.372884 2567 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\": not found" containerID="2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472" Feb 12 19:49:22.373029 kubelet[2567]: I0212 19:49:22.372938 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472} err="failed to get container status \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\": not found" Feb 12 19:49:22.373029 kubelet[2567]: I0212 19:49:22.372955 2567 scope.go:115] "RemoveContainer" containerID="d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9" Feb 12 19:49:22.373187 env[1398]: time="2024-02-12T19:49:22.373133752Z" level=error msg="ContainerStatus for \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\": not found" Feb 12 19:49:22.373305 kubelet[2567]: E0212 19:49:22.373286 2567 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\": not found" containerID="d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9" Feb 12 19:49:22.373377 kubelet[2567]: I0212 19:49:22.373320 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9} err="failed to get container status \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3b3c5b93a85f04c621157dbc02bd5658d4d8afeee90a94a40d5e08fc1df22d9\": not found" Feb 12 19:49:22.373377 kubelet[2567]: I0212 19:49:22.373336 2567 scope.go:115] "RemoveContainer" containerID="373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09" Feb 12 19:49:22.373537 env[1398]: time="2024-02-12T19:49:22.373492455Z" level=error msg="ContainerStatus for \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\": not found" Feb 12 19:49:22.373671 kubelet[2567]: E0212 19:49:22.373632 2567 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\": not found" containerID="373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09" Feb 12 19:49:22.373671 kubelet[2567]: I0212 19:49:22.373665 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09} err="failed to get container status \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"373c05988c4954030b35579477dc867ca95b3c7ab7ad87ff647ce79370537b09\": not found" Feb 12 19:49:22.373848 kubelet[2567]: I0212 19:49:22.373679 2567 scope.go:115] "RemoveContainer" containerID="d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c" Feb 12 19:49:22.373931 env[1398]: time="2024-02-12T19:49:22.373875957Z" level=error msg="ContainerStatus for \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\": not found" Feb 12 19:49:22.374096 kubelet[2567]: E0212 19:49:22.374079 2567 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\": not found" containerID="d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c" Feb 12 19:49:22.374172 kubelet[2567]: I0212 19:49:22.374109 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c} err="failed to get container status \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d54a3fa08b98c487a17739336e1043e4528b579af6c01fda277cdd83d9cb2e8c\": not found" Feb 12 19:49:22.374172 kubelet[2567]: I0212 19:49:22.374122 2567 scope.go:115] "RemoveContainer" containerID="4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35" Feb 12 19:49:22.374327 env[1398]: time="2024-02-12T19:49:22.374279360Z" level=error msg="ContainerStatus for \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\": not found" Feb 12 19:49:22.374441 kubelet[2567]: E0212 19:49:22.374423 2567 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\": not found" containerID="4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35" Feb 12 19:49:22.374515 kubelet[2567]: I0212 19:49:22.374457 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35} err="failed to get container status \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b51cce69cb4464f878f2644fb93b2260be13295b3e163e636908a447fce0e35\": not found" Feb 12 19:49:22.707115 env[1398]: time="2024-02-12T19:49:22.707057357Z" level=info msg="StopContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" with timeout 1 (s)" Feb 12 19:49:22.707536 env[1398]: time="2024-02-12T19:49:22.707417260Z" level=error msg="StopContainer for \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\": not found" Feb 12 19:49:22.708016 env[1398]: time="2024-02-12T19:49:22.707984864Z" level=info msg="StopContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" with timeout 1 (s)" Feb 12 19:49:22.708242 env[1398]: time="2024-02-12T19:49:22.708163966Z" level=error msg="StopContainer for \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\": not found" Feb 12 19:49:22.709348 kubelet[2567]: E0212 19:49:22.708894 2567 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472\": not found" containerID="2b9a62b570a7244985b7d7b2488db55acad348f1f024a7d81f9ee30f7b89c472" Feb 12 19:49:22.709348 kubelet[2567]: E0212 19:49:22.709079 2567 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185\": not found" containerID="06fad33f432b647ace32f0f99fed1aee06ab27076fd37a10d0eced72b0928185" Feb 12 19:49:22.711491 env[1398]: time="2024-02-12T19:49:22.709648577Z" level=info msg="StopPodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\"" Feb 12 19:49:22.711491 env[1398]: time="2024-02-12T19:49:22.709786378Z" level=info msg="TearDown network for sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" successfully" Feb 12 19:49:22.711491 env[1398]: time="2024-02-12T19:49:22.709839778Z" level=info msg="StopPodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" returns successfully" Feb 12 19:49:22.712810 kubelet[2567]: I0212 19:49:22.711320 2567 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5bdee7ad-e4a2-4685-8608-60cec7b7b943 path="/var/lib/kubelet/pods/5bdee7ad-e4a2-4685-8608-60cec7b7b943/volumes" Feb 12 19:49:22.712810 kubelet[2567]: I0212 19:49:22.711996 2567 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=822e1966-c2d6-46d2-a483-9e774a6be580 path="/var/lib/kubelet/pods/822e1966-c2d6-46d2-a483-9e774a6be580/volumes" Feb 12 19:49:22.713029 env[1398]: time="2024-02-12T19:49:22.712974502Z" level=info 
msg="StopPodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\"" Feb 12 19:49:22.714595 env[1398]: time="2024-02-12T19:49:22.713667607Z" level=info msg="TearDown network for sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" successfully" Feb 12 19:49:22.714595 env[1398]: time="2024-02-12T19:49:22.713752208Z" level=info msg="StopPodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" returns successfully" Feb 12 19:49:22.942587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f-rootfs.mount: Deactivated successfully. Feb 12 19:49:22.942902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857-rootfs.mount: Deactivated successfully. Feb 12 19:49:22.943032 systemd[1]: var-lib-kubelet-pods-822e1966\x2dc2d6\x2d46d2\x2da483\x2d9e774a6be580-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:22.943154 systemd[1]: var-lib-kubelet-pods-822e1966\x2dc2d6\x2d46d2\x2da483\x2d9e774a6be580-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:49:22.943285 systemd[1]: var-lib-kubelet-pods-5bdee7ad\x2de4a2\x2d4685\x2d8608\x2d60cec7b7b943-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5rwc.mount: Deactivated successfully. Feb 12 19:49:22.943401 systemd[1]: var-lib-kubelet-pods-822e1966\x2dc2d6\x2d46d2\x2da483\x2d9e774a6be580-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dllqjx.mount: Deactivated successfully. Feb 12 19:49:23.975114 sshd[4287]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:23.978554 systemd[1]: sshd@21-10.200.8.31:22-10.200.12.6:44486.service: Deactivated successfully. Feb 12 19:49:23.979895 systemd[1]: session-24.scope: Deactivated successfully. 
Feb 12 19:49:23.981008 systemd-logind[1384]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:49:23.982436 systemd-logind[1384]: Removed session 24. Feb 12 19:49:24.081876 systemd[1]: Started sshd@22-10.200.8.31:22-10.200.12.6:44490.service. Feb 12 19:49:24.700684 sshd[4459]: Accepted publickey for core from 10.200.12.6 port 44490 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:24.702143 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:24.708148 systemd[1]: Started session-25.scope. Feb 12 19:49:24.708736 systemd-logind[1384]: New session 25 of user core. Feb 12 19:49:24.861375 kubelet[2567]: E0212 19:49:24.861350 2567 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:49:25.508382 kubelet[2567]: I0212 19:49:25.508332 2567 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:49:25.508737 kubelet[2567]: E0212 19:49:25.508718 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="822e1966-c2d6-46d2-a483-9e774a6be580" containerName="apply-sysctl-overwrites" Feb 12 19:49:25.508867 kubelet[2567]: E0212 19:49:25.508857 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="822e1966-c2d6-46d2-a483-9e774a6be580" containerName="mount-bpf-fs" Feb 12 19:49:25.508957 kubelet[2567]: E0212 19:49:25.508948 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bdee7ad-e4a2-4685-8608-60cec7b7b943" containerName="cilium-operator" Feb 12 19:49:25.509045 kubelet[2567]: E0212 19:49:25.509037 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="822e1966-c2d6-46d2-a483-9e774a6be580" containerName="clean-cilium-state" Feb 12 19:49:25.509132 kubelet[2567]: E0212 19:49:25.509125 2567 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="822e1966-c2d6-46d2-a483-9e774a6be580" containerName="mount-cgroup" Feb 12 19:49:25.509207 kubelet[2567]: E0212 19:49:25.509200 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="822e1966-c2d6-46d2-a483-9e774a6be580" containerName="cilium-agent" Feb 12 19:49:25.509311 kubelet[2567]: I0212 19:49:25.509302 2567 memory_manager.go:346] "RemoveStaleState removing state" podUID="822e1966-c2d6-46d2-a483-9e774a6be580" containerName="cilium-agent" Feb 12 19:49:25.509387 kubelet[2567]: I0212 19:49:25.509380 2567 memory_manager.go:346] "RemoveStaleState removing state" podUID="5bdee7ad-e4a2-4685-8608-60cec7b7b943" containerName="cilium-operator" Feb 12 19:49:25.569395 kubelet[2567]: I0212 19:49:25.569363 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-bpf-maps\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.569608 kubelet[2567]: I0212 19:49:25.569598 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cni-path\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.569762 kubelet[2567]: I0212 19:49:25.569754 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/759c567f-4944-4768-a844-ea61e3c429e0-cilium-config-path\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.569844 kubelet[2567]: I0212 19:49:25.569837 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-etc-cni-netd\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.569913 kubelet[2567]: I0212 19:49:25.569907 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-lib-modules\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.569979 kubelet[2567]: I0212 19:49:25.569973 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-cgroup\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570042 kubelet[2567]: I0212 19:49:25.570032 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-net\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570107 kubelet[2567]: I0212 19:49:25.570101 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl2kc\" (UniqueName: \"kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-kube-api-access-fl2kc\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570172 kubelet[2567]: I0212 19:49:25.570167 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-hostproc\") pod \"cilium-kpphk\" (UID: 
\"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570227 kubelet[2567]: I0212 19:49:25.570221 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-hubble-tls\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570291 kubelet[2567]: I0212 19:49:25.570286 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-kernel\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570362 kubelet[2567]: I0212 19:49:25.570356 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-xtables-lock\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570461 kubelet[2567]: I0212 19:49:25.570453 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-cilium-ipsec-secrets\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570533 kubelet[2567]: I0212 19:49:25.570527 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-run\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.570604 kubelet[2567]: I0212 
19:49:25.570596 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-clustermesh-secrets\") pod \"cilium-kpphk\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " pod="kube-system/cilium-kpphk" Feb 12 19:49:25.600816 sshd[4459]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:25.603894 systemd[1]: sshd@22-10.200.8.31:22-10.200.12.6:44490.service: Deactivated successfully. Feb 12 19:49:25.606110 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:49:25.606753 systemd-logind[1384]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:49:25.607991 systemd-logind[1384]: Removed session 25. Feb 12 19:49:25.684566 update_engine[1385]: I0212 19:49:25.684512 1385 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:49:25.685085 update_engine[1385]: I0212 19:49:25.684746 1385 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:49:25.685085 update_engine[1385]: I0212 19:49:25.684943 1385 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:49:25.704748 systemd[1]: Started sshd@23-10.200.8.31:22-10.200.12.6:44502.service. Feb 12 19:49:25.721207 update_engine[1385]: E0212 19:49:25.721053 1385 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:49:25.721207 update_engine[1385]: I0212 19:49:25.721169 1385 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 12 19:49:25.818097 env[1398]: time="2024-02-12T19:49:25.817959988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpphk,Uid:759c567f-4944-4768-a844-ea61e3c429e0,Namespace:kube-system,Attempt:0,}" Feb 12 19:49:25.865023 env[1398]: time="2024-02-12T19:49:25.864953038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:49:25.865023 env[1398]: time="2024-02-12T19:49:25.864989538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:49:25.865023 env[1398]: time="2024-02-12T19:49:25.865003438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:49:25.865497 env[1398]: time="2024-02-12T19:49:25.865428441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7 pid=4485 runtime=io.containerd.runc.v2 Feb 12 19:49:25.906737 env[1398]: time="2024-02-12T19:49:25.906665748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpphk,Uid:759c567f-4944-4768-a844-ea61e3c429e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\"" Feb 12 19:49:25.910121 env[1398]: time="2024-02-12T19:49:25.909677571Z" level=info msg="CreateContainer within sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:49:25.971101 env[1398]: time="2024-02-12T19:49:25.971037227Z" level=info msg="CreateContainer within sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf\"" Feb 12 19:49:25.974270 env[1398]: time="2024-02-12T19:49:25.974224251Z" level=info msg="StartContainer for \"87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf\"" Feb 12 19:49:26.026626 env[1398]: time="2024-02-12T19:49:26.026567040Z" level=info msg="StartContainer for \"87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf\" 
returns successfully" Feb 12 19:49:26.111602 env[1398]: time="2024-02-12T19:49:26.111451870Z" level=info msg="shim disconnected" id=87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf Feb 12 19:49:26.111602 env[1398]: time="2024-02-12T19:49:26.111514771Z" level=warning msg="cleaning up after shim disconnected" id=87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf namespace=k8s.io Feb 12 19:49:26.111602 env[1398]: time="2024-02-12T19:49:26.111528171Z" level=info msg="cleaning up dead shim" Feb 12 19:49:26.120744 env[1398]: time="2024-02-12T19:49:26.120107434Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4571 runtime=io.containerd.runc.v2\n" Feb 12 19:49:26.307006 env[1398]: time="2024-02-12T19:49:26.306960521Z" level=info msg="CreateContainer within sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:49:26.335750 sshd[4475]: Accepted publickey for core from 10.200.12.6 port 44502 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:26.337191 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:26.342392 systemd[1]: Started session-26.scope. Feb 12 19:49:26.343363 systemd-logind[1384]: New session 26 of user core. 
Feb 12 19:49:26.381206 env[1398]: time="2024-02-12T19:49:26.380664468Z" level=info msg="CreateContainer within sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4\"" Feb 12 19:49:26.381836 env[1398]: time="2024-02-12T19:49:26.381805977Z" level=info msg="StartContainer for \"b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4\"" Feb 12 19:49:26.438905 env[1398]: time="2024-02-12T19:49:26.438855000Z" level=info msg="StartContainer for \"b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4\" returns successfully" Feb 12 19:49:26.468626 env[1398]: time="2024-02-12T19:49:26.468572721Z" level=info msg="shim disconnected" id=b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4 Feb 12 19:49:26.468626 env[1398]: time="2024-02-12T19:49:26.468623921Z" level=warning msg="cleaning up after shim disconnected" id=b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4 namespace=k8s.io Feb 12 19:49:26.468626 env[1398]: time="2024-02-12T19:49:26.468636221Z" level=info msg="cleaning up dead shim" Feb 12 19:49:26.476803 env[1398]: time="2024-02-12T19:49:26.476759182Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4635 runtime=io.containerd.runc.v2\n" Feb 12 19:49:26.904826 sshd[4475]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:26.908262 systemd[1]: sshd@23-10.200.8.31:22-10.200.12.6:44502.service: Deactivated successfully. Feb 12 19:49:26.911049 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 19:49:26.912366 systemd-logind[1384]: Session 26 logged out. Waiting for processes to exit. Feb 12 19:49:26.913469 systemd-logind[1384]: Removed session 26. Feb 12 19:49:27.007740 systemd[1]: Started sshd@24-10.200.8.31:22-10.200.12.6:44508.service. 
Feb 12 19:49:27.326142 env[1398]: time="2024-02-12T19:49:27.323092558Z" level=info msg="CreateContainer within sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:49:27.359734 env[1398]: time="2024-02-12T19:49:27.359655828Z" level=info msg="CreateContainer within sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6\"" Feb 12 19:49:27.360331 env[1398]: time="2024-02-12T19:49:27.360284333Z" level=info msg="StartContainer for \"ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6\"" Feb 12 19:49:27.445241 env[1398]: time="2024-02-12T19:49:27.445187961Z" level=info msg="StartContainer for \"ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6\" returns successfully" Feb 12 19:49:27.491450 env[1398]: time="2024-02-12T19:49:27.491389704Z" level=info msg="shim disconnected" id=ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6 Feb 12 19:49:27.491450 env[1398]: time="2024-02-12T19:49:27.491439204Z" level=warning msg="cleaning up after shim disconnected" id=ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6 namespace=k8s.io Feb 12 19:49:27.491450 env[1398]: time="2024-02-12T19:49:27.491452104Z" level=info msg="cleaning up dead shim" Feb 12 19:49:27.499007 env[1398]: time="2024-02-12T19:49:27.498931259Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4703 runtime=io.containerd.runc.v2\n" Feb 12 19:49:27.631046 sshd[4656]: Accepted publickey for core from 10.200.12.6 port 44508 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:27.632155 sshd[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:27.637229 systemd[1]: Started 
session-27.scope. Feb 12 19:49:27.637998 systemd-logind[1384]: New session 27 of user core. Feb 12 19:49:27.678723 systemd[1]: run-containerd-runc-k8s.io-ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6-runc.3CjR39.mount: Deactivated successfully. Feb 12 19:49:27.678909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6-rootfs.mount: Deactivated successfully. Feb 12 19:49:28.316729 env[1398]: time="2024-02-12T19:49:28.312623077Z" level=info msg="StopPodSandbox for \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\"" Feb 12 19:49:28.316729 env[1398]: time="2024-02-12T19:49:28.312721778Z" level=info msg="Container to stop \"87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:49:28.316729 env[1398]: time="2024-02-12T19:49:28.312745978Z" level=info msg="Container to stop \"b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:49:28.316729 env[1398]: time="2024-02-12T19:49:28.312763278Z" level=info msg="Container to stop \"ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:49:28.316151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7-shm.mount: Deactivated successfully. Feb 12 19:49:28.356218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7-rootfs.mount: Deactivated successfully. 
Feb 12 19:49:28.389100 env[1398]: time="2024-02-12T19:49:28.389033742Z" level=info msg="shim disconnected" id=212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7 Feb 12 19:49:28.389100 env[1398]: time="2024-02-12T19:49:28.389100742Z" level=warning msg="cleaning up after shim disconnected" id=212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7 namespace=k8s.io Feb 12 19:49:28.389718 env[1398]: time="2024-02-12T19:49:28.389112742Z" level=info msg="cleaning up dead shim" Feb 12 19:49:28.397629 env[1398]: time="2024-02-12T19:49:28.397578805Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4744 runtime=io.containerd.runc.v2\n" Feb 12 19:49:28.397975 env[1398]: time="2024-02-12T19:49:28.397940907Z" level=info msg="TearDown network for sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" successfully" Feb 12 19:49:28.397975 env[1398]: time="2024-02-12T19:49:28.397973208Z" level=info msg="StopPodSandbox for \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" returns successfully" Feb 12 19:49:28.591403 kubelet[2567]: I0212 19:49:28.591253 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/759c567f-4944-4768-a844-ea61e3c429e0-cilium-config-path\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.591403 kubelet[2567]: I0212 19:49:28.591317 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-cgroup\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.591403 kubelet[2567]: I0212 19:49:28.591349 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-kernel\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.593467 kubelet[2567]: I0212 19:49:28.592172 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.593467 kubelet[2567]: I0212 19:49:28.592212 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-cilium-ipsec-secrets\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.593467 kubelet[2567]: I0212 19:49:28.592294 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cni-path\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.593467 kubelet[2567]: I0212 19:49:28.592329 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-lib-modules\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.593467 kubelet[2567]: I0212 19:49:28.592377 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-xtables-lock\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" 
(UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.593467 kubelet[2567]: I0212 19:49:28.592420 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-clustermesh-secrets\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594359 kubelet[2567]: I0212 19:49:28.592471 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-hostproc\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594359 kubelet[2567]: I0212 19:49:28.592505 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-hubble-tls\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594359 kubelet[2567]: I0212 19:49:28.592554 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-run\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594359 kubelet[2567]: I0212 19:49:28.592587 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-bpf-maps\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594359 kubelet[2567]: I0212 19:49:28.592648 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-net\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594359 kubelet[2567]: I0212 19:49:28.592680 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-etc-cni-netd\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594684 kubelet[2567]: I0212 19:49:28.592744 2567 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl2kc\" (UniqueName: \"kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-kube-api-access-fl2kc\") pod \"759c567f-4944-4768-a844-ea61e3c429e0\" (UID: \"759c567f-4944-4768-a844-ea61e3c429e0\") " Feb 12 19:49:28.594684 kubelet[2567]: I0212 19:49:28.592818 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-cgroup\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.594684 kubelet[2567]: I0212 19:49:28.592983 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.594684 kubelet[2567]: I0212 19:49:28.593027 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.594684 kubelet[2567]: W0212 19:49:28.593186 2567 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/759c567f-4944-4768-a844-ea61e3c429e0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:49:28.596057 kubelet[2567]: I0212 19:49:28.596028 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759c567f-4944-4768-a844-ea61e3c429e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:49:28.596170 kubelet[2567]: I0212 19:49:28.596080 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.596170 kubelet[2567]: I0212 19:49:28.596102 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.596170 kubelet[2567]: I0212 19:49:28.596125 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.596353 kubelet[2567]: I0212 19:49:28.596331 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.596422 kubelet[2567]: I0212 19:49:28.596366 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.596422 kubelet[2567]: I0212 19:49:28.596389 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.596422 kubelet[2567]: I0212 19:49:28.596413 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:28.601505 systemd[1]: var-lib-kubelet-pods-759c567f\x2d4944\x2d4768\x2da844\x2dea61e3c429e0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:28.605210 systemd[1]: var-lib-kubelet-pods-759c567f\x2d4944\x2d4768\x2da844\x2dea61e3c429e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfl2kc.mount: Deactivated successfully. Feb 12 19:49:28.606522 kubelet[2567]: I0212 19:49:28.606493 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-kube-api-access-fl2kc" (OuterVolumeSpecName: "kube-api-access-fl2kc") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "kube-api-access-fl2kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:28.606758 kubelet[2567]: I0212 19:49:28.606735 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:28.608887 kubelet[2567]: I0212 19:49:28.608861 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:28.609178 kubelet[2567]: I0212 19:49:28.609153 2567 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "759c567f-4944-4768-a844-ea61e3c429e0" (UID: "759c567f-4944-4768-a844-ea61e3c429e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:28.679358 systemd[1]: var-lib-kubelet-pods-759c567f\x2d4944\x2d4768\x2da844\x2dea61e3c429e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:49:28.679583 systemd[1]: var-lib-kubelet-pods-759c567f\x2d4944\x2d4768\x2da844\x2dea61e3c429e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:28.693604 kubelet[2567]: I0212 19:49:28.693559 2567 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cni-path\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693604 kubelet[2567]: I0212 19:49:28.693600 2567 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-lib-modules\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693604 kubelet[2567]: I0212 19:49:28.693613 2567 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-xtables-lock\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693626 2567 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-clustermesh-secrets\") on node \"ci-3510.3.2-a-2665495451\" 
DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693640 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-cilium-run\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693651 2567 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-bpf-maps\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693667 2567 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-net\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693680 2567 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-hostproc\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693692 2567 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-hubble-tls\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693723 2567 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-etc-cni-netd\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.693898 kubelet[2567]: I0212 19:49:28.693747 2567 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fl2kc\" (UniqueName: \"kubernetes.io/projected/759c567f-4944-4768-a844-ea61e3c429e0-kube-api-access-fl2kc\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 
12 19:49:28.694107 kubelet[2567]: I0212 19:49:28.693762 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/759c567f-4944-4768-a844-ea61e3c429e0-cilium-config-path\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.694107 kubelet[2567]: I0212 19:49:28.693776 2567 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/759c567f-4944-4768-a844-ea61e3c429e0-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:28.694107 kubelet[2567]: I0212 19:49:28.693789 2567 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/759c567f-4944-4768-a844-ea61e3c429e0-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-2665495451\" DevicePath \"\"" Feb 12 19:49:29.315904 kubelet[2567]: I0212 19:49:29.315871 2567 scope.go:115] "RemoveContainer" containerID="ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6" Feb 12 19:49:29.317863 env[1398]: time="2024-02-12T19:49:29.317818694Z" level=info msg="RemoveContainer for \"ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6\"" Feb 12 19:49:29.332731 env[1398]: time="2024-02-12T19:49:29.330090684Z" level=info msg="RemoveContainer for \"ba284a65d8792116f5b515d0a78cd71c39ce6ff4e03ccccf5e35257c87f676b6\" returns successfully" Feb 12 19:49:29.332731 env[1398]: time="2024-02-12T19:49:29.331689196Z" level=info msg="RemoveContainer for \"b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4\"" Feb 12 19:49:29.332946 kubelet[2567]: I0212 19:49:29.330374 2567 scope.go:115] "RemoveContainer" containerID="b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4" Feb 12 19:49:29.342916 env[1398]: time="2024-02-12T19:49:29.342860778Z" level=info msg="RemoveContainer for \"b408b5894611efe2b425bea898b41dbae347d64469d8b28d63c19958bf8714d4\" returns successfully" Feb 12 
19:49:29.343363 kubelet[2567]: I0212 19:49:29.343340 2567 scope.go:115] "RemoveContainer" containerID="87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf" Feb 12 19:49:29.345194 env[1398]: time="2024-02-12T19:49:29.345157995Z" level=info msg="RemoveContainer for \"87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf\"" Feb 12 19:49:29.346730 kubelet[2567]: I0212 19:49:29.346694 2567 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:49:29.346847 kubelet[2567]: E0212 19:49:29.346785 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="759c567f-4944-4768-a844-ea61e3c429e0" containerName="mount-cgroup" Feb 12 19:49:29.346847 kubelet[2567]: E0212 19:49:29.346803 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="759c567f-4944-4768-a844-ea61e3c429e0" containerName="apply-sysctl-overwrites" Feb 12 19:49:29.346847 kubelet[2567]: E0212 19:49:29.346814 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="759c567f-4944-4768-a844-ea61e3c429e0" containerName="mount-bpf-fs" Feb 12 19:49:29.346980 kubelet[2567]: I0212 19:49:29.346859 2567 memory_manager.go:346] "RemoveStaleState removing state" podUID="759c567f-4944-4768-a844-ea61e3c429e0" containerName="mount-bpf-fs" Feb 12 19:49:29.359061 env[1398]: time="2024-02-12T19:49:29.359005197Z" level=info msg="RemoveContainer for \"87b063eae10d114d3df16630f0ef610f2dd185415d20bfa1289db366698884bf\" returns successfully" Feb 12 19:49:29.398962 kubelet[2567]: I0212 19:49:29.398927 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-lib-modules\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399159 kubelet[2567]: I0212 19:49:29.398984 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-598dl\" (UniqueName: \"kubernetes.io/projected/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-kube-api-access-598dl\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399159 kubelet[2567]: I0212 19:49:29.399015 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-bpf-maps\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399159 kubelet[2567]: I0212 19:49:29.399043 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-host-proc-sys-kernel\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399159 kubelet[2567]: I0212 19:49:29.399067 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-cni-path\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399159 kubelet[2567]: I0212 19:49:29.399090 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-etc-cni-netd\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399159 kubelet[2567]: I0212 19:49:29.399118 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-cilium-run\") pod \"cilium-7vb4z\" 
(UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399423 kubelet[2567]: I0212 19:49:29.399141 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-hostproc\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399423 kubelet[2567]: I0212 19:49:29.399166 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-xtables-lock\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399423 kubelet[2567]: I0212 19:49:29.399194 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-cilium-cgroup\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399423 kubelet[2567]: I0212 19:49:29.399223 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-clustermesh-secrets\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399423 kubelet[2567]: I0212 19:49:29.399255 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-hubble-tls\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399423 kubelet[2567]: I0212 19:49:29.399285 
2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-cilium-ipsec-secrets\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399668 kubelet[2567]: I0212 19:49:29.399317 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-cilium-config-path\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.399668 kubelet[2567]: I0212 19:49:29.399346 2567 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc-host-proc-sys-net\") pod \"cilium-7vb4z\" (UID: \"bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc\") " pod="kube-system/cilium-7vb4z" Feb 12 19:49:29.652438 env[1398]: time="2024-02-12T19:49:29.651514552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vb4z,Uid:bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc,Namespace:kube-system,Attempt:0,}" Feb 12 19:49:29.686396 env[1398]: time="2024-02-12T19:49:29.686160907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:49:29.686396 env[1398]: time="2024-02-12T19:49:29.686212107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:49:29.686396 env[1398]: time="2024-02-12T19:49:29.686229108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:49:29.686821 env[1398]: time="2024-02-12T19:49:29.686405509Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f pid=4773 runtime=io.containerd.runc.v2 Feb 12 19:49:29.706529 kubelet[2567]: E0212 19:49:29.704941 2567 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-smwgk" podUID=680f93d5-16fb-45be-b309-809700c1b476 Feb 12 19:49:29.713597 systemd[1]: run-containerd-runc-k8s.io-b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f-runc.7wN4rq.mount: Deactivated successfully. Feb 12 19:49:29.727934 kubelet[2567]: I0212 19:49:29.727904 2567 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-2665495451" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:49:29.727831914 +0000 UTC m=+265.251570372 LastTransitionTime:2024-02-12 19:49:29.727831914 +0000 UTC m=+265.251570372 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:49:29.755992 env[1398]: time="2024-02-12T19:49:29.755374317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vb4z,Uid:bdd4f1fd-2e7e-48c8-88ae-fd2978eb67dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\"" Feb 12 19:49:29.759172 env[1398]: time="2024-02-12T19:49:29.758557640Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:49:29.790728 env[1398]: 
time="2024-02-12T19:49:29.790678377Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66d35b76dde399a371f2c8ca5fa2b7376ea3acda9fd52006f76f162e1c2bfc07\"" Feb 12 19:49:29.793088 env[1398]: time="2024-02-12T19:49:29.791569683Z" level=info msg="StartContainer for \"66d35b76dde399a371f2c8ca5fa2b7376ea3acda9fd52006f76f162e1c2bfc07\"" Feb 12 19:49:29.844950 env[1398]: time="2024-02-12T19:49:29.844602874Z" level=info msg="StartContainer for \"66d35b76dde399a371f2c8ca5fa2b7376ea3acda9fd52006f76f162e1c2bfc07\" returns successfully" Feb 12 19:49:29.863644 kubelet[2567]: E0212 19:49:29.863591 2567 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:49:29.927680 env[1398]: time="2024-02-12T19:49:29.927509185Z" level=info msg="shim disconnected" id=66d35b76dde399a371f2c8ca5fa2b7376ea3acda9fd52006f76f162e1c2bfc07 Feb 12 19:49:29.927680 env[1398]: time="2024-02-12T19:49:29.927582385Z" level=warning msg="cleaning up after shim disconnected" id=66d35b76dde399a371f2c8ca5fa2b7376ea3acda9fd52006f76f162e1c2bfc07 namespace=k8s.io Feb 12 19:49:29.927680 env[1398]: time="2024-02-12T19:49:29.927597885Z" level=info msg="cleaning up dead shim" Feb 12 19:49:29.936985 env[1398]: time="2024-02-12T19:49:29.936929854Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4859 runtime=io.containerd.runc.v2\n" Feb 12 19:49:30.323562 env[1398]: time="2024-02-12T19:49:30.322981092Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:49:30.363089 env[1398]: time="2024-02-12T19:49:30.363035586Z" 
level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20b3e26b8fedba1e430311baddff993f10799690421f9ae0bf89aa7525e84524\"" Feb 12 19:49:30.365794 env[1398]: time="2024-02-12T19:49:30.363994293Z" level=info msg="StartContainer for \"20b3e26b8fedba1e430311baddff993f10799690421f9ae0bf89aa7525e84524\"" Feb 12 19:49:30.421960 env[1398]: time="2024-02-12T19:49:30.420472408Z" level=info msg="StartContainer for \"20b3e26b8fedba1e430311baddff993f10799690421f9ae0bf89aa7525e84524\" returns successfully" Feb 12 19:49:30.450418 env[1398]: time="2024-02-12T19:49:30.450366727Z" level=info msg="shim disconnected" id=20b3e26b8fedba1e430311baddff993f10799690421f9ae0bf89aa7525e84524 Feb 12 19:49:30.450418 env[1398]: time="2024-02-12T19:49:30.450417128Z" level=warning msg="cleaning up after shim disconnected" id=20b3e26b8fedba1e430311baddff993f10799690421f9ae0bf89aa7525e84524 namespace=k8s.io Feb 12 19:49:30.450751 env[1398]: time="2024-02-12T19:49:30.450428528Z" level=info msg="cleaning up dead shim" Feb 12 19:49:30.458831 env[1398]: time="2024-02-12T19:49:30.458784089Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4922 runtime=io.containerd.runc.v2\n" Feb 12 19:49:30.681220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159312573.mount: Deactivated successfully. 
Feb 12 19:49:30.709056 kubelet[2567]: I0212 19:49:30.709023 2567 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=759c567f-4944-4768-a844-ea61e3c429e0 path="/var/lib/kubelet/pods/759c567f-4944-4768-a844-ea61e3c429e0/volumes" Feb 12 19:49:31.326303 env[1398]: time="2024-02-12T19:49:31.326259357Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:49:31.359080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161971498.mount: Deactivated successfully. Feb 12 19:49:31.363374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153908249.mount: Deactivated successfully. Feb 12 19:49:31.370530 env[1398]: time="2024-02-12T19:49:31.370483581Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30024baf5c4d98b1b6c59bb6a42f0455b399dc696196fb709a97e3ca74db4d29\"" Feb 12 19:49:31.371482 env[1398]: time="2024-02-12T19:49:31.371449588Z" level=info msg="StartContainer for \"30024baf5c4d98b1b6c59bb6a42f0455b399dc696196fb709a97e3ca74db4d29\"" Feb 12 19:49:31.443389 env[1398]: time="2024-02-12T19:49:31.443328415Z" level=info msg="StartContainer for \"30024baf5c4d98b1b6c59bb6a42f0455b399dc696196fb709a97e3ca74db4d29\" returns successfully" Feb 12 19:49:31.475251 env[1398]: time="2024-02-12T19:49:31.475192848Z" level=info msg="shim disconnected" id=30024baf5c4d98b1b6c59bb6a42f0455b399dc696196fb709a97e3ca74db4d29 Feb 12 19:49:31.475251 env[1398]: time="2024-02-12T19:49:31.475248149Z" level=warning msg="cleaning up after shim disconnected" id=30024baf5c4d98b1b6c59bb6a42f0455b399dc696196fb709a97e3ca74db4d29 namespace=k8s.io Feb 12 19:49:31.475251 env[1398]: time="2024-02-12T19:49:31.475259449Z" level=info msg="cleaning up dead shim" Feb 12 19:49:31.482855 env[1398]: 
time="2024-02-12T19:49:31.482806704Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4980 runtime=io.containerd.runc.v2\n" Feb 12 19:49:31.681325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30024baf5c4d98b1b6c59bb6a42f0455b399dc696196fb709a97e3ca74db4d29-rootfs.mount: Deactivated successfully. Feb 12 19:49:31.705671 kubelet[2567]: E0212 19:49:31.705620 2567 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-smwgk" podUID=680f93d5-16fb-45be-b309-809700c1b476 Feb 12 19:49:32.329992 env[1398]: time="2024-02-12T19:49:32.329938907Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:49:32.367558 env[1398]: time="2024-02-12T19:49:32.367505581Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd\"" Feb 12 19:49:32.368236 env[1398]: time="2024-02-12T19:49:32.368203586Z" level=info msg="StartContainer for \"2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd\"" Feb 12 19:49:32.423736 env[1398]: time="2024-02-12T19:49:32.423665992Z" level=info msg="StartContainer for \"2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd\" returns successfully" Feb 12 19:49:32.460216 env[1398]: time="2024-02-12T19:49:32.460164459Z" level=info msg="shim disconnected" id=2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd Feb 12 19:49:32.460216 env[1398]: time="2024-02-12T19:49:32.460213859Z" 
level=warning msg="cleaning up after shim disconnected" id=2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd namespace=k8s.io Feb 12 19:49:32.460216 env[1398]: time="2024-02-12T19:49:32.460225359Z" level=info msg="cleaning up dead shim" Feb 12 19:49:32.468229 env[1398]: time="2024-02-12T19:49:32.468179617Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5035 runtime=io.containerd.runc.v2\n" Feb 12 19:49:32.681517 systemd[1]: run-containerd-runc-k8s.io-2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd-runc.KxtZiV.mount: Deactivated successfully. Feb 12 19:49:32.681787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca5ec3fd9727d8dbcb45e27ba1f6a8c4268302e2edba9054ce9a0a16709adbd-rootfs.mount: Deactivated successfully. Feb 12 19:49:33.341189 env[1398]: time="2024-02-12T19:49:33.338306173Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:49:33.385032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620494095.mount: Deactivated successfully. 
Feb 12 19:49:33.396949 env[1398]: time="2024-02-12T19:49:33.396892700Z" level=info msg="CreateContainer within sandbox \"b5ee7c371dd7e37c1bba703368419f75ba66e75041b87ca6e9062ac54251345f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4affd004d91ee58817f6feaa97bfea3af0eadb2cba65d56f61118b4e0fe4be67\"" Feb 12 19:49:33.399646 env[1398]: time="2024-02-12T19:49:33.397674106Z" level=info msg="StartContainer for \"4affd004d91ee58817f6feaa97bfea3af0eadb2cba65d56f61118b4e0fe4be67\"" Feb 12 19:49:33.461651 env[1398]: time="2024-02-12T19:49:33.461595572Z" level=info msg="StartContainer for \"4affd004d91ee58817f6feaa97bfea3af0eadb2cba65d56f61118b4e0fe4be67\" returns successfully" Feb 12 19:49:33.705983 kubelet[2567]: E0212 19:49:33.705932 2567 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-smwgk" podUID=680f93d5-16fb-45be-b309-809700c1b476 Feb 12 19:49:33.818739 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 19:49:34.351092 kubelet[2567]: I0212 19:49:34.351046 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7vb4z" podStartSLOduration=5.351004452 pod.CreationTimestamp="2024-02-12 19:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:49:34.35061535 +0000 UTC m=+269.874353708" watchObservedRunningTime="2024-02-12 19:49:34.351004452 +0000 UTC m=+269.874742810" Feb 12 19:49:35.691273 update_engine[1385]: I0212 19:49:35.690760 1385 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:49:35.691273 update_engine[1385]: I0212 19:49:35.691004 1385 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:49:35.691273 
update_engine[1385]: I0212 19:49:35.691226 1385 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:49:35.713810 update_engine[1385]: E0212 19:49:35.713618 1385 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:49:35.713810 update_engine[1385]: I0212 19:49:35.713766 1385 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 12 19:49:36.542842 systemd-networkd[1576]: lxc_health: Link UP Feb 12 19:49:36.557817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:49:36.560431 systemd-networkd[1576]: lxc_health: Gained carrier Feb 12 19:49:37.813890 systemd-networkd[1576]: lxc_health: Gained IPv6LL Feb 12 19:49:38.595429 systemd[1]: run-containerd-runc-k8s.io-4affd004d91ee58817f6feaa97bfea3af0eadb2cba65d56f61118b4e0fe4be67-runc.8UcyTx.mount: Deactivated successfully. Feb 12 19:49:43.013013 kubelet[2567]: E0212 19:49:43.012593 2567 upgradeaware.go:440] Error proxying data from backend to client: read tcp 127.0.0.1:52080->127.0.0.1:41341: read: connection reset by peer Feb 12 19:49:43.113611 sshd[4656]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:43.117598 systemd[1]: sshd@24-10.200.8.31:22-10.200.12.6:44508.service: Deactivated successfully. Feb 12 19:49:43.119946 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 19:49:43.120801 systemd-logind[1384]: Session 27 logged out. Waiting for processes to exit. Feb 12 19:49:43.122824 systemd-logind[1384]: Removed session 27. Feb 12 19:49:45.685397 update_engine[1385]: I0212 19:49:45.685321 1385 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:49:45.685928 update_engine[1385]: I0212 19:49:45.685662 1385 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:49:45.685991 update_engine[1385]: I0212 19:49:45.685966 1385 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 12 19:49:45.707164 update_engine[1385]: E0212 19:49:45.707105 1385 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:49:45.707382 update_engine[1385]: I0212 19:49:45.707253 1385 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:49:45.707382 update_engine[1385]: I0212 19:49:45.707269 1385 omaha_request_action.cc:621] Omaha request response: Feb 12 19:49:45.707382 update_engine[1385]: E0212 19:49:45.707368 1385 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707387 1385 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707393 1385 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707399 1385 update_attempter.cc:306] Processing Done. Feb 12 19:49:45.707552 update_engine[1385]: E0212 19:49:45.707417 1385 update_attempter.cc:619] Update failed. Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707422 1385 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707429 1385 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707437 1385 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 12 19:49:45.707552 update_engine[1385]: I0212 19:49:45.707538 1385 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:49:45.707988 update_engine[1385]: I0212 19:49:45.707566 1385 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:49:45.707988 update_engine[1385]: I0212 19:49:45.707573 1385 omaha_request_action.cc:271] Request: Feb 12 19:49:45.707988 update_engine[1385]: Feb 12 19:49:45.707988 update_engine[1385]: Feb 12 19:49:45.707988 update_engine[1385]: Feb 12 19:49:45.707988 update_engine[1385]: Feb 12 19:49:45.707988 update_engine[1385]: Feb 12 19:49:45.707988 update_engine[1385]: Feb 12 19:49:45.707988 update_engine[1385]: I0212 19:49:45.707580 1385 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:49:45.708377 update_engine[1385]: I0212 19:49:45.708002 1385 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:49:45.708377 update_engine[1385]: I0212 19:49:45.708237 1385 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 12 19:49:45.708638 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 12 19:49:45.712323 update_engine[1385]: E0212 19:49:45.712301 1385 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 12 19:49:45.712424 update_engine[1385]: I0212 19:49:45.712391 1385 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 12 19:49:45.712424 update_engine[1385]: I0212 19:49:45.712401 1385 omaha_request_action.cc:621] Omaha request response:
Feb 12 19:49:45.712424 update_engine[1385]: I0212 19:49:45.712407 1385 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 12 19:49:45.712424 update_engine[1385]: I0212 19:49:45.712411 1385 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 12 19:49:45.712424 update_engine[1385]: I0212 19:49:45.712415 1385 update_attempter.cc:306] Processing Done.
Feb 12 19:49:45.712424 update_engine[1385]: I0212 19:49:45.712421 1385 update_attempter.cc:310] Error event sent.
Feb 12 19:49:45.712649 update_engine[1385]: I0212 19:49:45.712430 1385 update_check_scheduler.cc:74] Next update check in 49m32s
Feb 12 19:49:45.712827 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 12 19:49:57.242668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491-rootfs.mount: Deactivated successfully.
Feb 12 19:49:57.398487 env[1398]: time="2024-02-12T19:49:57.398435924Z" level=info msg="shim disconnected" id=8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491
Feb 12 19:49:57.398487 env[1398]: time="2024-02-12T19:49:57.398483324Z" level=warning msg="cleaning up after shim disconnected" id=8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491 namespace=k8s.io
Feb 12 19:49:57.399141 env[1398]: time="2024-02-12T19:49:57.398498824Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:57.406470 env[1398]: time="2024-02-12T19:49:57.406427979Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5723 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:58.392217 kubelet[2567]: I0212 19:49:58.392176 2567 scope.go:115] "RemoveContainer" containerID="8a2e25d37db8970837603bc56c7fba04bed3123cd2c778d33fc9e750aef96491"
Feb 12 19:49:58.394496 env[1398]: time="2024-02-12T19:49:58.394456535Z" level=info msg="CreateContainer within sandbox \"d32a85294d5e92fbeee8766ea7746e164b9985c7b437c362c5ca1a4b6a685c52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 12 19:49:58.469905 env[1398]: time="2024-02-12T19:49:58.469846358Z" level=info msg="CreateContainer within sandbox \"d32a85294d5e92fbeee8766ea7746e164b9985c7b437c362c5ca1a4b6a685c52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5b172a390b2cfd6cb96e6ba4e767afa01c836317faf30576fbd437f09c5df9b1\""
Feb 12 19:49:58.470548 env[1398]: time="2024-02-12T19:49:58.470514262Z" level=info msg="StartContainer for \"5b172a390b2cfd6cb96e6ba4e767afa01c836317faf30576fbd437f09c5df9b1\""
Feb 12 19:49:58.550735 env[1398]: time="2024-02-12T19:49:58.548522203Z" level=info msg="StartContainer for \"5b172a390b2cfd6cb96e6ba4e767afa01c836317faf30576fbd437f09c5df9b1\" returns successfully"
Feb 12 19:49:59.458058 systemd[1]: run-containerd-runc-k8s.io-5b172a390b2cfd6cb96e6ba4e767afa01c836317faf30576fbd437f09c5df9b1-runc.zHti4U.mount: Deactivated successfully.
Feb 12 19:50:01.115232 kubelet[2567]: E0212 19:50:01.114563 2567 controller.go:189] failed to update lease, error: Put "https://10.200.8.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2665495451?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 12 19:50:01.304493 kubelet[2567]: E0212 19:50:01.304373 2567 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-2665495451.17b33564d2498fa7", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-2665495451", UID:"98be0c838f73a2dff0660c880a5a41f6", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2665495451"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 49, 50, 844587943, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 49, 50, 844587943, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.31:47088->10.200.8.33:2379: read: connection timed out' (will not retry!)
Feb 12 19:50:01.626800 kubelet[2567]: E0212 19:50:01.626748 2567 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.31:47290->10.200.8.33:2379: read: connection timed out
Feb 12 19:50:01.648841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c-rootfs.mount: Deactivated successfully.
Feb 12 19:50:01.680659 env[1398]: time="2024-02-12T19:50:01.680595155Z" level=info msg="shim disconnected" id=193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c
Feb 12 19:50:01.680659 env[1398]: time="2024-02-12T19:50:01.680661955Z" level=warning msg="cleaning up after shim disconnected" id=193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c namespace=k8s.io
Feb 12 19:50:01.681324 env[1398]: time="2024-02-12T19:50:01.680674455Z" level=info msg="cleaning up dead shim"
Feb 12 19:50:01.688681 env[1398]: time="2024-02-12T19:50:01.688638210Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:50:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5784 runtime=io.containerd.runc.v2\n"
Feb 12 19:50:02.403684 kubelet[2567]: I0212 19:50:02.403651 2567 scope.go:115] "RemoveContainer" containerID="193856c3b783dda1dc0d2114ae4cdae6c57ed8dddd7296395c73370e560c859c"
Feb 12 19:50:02.405556 env[1398]: time="2024-02-12T19:50:02.405515350Z" level=info msg="CreateContainer within sandbox \"a168307116f465a87c8607d062e3d98c9e47ca8c8351285737da2745acc5c4af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 12 19:50:02.443365 env[1398]: time="2024-02-12T19:50:02.443307610Z" level=info msg="CreateContainer within sandbox \"a168307116f465a87c8607d062e3d98c9e47ca8c8351285737da2745acc5c4af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9b2dae3b98f969eaace3c2e72dceb66d3e87d5563ee11132a7735aafc2fa3ddb\""
Feb 12 19:50:02.443967 env[1398]: time="2024-02-12T19:50:02.443933415Z" level=info msg="StartContainer for \"9b2dae3b98f969eaace3c2e72dceb66d3e87d5563ee11132a7735aafc2fa3ddb\""
Feb 12 19:50:02.528662 env[1398]: time="2024-02-12T19:50:02.528561997Z" level=info msg="StartContainer for \"9b2dae3b98f969eaace3c2e72dceb66d3e87d5563ee11132a7735aafc2fa3ddb\" returns successfully"
Feb 12 19:50:02.649290 systemd[1]: run-containerd-runc-k8s.io-9b2dae3b98f969eaace3c2e72dceb66d3e87d5563ee11132a7735aafc2fa3ddb-runc.Eie3Vu.mount: Deactivated successfully.
Feb 12 19:50:04.681606 env[1398]: time="2024-02-12T19:50:04.681548697Z" level=info msg="StopPodSandbox for \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\""
Feb 12 19:50:04.682158 env[1398]: time="2024-02-12T19:50:04.681687397Z" level=info msg="TearDown network for sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" successfully"
Feb 12 19:50:04.682158 env[1398]: time="2024-02-12T19:50:04.681749598Z" level=info msg="StopPodSandbox for \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" returns successfully"
Feb 12 19:50:04.682265 env[1398]: time="2024-02-12T19:50:04.682157401Z" level=info msg="RemovePodSandbox for \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\""
Feb 12 19:50:04.682265 env[1398]: time="2024-02-12T19:50:04.682193901Z" level=info msg="Forcibly stopping sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\""
Feb 12 19:50:04.682347 env[1398]: time="2024-02-12T19:50:04.682284402Z" level=info msg="TearDown network for sandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" successfully"
Feb 12 19:50:04.690490 env[1398]: time="2024-02-12T19:50:04.690446658Z" level=info msg="RemovePodSandbox \"212d731990052c930938455099d8e5537bf4dc7b86331e066987a297b5f15ef7\" returns successfully"
Feb 12 19:50:04.691399 env[1398]: time="2024-02-12T19:50:04.691356764Z" level=info msg="StopPodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\""
Feb 12 19:50:04.691543 env[1398]: time="2024-02-12T19:50:04.691494765Z" level=info msg="TearDown network for sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" successfully"
Feb 12 19:50:04.691596 env[1398]: time="2024-02-12T19:50:04.691546165Z" level=info msg="StopPodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" returns successfully"
Feb 12 19:50:04.692413 env[1398]: time="2024-02-12T19:50:04.692239470Z" level=info msg="RemovePodSandbox for \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\""
Feb 12 19:50:04.692530 env[1398]: time="2024-02-12T19:50:04.692417471Z" level=info msg="Forcibly stopping sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\""
Feb 12 19:50:04.692583 env[1398]: time="2024-02-12T19:50:04.692534772Z" level=info msg="TearDown network for sandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" successfully"
Feb 12 19:50:04.703659 env[1398]: time="2024-02-12T19:50:04.703607648Z" level=info msg="RemovePodSandbox \"92d1a6670f2732a83bdb42dde32d347717c731142fdfaae5cfb142f513ef9f2f\" returns successfully"
Feb 12 19:50:04.704090 env[1398]: time="2024-02-12T19:50:04.704060851Z" level=info msg="StopPodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\""
Feb 12 19:50:04.704200 env[1398]: time="2024-02-12T19:50:04.704151452Z" level=info msg="TearDown network for sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" successfully"
Feb 12 19:50:04.704200 env[1398]: time="2024-02-12T19:50:04.704193052Z" level=info msg="StopPodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" returns successfully"
Feb 12 19:50:04.704490 env[1398]: time="2024-02-12T19:50:04.704465654Z" level=info msg="RemovePodSandbox for \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\""
Feb 12 19:50:04.704584 env[1398]: time="2024-02-12T19:50:04.704491554Z" level=info msg="Forcibly stopping sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\""
Feb 12 19:50:04.704584 env[1398]: time="2024-02-12T19:50:04.704570155Z" level=info msg="TearDown network for sandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" successfully"
Feb 12 19:50:04.711428 env[1398]: time="2024-02-12T19:50:04.711395401Z" level=info msg="RemovePodSandbox \"217df3f84160aaa3a386ba40bdd36a29372a4861f5b9c1ce3385f8424ec04857\" returns successfully"
Feb 12 19:50:11.627623 kubelet[2567]: E0212 19:50:11.627583 2567 controller.go:189] failed to update lease, error: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-3510.3.2-a-2665495451)