Dec 13 01:48:46.017677 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:48:46.017712 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:48:46.017727 kernel: BIOS-provided physical RAM map:
Dec 13 01:48:46.017737 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:48:46.017747 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 01:48:46.017757 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 01:48:46.017773 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 01:48:46.017785 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 01:48:46.017795 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 01:48:46.017806 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 01:48:46.017817 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 01:48:46.017827 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 01:48:46.017838 kernel: NX (Execute Disable) protection: active
Dec 13 01:48:46.017848 kernel: efi: EFI v2.70 by Microsoft
Dec 13 01:48:46.017865 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 
Dec 13 01:48:46.017878 kernel: random: crng init done
Dec 13 01:48:46.017889 kernel: SMBIOS 3.1.0 present.
Dec 13 01:48:46.017901 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 01:48:46.017913 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 01:48:46.017925 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 01:48:46.017936 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 01:48:46.017948 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 01:48:46.017961 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 01:48:46.017972 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 01:48:46.017984 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 01:48:46.017995 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 01:48:46.018007 kernel: tsc: Detected 2593.906 MHz processor
Dec 13 01:48:46.018020 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:48:46.018031 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:48:46.018044 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 01:48:46.018056 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 01:48:46.018068 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 01:48:46.018083 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 01:48:46.018095 kernel: Using GB pages for direct mapping
Dec 13 01:48:46.018107 kernel: Secure boot disabled
Dec 13 01:48:46.018119 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:48:46.018130 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 01:48:46.018142 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018155 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018167 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
Dec 13 01:48:46.018188 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 01:48:46.018200 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018213 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018226 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018239 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018264 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018280 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018292 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:48:46.018305 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 01:48:46.018317 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 01:48:46.018330 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 01:48:46.018342 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 01:48:46.018354 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 01:48:46.018366 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 01:48:46.018381 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 01:48:46.018392 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 01:48:46.018404 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 01:48:46.018416 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 01:48:46.018427 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:48:46.018439 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:48:46.018451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 01:48:46.018463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 01:48:46.018475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 01:48:46.018489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 01:48:46.018501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 01:48:46.018514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 01:48:46.018526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 01:48:46.018538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 01:48:46.018550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 01:48:46.018563 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 01:48:46.018577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 01:48:46.018590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 01:48:46.018606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 01:48:46.018620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 01:48:46.018634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 01:48:46.018647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 01:48:46.018660 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 01:48:46.018674 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 01:48:46.018688 kernel: Zone ranges:
Dec 13 01:48:46.018702 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:48:46.018715 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:48:46.018730 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:48:46.018744 kernel: Movable zone start for each node
Dec 13 01:48:46.018758 kernel: Early memory node ranges
Dec 13 01:48:46.018772 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:48:46.018785 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 01:48:46.018799 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 01:48:46.018811 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:48:46.018822 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 01:48:46.018834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:48:46.018848 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:48:46.018859 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 01:48:46.018871 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 01:48:46.018882 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 01:48:46.018894 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:48:46.018907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:48:46.018919 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:48:46.018931 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 01:48:46.018943 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:48:46.018957 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 01:48:46.018969 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 01:48:46.018980 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:48:46.018992 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:48:46.019004 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 01:48:46.019015 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 01:48:46.019027 kernel: pcpu-alloc: [0] 0 1 
Dec 13 01:48:46.019039 kernel: Hyper-V: PV spinlocks enabled
Dec 13 01:48:46.019050 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:48:46.019064 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062618
Dec 13 01:48:46.019076 kernel: Policy zone: Normal
Dec 13 01:48:46.019090 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:48:46.019102 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:48:46.019114 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:48:46.019126 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:48:46.019138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:48:46.019150 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Dec 13 01:48:46.019165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:48:46.019178 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:48:46.019199 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:48:46.019214 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:48:46.019227 kernel: rcu:         RCU event tracing is enabled.
Dec 13 01:48:46.019240 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:48:46.019265 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 01:48:46.019277 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 01:48:46.019290 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:48:46.019303 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:48:46.019316 kernel: Using NULL legacy PIC
Dec 13 01:48:46.019332 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 01:48:46.019345 kernel: Console: colour dummy device 80x25
Dec 13 01:48:46.019358 kernel: printk: console [tty1] enabled
Dec 13 01:48:46.019371 kernel: printk: console [ttyS0] enabled
Dec 13 01:48:46.019384 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 01:48:46.019399 kernel: ACPI: Core revision 20210730
Dec 13 01:48:46.019411 kernel: Failed to register legacy timer interrupt
Dec 13 01:48:46.019424 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:48:46.019437 kernel: Hyper-V: Using IPI hypercalls
Dec 13 01:48:46.019450 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Dec 13 01:48:46.019463 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:48:46.019476 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:48:46.019489 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:48:46.019502 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:48:46.019514 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:48:46.019529 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:48:46.019543 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:48:46.019556 kernel: RETBleed: Vulnerable
Dec 13 01:48:46.019569 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:48:46.019582 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:48:46.019595 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:48:46.019608 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:48:46.019621 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:48:46.019634 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:48:46.019647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:48:46.019662 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:48:46.019675 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:48:46.019688 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:48:46.019701 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 01:48:46.019714 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Dec 13 01:48:46.019727 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Dec 13 01:48:46.019740 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 01:48:46.019753 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 01:48:46.019766 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:48:46.019780 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:48:46.019793 kernel: LSM: Security Framework initializing
Dec 13 01:48:46.019806 kernel: SELinux:  Initializing.
Dec 13 01:48:46.019821 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:48:46.019834 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:48:46.019847 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:48:46.019860 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:48:46.019874 kernel: signal: max sigframe size: 3632
Dec 13 01:48:46.019887 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:48:46.019900 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:48:46.019914 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:48:46.019927 kernel: x86: Booting SMP configuration:
Dec 13 01:48:46.019941 kernel: .... node  #0, CPUs:      #1
Dec 13 01:48:46.019957 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 01:48:46.019971 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:48:46.019984 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:48:46.019997 kernel: smpboot: Max logical packages: 1
Dec 13 01:48:46.020011 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 01:48:46.020023 kernel: devtmpfs: initialized
Dec 13 01:48:46.020036 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:48:46.020049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 01:48:46.020065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:48:46.020078 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:48:46.020092 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:48:46.020104 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:48:46.020118 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:48:46.020131 kernel: audit: type=2000 audit(1734054525.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:48:46.020144 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:48:46.020159 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:48:46.020172 kernel: cpuidle: using governor menu
Dec 13 01:48:46.020188 kernel: ACPI: bus type PCI registered
Dec 13 01:48:46.020201 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:48:46.020213 kernel: dca service started, version 1.12.1
Dec 13 01:48:46.020227 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:48:46.020239 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:48:46.020262 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:48:46.020275 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:48:46.020287 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:48:46.020300 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:48:46.020315 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:48:46.020328 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:48:46.020340 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:48:46.020353 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:48:46.020366 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:48:46.020378 kernel: ACPI: Interpreter enabled
Dec 13 01:48:46.020391 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:48:46.020404 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:48:46.020417 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:48:46.020432 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 01:48:46.020445 kernel: iommu: Default domain type: Translated 
Dec 13 01:48:46.020458 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 13 01:48:46.020471 kernel: vgaarb: loaded
Dec 13 01:48:46.020483 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:48:46.020496 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 01:48:46.020509 kernel: PTP clock support registered
Dec 13 01:48:46.020522 kernel: Registered efivars operations
Dec 13 01:48:46.020535 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:48:46.020548 kernel: PCI: System does not support PCI
Dec 13 01:48:46.020563 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 01:48:46.020576 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:48:46.020589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:48:46.020602 kernel: pnp: PnP ACPI init
Dec 13 01:48:46.020614 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 01:48:46.020628 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:48:46.020641 kernel: NET: Registered PF_INET protocol family
Dec 13 01:48:46.020654 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:48:46.020670 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:48:46.020684 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:48:46.020697 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:48:46.020710 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 01:48:46.020723 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:48:46.020736 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:48:46.020749 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:48:46.020762 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:48:46.020775 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:48:46.020791 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:48:46.020804 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:48:46.020818 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 01:48:46.020831 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:48:46.020844 kernel: Initialise system trusted keyrings
Dec 13 01:48:46.020857 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:48:46.020870 kernel: Key type asymmetric registered
Dec 13 01:48:46.020881 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:48:46.020893 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:48:46.020908 kernel: io scheduler mq-deadline registered
Dec 13 01:48:46.020921 kernel: io scheduler kyber registered
Dec 13 01:48:46.020932 kernel: io scheduler bfq registered
Dec 13 01:48:46.020946 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:48:46.020957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:48:46.020971 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:48:46.020984 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:48:46.020996 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:48:46.021162 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 01:48:46.021296 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:48:45 UTC (1734054525)
Dec 13 01:48:46.021408 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 01:48:46.021426 kernel: fail to initialize ptp_kvm
Dec 13 01:48:46.021439 kernel: intel_pstate: CPU model not supported
Dec 13 01:48:46.021451 kernel: efifb: probing for efifb
Dec 13 01:48:46.021463 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:48:46.021476 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:48:46.021487 kernel: efifb: scrolling: redraw
Dec 13 01:48:46.021504 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:48:46.021518 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:48:46.021532 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:48:46.021546 kernel: pstore: Registered efi as persistent store backend
Dec 13 01:48:46.021558 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:48:46.021571 kernel: Segment Routing with IPv6
Dec 13 01:48:46.021584 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:48:46.021598 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:48:46.021611 kernel: Key type dns_resolver registered
Dec 13 01:48:46.021628 kernel: IPI shorthand broadcast: enabled
Dec 13 01:48:46.021642 kernel: sched_clock: Marking stable (748761200, 18904600)->(927236400, -159570600)
Dec 13 01:48:46.021656 kernel: registered taskstats version 1
Dec 13 01:48:46.021670 kernel: Loading compiled-in X.509 certificates
Dec 13 01:48:46.021684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:48:46.021698 kernel: Key type .fscrypt registered
Dec 13 01:48:46.021712 kernel: Key type fscrypt-provisioning registered
Dec 13 01:48:46.021726 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:48:46.021743 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:48:46.021757 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:48:46.021768 kernel: ima: No architecture policies found
Dec 13 01:48:46.021780 kernel: clk: Disabling unused clocks
Dec 13 01:48:46.021791 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 01:48:46.021803 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 01:48:46.021814 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 01:48:46.021827 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 01:48:46.021840 kernel: Run /init as init process
Dec 13 01:48:46.021854 kernel:   with arguments:
Dec 13 01:48:46.021870 kernel:     /init
Dec 13 01:48:46.021883 kernel:   with environment:
Dec 13 01:48:46.021896 kernel:     HOME=/
Dec 13 01:48:46.021909 kernel:     TERM=linux
Dec 13 01:48:46.021922 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:48:46.021939 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:48:46.021956 systemd[1]: Detected virtualization microsoft.
Dec 13 01:48:46.021972 systemd[1]: Detected architecture x86-64.
Dec 13 01:48:46.021984 systemd[1]: Running in initrd.
Dec 13 01:48:46.021997 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:48:46.022011 systemd[1]: Hostname set to <localhost>.
Dec 13 01:48:46.022026 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:48:46.022040 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:48:46.022054 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:48:46.022065 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:48:46.022078 systemd[1]: Reached target paths.target.
Dec 13 01:48:46.022091 systemd[1]: Reached target slices.target.
Dec 13 01:48:46.022103 systemd[1]: Reached target swap.target.
Dec 13 01:48:46.022116 systemd[1]: Reached target timers.target.
Dec 13 01:48:46.022128 systemd[1]: Listening on iscsid.socket.
Dec 13 01:48:46.022140 systemd[1]: Listening on iscsiuio.socket.
Dec 13 01:48:46.022153 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:48:46.022166 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:48:46.022181 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:48:46.022194 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:48:46.022208 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:48:46.022221 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:48:46.022234 systemd[1]: Reached target sockets.target.
Dec 13 01:48:46.022267 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:48:46.022282 systemd[1]: Finished network-cleanup.service.
Dec 13 01:48:46.022297 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:48:46.022310 systemd[1]: Starting systemd-journald.service...
Dec 13 01:48:46.022328 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:48:46.022343 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:48:46.022355 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 01:48:46.022372 systemd-journald[183]: Journal started
Dec 13 01:48:46.022439 systemd-journald[183]: Runtime Journal (/run/log/journal/e97f9c6c094140b9aaa1224200de9165) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:48:46.022867 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:48:46.030545 systemd[1]: Started systemd-journald.service.
Dec 13 01:48:46.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.048959 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:48:46.051087 kernel: audit: type=1130 audit(1734054526.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.059520 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:48:46.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.063450 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 01:48:46.082431 kernel: audit: type=1130 audit(1734054526.059:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.063459 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:48:46.063494 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:48:46.066192 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 01:48:46.080518 systemd[1]: Started systemd-resolved.service.
Dec 13 01:48:46.110666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:48:46.099177 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 01:48:46.112952 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:48:46.118330 kernel: Bridge firewalling registered
Dec 13 01:48:46.118162 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:48:46.121481 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 01:48:46.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.136725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:48:46.139153 kernel: audit: type=1130 audit(1734054526.080:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.154852 kernel: audit: type=1130 audit(1734054526.098:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.145355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:48:46.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.180709 kernel: audit: type=1130 audit(1734054526.112:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.180746 kernel: SCSI subsystem initialized
Dec 13 01:48:46.180763 kernel: audit: type=1130 audit(1734054526.153:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.193321 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 01:48:46.210334 kernel: audit: type=1130 audit(1734054526.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.196151 systemd[1]: Starting dracut-cmdline.service...
Dec 13 01:48:46.212137 dracut-cmdline[200]: dracut-dracut-053
Dec 13 01:48:46.212137 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:48:46.237188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:48:46.237209 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:48:46.242261 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 01:48:46.246326 systemd-modules-load[184]: Inserted module 'dm_multipath'
Dec 13 01:48:46.249199 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:48:46.265737 kernel: audit: type=1130 audit(1734054526.251:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.266900 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:48:46.276744 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:48:46.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.291261 kernel: audit: type=1130 audit(1734054526.278:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.295264 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:48:46.313264 kernel: iscsi: registered transport (tcp)
Dec 13 01:48:46.339254 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:48:46.339295 kernel: QLogic iSCSI HBA Driver
Dec 13 01:48:46.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.367826 systemd[1]: Finished dracut-cmdline.service.
Dec 13 01:48:46.371105 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 01:48:46.420260 kernel: raid6: avx512x4 gen() 18454 MB/s
Dec 13 01:48:46.439253 kernel: raid6: avx512x4 xor()  8267 MB/s
Dec 13 01:48:46.458255 kernel: raid6: avx512x2 gen() 18347 MB/s
Dec 13 01:48:46.478258 kernel: raid6: avx512x2 xor() 29942 MB/s
Dec 13 01:48:46.497259 kernel: raid6: avx512x1 gen() 18294 MB/s
Dec 13 01:48:46.517258 kernel: raid6: avx512x1 xor() 26868 MB/s
Dec 13 01:48:46.537258 kernel: raid6: avx2x4   gen() 18326 MB/s
Dec 13 01:48:46.556257 kernel: raid6: avx2x4   xor()  7738 MB/s
Dec 13 01:48:46.576252 kernel: raid6: avx2x2   gen() 18307 MB/s
Dec 13 01:48:46.596257 kernel: raid6: avx2x2   xor() 22221 MB/s
Dec 13 01:48:46.615254 kernel: raid6: avx2x1   gen() 13526 MB/s
Dec 13 01:48:46.634253 kernel: raid6: avx2x1   xor() 19475 MB/s
Dec 13 01:48:46.654255 kernel: raid6: sse2x4   gen() 11736 MB/s
Dec 13 01:48:46.673258 kernel: raid6: sse2x4   xor()  7207 MB/s
Dec 13 01:48:46.693257 kernel: raid6: sse2x2   gen() 12890 MB/s
Dec 13 01:48:46.713259 kernel: raid6: sse2x2   xor()  7509 MB/s
Dec 13 01:48:46.732254 kernel: raid6: sse2x1   gen() 11617 MB/s
Dec 13 01:48:46.754192 kernel: raid6: sse2x1   xor()  5905 MB/s
Dec 13 01:48:46.754221 kernel: raid6: using algorithm avx512x4 gen() 18454 MB/s
Dec 13 01:48:46.754235 kernel: raid6: .... xor() 8267 MB/s, rmw enabled
Dec 13 01:48:46.760720 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 01:48:46.775264 kernel: xor: automatically using best checksumming function   avx       
Dec 13 01:48:46.870270 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 01:48:46.878343 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 01:48:46.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.882000 audit: BPF prog-id=7 op=LOAD
Dec 13 01:48:46.882000 audit: BPF prog-id=8 op=LOAD
Dec 13 01:48:46.882996 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:48:46.897458 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Dec 13 01:48:46.904039 systemd[1]: Started systemd-udevd.service.
Dec 13 01:48:46.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.912700 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 01:48:46.928307 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Dec 13 01:48:46.956912 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 01:48:46.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:46.959982 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:48:46.997152 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:48:46.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:47.044263 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:48:47.060262 kernel: hv_vmbus: Vmbus version:5.2
Dec 13 01:48:47.075263 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:48:47.089261 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:48:47.094261 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:48:47.099623 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:48:47.110550 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 13 01:48:47.110585 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Dec 13 01:48:47.115269 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:48:47.123259 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 13 01:48:47.133260 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:48:47.139259 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:48:47.152250 kernel: scsi host0: storvsc_host_t
Dec 13 01:48:47.152416 kernel: scsi host1: storvsc_host_t
Dec 13 01:48:47.152444 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Dec 13 01:48:47.159261 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Dec 13 01:48:47.182337 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:48:47.185241 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:48:47.185273 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:48:47.198156 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 01:48:47.215211 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:48:47.215389 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:48:47.215540 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 01:48:47.215688 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 01:48:47.215834 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:48:47.215853 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:48:47.247266 kernel: hv_netvsc 7c1e5235-89b6-7c1e-5235-89b67c1e5235 eth0: VF slot 1 added
Dec 13 01:48:47.256263 kernel: hv_vmbus: registering driver hv_pci
Dec 13 01:48:47.262265 kernel: hv_pci 9e04c5d8-57b1-40ed-b3e0-a2d255b7883c: PCI VMBus probing: Using version 0x10004
Dec 13 01:48:47.335691 kernel: hv_pci 9e04c5d8-57b1-40ed-b3e0-a2d255b7883c: PCI host bridge to bus 57b1:00
Dec 13 01:48:47.335856 kernel: pci_bus 57b1:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Dec 13 01:48:47.336021 kernel: pci_bus 57b1:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 01:48:47.336181 kernel: pci 57b1:00:02.0: [15b3:1016] type 00 class 0x020000
Dec 13 01:48:47.336364 kernel: pci 57b1:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 01:48:47.336521 kernel: pci 57b1:00:02.0: enabling Extended Tags
Dec 13 01:48:47.336673 kernel: pci 57b1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 57b1:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 01:48:47.336822 kernel: pci_bus 57b1:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 01:48:47.336960 kernel: pci 57b1:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 01:48:47.429270 kernel: mlx5_core 57b1:00:02.0: firmware version: 14.30.5000
Dec 13 01:48:47.685441 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (429)
Dec 13 01:48:47.685466 kernel: mlx5_core 57b1:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 01:48:47.685648 kernel: mlx5_core 57b1:00:02.0: Supported tc offload range - chains: 1, prios: 1
Dec 13 01:48:47.685834 kernel: mlx5_core 57b1:00:02.0: mlx5e_tc_post_act_init:40:(pid 187): firmware level support is missing
Dec 13 01:48:47.686000 kernel: hv_netvsc 7c1e5235-89b6-7c1e-5235-89b67c1e5235 eth0: VF registering: eth1
Dec 13 01:48:47.686143 kernel: mlx5_core 57b1:00:02.0 eth1: joined to eth0
Dec 13 01:48:47.570182 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 01:48:47.604646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:48:47.694265 kernel: mlx5_core 57b1:00:02.0 enP22449s1: renamed from eth1
Dec 13 01:48:47.755880 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 01:48:47.802778 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 01:48:47.805194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 01:48:47.810840 systemd[1]: Starting disk-uuid.service...
Dec 13 01:48:47.826275 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:48:47.833259 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:48:48.842181 disk-uuid[560]: The operation has completed successfully.
Dec 13 01:48:48.844362 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:48:48.913754 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:48:48.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:48.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:48.913853 systemd[1]: Finished disk-uuid.service.
Dec 13 01:48:48.924493 systemd[1]: Starting verity-setup.service...
Dec 13 01:48:48.959268 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:48:49.226435 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 01:48:49.230261 systemd[1]: Finished verity-setup.service.
Dec 13 01:48:49.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.234420 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 01:48:49.305282 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 01:48:49.305022 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 01:48:49.308227 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 01:48:49.312044 systemd[1]: Starting ignition-setup.service...
Dec 13 01:48:49.316753 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 01:48:49.340577 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:48:49.340617 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:48:49.340634 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 01:48:49.385434 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 01:48:49.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.389000 audit: BPF prog-id=9 op=LOAD
Dec 13 01:48:49.390758 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:48:49.415499 systemd-networkd[799]: lo: Link UP
Dec 13 01:48:49.415507 systemd-networkd[799]: lo: Gained carrier
Dec 13 01:48:49.418897 systemd-networkd[799]: Enumeration completed
Dec 13 01:48:49.418984 systemd[1]: Started systemd-networkd.service.
Dec 13 01:48:49.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.422528 systemd[1]: Reached target network.target.
Dec 13 01:48:49.422943 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:48:49.431192 systemd[1]: Starting iscsiuio.service...
Dec 13 01:48:49.439283 systemd[1]: Started iscsiuio.service.
Dec 13 01:48:49.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.443616 systemd[1]: Starting iscsid.service...
Dec 13 01:48:49.449824 iscsid[806]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 01:48:49.449824 iscsid[806]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 01:48:49.449824 iscsid[806]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 01:48:49.449824 iscsid[806]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 01:48:49.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.478886 iscsid[806]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 01:48:49.478886 iscsid[806]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 01:48:49.455465 systemd[1]: Started iscsid.service.
Dec 13 01:48:49.474262 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:48:49.475214 systemd[1]: Starting dracut-initqueue.service...
Dec 13 01:48:49.501239 kernel: mlx5_core 57b1:00:02.0 enP22449s1: Link up
Dec 13 01:48:49.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.497611 systemd[1]: Finished dracut-initqueue.service.
Dec 13 01:48:49.502760 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 01:48:49.505059 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:48:49.506972 systemd[1]: Reached target remote-fs.target.
Dec 13 01:48:49.510374 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 01:48:49.525971 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 01:48:49.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.538970 kernel: hv_netvsc 7c1e5235-89b6-7c1e-5235-89b67c1e5235 eth0: Data path switched to VF: enP22449s1
Dec 13 01:48:49.539226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:48:49.539488 systemd-networkd[799]: enP22449s1: Link UP
Dec 13 01:48:49.539606 systemd-networkd[799]: eth0: Link UP
Dec 13 01:48:49.539713 systemd-networkd[799]: eth0: Gained carrier
Dec 13 01:48:49.545412 systemd-networkd[799]: enP22449s1: Gained carrier
Dec 13 01:48:49.571320 systemd-networkd[799]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:48:49.722670 systemd[1]: Finished ignition-setup.service.
Dec 13 01:48:49.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:49.727599 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 01:48:51.192524 systemd-networkd[799]: eth0: Gained IPv6LL
Dec 13 01:48:53.166884 ignition[825]: Ignition 2.14.0
Dec 13 01:48:53.166900 ignition[825]: Stage: fetch-offline
Dec 13 01:48:53.167003 ignition[825]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:53.167054 ignition[825]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:53.284711 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:53.284901 ignition[825]: parsed url from cmdline: ""
Dec 13 01:48:53.286277 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 01:48:53.299298 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 01:48:53.299340 kernel: audit: type=1130 audit(1734054533.291:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.284905 ignition[825]: no config URL provided
Dec 13 01:48:53.292971 systemd[1]: Starting ignition-fetch.service...
Dec 13 01:48:53.284911 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:48:53.284919 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:48:53.284926 ignition[825]: failed to fetch config: resource requires networking
Dec 13 01:48:53.285279 ignition[825]: Ignition finished successfully
Dec 13 01:48:53.301416 ignition[831]: Ignition 2.14.0
Dec 13 01:48:53.301423 ignition[831]: Stage: fetch
Dec 13 01:48:53.301527 ignition[831]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:53.301548 ignition[831]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:53.304769 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:53.306550 ignition[831]: parsed url from cmdline: ""
Dec 13 01:48:53.306558 ignition[831]: no config URL provided
Dec 13 01:48:53.306570 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:48:53.307227 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:48:53.307285 ignition[831]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 01:48:53.411173 ignition[831]: GET result: OK
Dec 13 01:48:53.411291 ignition[831]: config has been read from IMDS userdata
Dec 13 01:48:53.411327 ignition[831]: parsing config with SHA512: 7fc8239c9f34bcac80604c4128193b210d53e08ea2fc2868d4bf06083763d3240c954210173c17613648a7ef1c7bbd4b18b7710f7f3392c3320fd876d800ee4b
Dec 13 01:48:53.418048 unknown[831]: fetched base config from "system"
Dec 13 01:48:53.418067 unknown[831]: fetched base config from "system"
Dec 13 01:48:53.418079 unknown[831]: fetched user config from "azure"
Dec 13 01:48:53.424209 ignition[831]: fetch: fetch complete
Dec 13 01:48:53.424219 ignition[831]: fetch: fetch passed
Dec 13 01:48:53.424287 ignition[831]: Ignition finished successfully
Dec 13 01:48:53.428832 systemd[1]: Finished ignition-fetch.service.
Dec 13 01:48:53.446292 kernel: audit: type=1130 audit(1734054533.431:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.432197 systemd[1]: Starting ignition-kargs.service...
Dec 13 01:48:53.458514 ignition[837]: Ignition 2.14.0
Dec 13 01:48:53.458525 ignition[837]: Stage: kargs
Dec 13 01:48:53.458661 ignition[837]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:53.458693 ignition[837]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:53.467711 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:53.468811 ignition[837]: kargs: kargs passed
Dec 13 01:48:53.468843 ignition[837]: Ignition finished successfully
Dec 13 01:48:53.474129 systemd[1]: Finished ignition-kargs.service.
Dec 13 01:48:53.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.489284 kernel: audit: type=1130 audit(1734054533.477:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.478317 systemd[1]: Starting ignition-disks.service...
Dec 13 01:48:53.486227 ignition[843]: Ignition 2.14.0
Dec 13 01:48:53.486235 ignition[843]: Stage: disks
Dec 13 01:48:53.486389 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:53.486419 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:53.498409 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:53.502514 ignition[843]: disks: disks passed
Dec 13 01:48:53.502567 ignition[843]: Ignition finished successfully
Dec 13 01:48:53.505909 systemd[1]: Finished ignition-disks.service.
Dec 13 01:48:53.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.507795 systemd[1]: Reached target initrd-root-device.target.
Dec 13 01:48:53.523542 kernel: audit: type=1130 audit(1734054533.507:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.520071 systemd[1]: Reached target local-fs-pre.target.
Dec 13 01:48:53.523518 systemd[1]: Reached target local-fs.target.
Dec 13 01:48:53.525118 systemd[1]: Reached target sysinit.target.
Dec 13 01:48:53.526664 systemd[1]: Reached target basic.target.
Dec 13 01:48:53.530719 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 01:48:53.593576 systemd-fsck[851]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks
Dec 13 01:48:53.601007 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 01:48:53.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.613217 systemd[1]: Mounting sysroot.mount...
Dec 13 01:48:53.619570 kernel: audit: type=1130 audit(1734054533.604:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:53.634108 systemd[1]: Mounted sysroot.mount.
Dec 13 01:48:53.637305 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 01:48:53.637354 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 01:48:53.680383 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 01:48:53.685657 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 01:48:53.687738 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:48:53.687771 systemd[1]: Reached target ignition-diskful.target.
Dec 13 01:48:53.692805 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 01:48:53.741485 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 01:48:53.746213 systemd[1]: Starting initrd-setup-root.service...
Dec 13 01:48:53.761264 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (862)
Dec 13 01:48:53.769404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:48:53.769439 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:48:53.769452 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 01:48:53.772223 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:48:53.780575 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 01:48:53.805058 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:48:53.848834 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:48:53.856315 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:48:54.483006 systemd[1]: Finished initrd-setup-root.service.
Dec 13 01:48:54.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:54.488210 systemd[1]: Starting ignition-mount.service...
Dec 13 01:48:54.502019 kernel: audit: type=1130 audit(1734054534.487:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:54.500801 systemd[1]: Starting sysroot-boot.service...
Dec 13 01:48:54.506163 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 01:48:54.508386 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 01:48:54.528805 ignition[929]: INFO     : Ignition 2.14.0
Dec 13 01:48:54.531301 ignition[929]: INFO     : Stage: mount
Dec 13 01:48:54.533933 ignition[929]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:54.537362 ignition[929]: DEBUG    : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:54.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:54.536587 systemd[1]: Finished sysroot-boot.service.
Dec 13 01:48:54.559764 kernel: audit: type=1130 audit(1734054534.545:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:54.559844 ignition[929]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:54.562978 ignition[929]: INFO     : mount: mount passed
Dec 13 01:48:54.562978 ignition[929]: INFO     : Ignition finished successfully
Dec 13 01:48:54.564697 systemd[1]: Finished ignition-mount.service.
Dec 13 01:48:54.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:54.580286 kernel: audit: type=1130 audit(1734054534.568:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:55.407685 coreos-metadata[861]: Dec 13 01:48:55.407 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:48:55.428311 coreos-metadata[861]: Dec 13 01:48:55.428 INFO Fetch successful
Dec 13 01:48:55.463528 coreos-metadata[861]: Dec 13 01:48:55.463 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:48:55.481403 coreos-metadata[861]: Dec 13 01:48:55.481 INFO Fetch successful
Dec 13 01:48:55.496634 coreos-metadata[861]: Dec 13 01:48:55.496 INFO wrote hostname ci-3510.3.6-a-f5ec44d98c to /sysroot/etc/hostname
Dec 13 01:48:55.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:55.498547 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 01:48:55.517153 kernel: audit: type=1130 audit(1734054535.501:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:55.503660 systemd[1]: Starting ignition-files.service...
Dec 13 01:48:55.520306 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 01:48:55.535266 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (941)
Dec 13 01:48:55.535297 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:48:55.542153 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:48:55.542175 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 01:48:55.550458 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 01:48:55.563182 ignition[960]: INFO     : Ignition 2.14.0
Dec 13 01:48:55.563182 ignition[960]: INFO     : Stage: files
Dec 13 01:48:55.567045 ignition[960]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:55.567045 ignition[960]: DEBUG    : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:55.581054 ignition[960]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:55.595961 ignition[960]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 01:48:55.640396 ignition[960]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 01:48:55.640396 ignition[960]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:48:55.727640 ignition[960]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:48:55.731316 ignition[960]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 01:48:55.731316 ignition[960]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:48:55.728310 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 01:48:55.742324 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:48:55.747101 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:48:55.834707 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:48:55.953813 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:48:55.958471 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:48:55.962413 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:48:56.455885 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:48:56.599986 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 01:48:56.604629 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 01:48:56.675350 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (960)
Dec 13 01:48:56.632317 systemd[1]: mnt-oem3465827898.mount: Deactivated successfully.
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(c): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3465827898"
Dec 13 01:48:56.677721 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3465827898": device or resource busy
Dec 13 01:48:56.677721 ignition[960]: ERROR    : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3465827898", trying btrfs: device or resource busy
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(d): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3465827898"
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3465827898"
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(e): [started]  unmounting "/mnt/oem3465827898"
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3465827898"
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): op(10): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem528594900"
Dec 13 01:48:56.677721 ignition[960]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem528594900": device or resource busy
Dec 13 01:48:56.677721 ignition[960]: ERROR    : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem528594900", trying btrfs: device or resource busy
Dec 13 01:48:56.677721 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): op(11): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem528594900"
Dec 13 01:48:56.650066 systemd[1]: mnt-oem528594900.mount: Deactivated successfully.
Dec 13 01:48:56.742237 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem528594900"
Dec 13 01:48:56.742237 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): op(12): [started]  unmounting "/mnt/oem528594900"
Dec 13 01:48:56.742237 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem528594900"
Dec 13 01:48:56.742237 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 01:48:56.742237 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(13): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:48:56.742237 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:48:57.116126 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Dec 13 01:48:57.495461 ignition[960]: INFO     : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:48:57.495461 ignition[960]: INFO     : files: op(14): [started]  processing unit "waagent.service"
Dec 13 01:48:57.495461 ignition[960]: INFO     : files: op(14): [finished] processing unit "waagent.service"
Dec 13 01:48:57.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(15): [started]  processing unit "nvidia.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(15): [finished] processing unit "nvidia.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(16): [started]  processing unit "prepare-helm.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(16): op(17): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(16): [finished] processing unit "prepare-helm.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(18): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(19): [started]  setting preset to enabled for "waagent.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(19): [finished] setting preset to enabled for "waagent.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(1a): [started]  setting preset to enabled for "nvidia.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: createResultFile: createFiles: op(1b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:48:57.508961 ignition[960]: INFO     : files: files passed
Dec 13 01:48:57.508961 ignition[960]: INFO     : Ignition finished successfully
Dec 13 01:48:57.587438 kernel: audit: type=1130 audit(1734054537.508:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.506719 systemd[1]: Finished ignition-files.service.
Dec 13 01:48:57.510222 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 01:48:57.525052 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 01:48:57.596214 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:48:57.525817 systemd[1]: Starting ignition-quench.service...
Dec 13 01:48:57.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.532234 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:48:57.532368 systemd[1]: Finished ignition-quench.service.
Dec 13 01:48:57.547143 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 01:48:57.550471 systemd[1]: Reached target ignition-complete.target.
Dec 13 01:48:57.552920 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 01:48:57.568560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:48:57.568645 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 01:48:57.571335 systemd[1]: Reached target initrd-fs.target.
Dec 13 01:48:57.576566 systemd[1]: Reached target initrd.target.
Dec 13 01:48:57.582016 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 01:48:57.582731 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 01:48:57.594685 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 01:48:57.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.609234 systemd[1]: Starting initrd-cleanup.service...
Dec 13 01:48:57.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.625539 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:48:57.625622 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:48:57.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.628453 systemd[1]: Stopped target nss-lookup.target.
Dec 13 01:48:57.706000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:48:57.631339 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 01:48:57.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.710681 ignition[999]: INFO     : Ignition 2.14.0
Dec 13 01:48:57.710681 ignition[999]: INFO     : Stage: umount
Dec 13 01:48:57.710681 ignition[999]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:48:57.710681 ignition[999]: DEBUG    : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:48:57.710681 ignition[999]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:48:57.710681 ignition[999]: INFO     : umount: umount passed
Dec 13 01:48:57.710681 ignition[999]: INFO     : Ignition finished successfully
Dec 13 01:48:57.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.633363 systemd[1]: Stopped target timers.target.
Dec 13 01:48:57.635017 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:48:57.635060 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 01:48:57.638484 systemd[1]: Stopped target initrd.target.
Dec 13 01:48:57.640125 systemd[1]: Stopped target basic.target.
Dec 13 01:48:57.643476 systemd[1]: Stopped target ignition-complete.target.
Dec 13 01:48:57.645276 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 01:48:57.647056 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 01:48:57.649108 systemd[1]: Stopped target remote-fs.target.
Dec 13 01:48:57.649875 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 01:48:57.650255 systemd[1]: Stopped target sysinit.target.
Dec 13 01:48:57.650622 systemd[1]: Stopped target local-fs.target.
Dec 13 01:48:57.650989 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 01:48:57.651351 systemd[1]: Stopped target swap.target.
Dec 13 01:48:57.651722 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:48:57.651766 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 01:48:57.652135 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:48:57.652462 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:48:57.652494 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:48:57.652892 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:48:57.652923 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:48:57.653215 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:48:57.653254 systemd[1]: Stopped ignition-files.service.
Dec 13 01:48:57.653576 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:48:57.653607 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 01:48:57.654490 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:48:57.657138 systemd[1]: Stopping iscsiuio.service...
Dec 13 01:48:57.658017 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:48:57.658147 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:48:57.658204 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:48:57.658625 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:48:57.658666 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:48:57.659328 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 01:48:57.659430 systemd[1]: Stopped iscsiuio.service.
Dec 13 01:48:57.675502 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:48:57.675582 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:48:57.675836 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:48:57.675873 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:48:57.676256 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:48:57.676288 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:48:57.676592 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:48:57.676627 systemd[1]: Stopped ignition-fetch.service.
Dec 13 01:48:57.676951 systemd[1]: Stopped target network.target.
Dec 13 01:48:57.677293 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:48:57.677326 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:48:57.677672 systemd[1]: Stopped target paths.target.
Dec 13 01:48:57.678002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:48:57.681287 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:48:57.681625 systemd[1]: Stopped target slices.target.
Dec 13 01:48:57.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.682074 systemd[1]: Stopped target sockets.target.
Dec 13 01:48:57.682463 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:48:57.682492 systemd[1]: Closed iscsid.socket.
Dec 13 01:48:57.682831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:48:57.682862 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:48:57.683195 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:48:57.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.683231 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:48:57.686321 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:48:57.687895 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:48:57.701104 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:48:57.701201 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:48:57.704763 systemd-networkd[799]: eth0: DHCPv6 lease lost
Dec 13 01:48:57.840000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:48:57.706942 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:48:57.707044 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:48:57.710842 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:48:57.710879 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:48:57.717011 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:48:57.718686 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:48:57.718748 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:48:57.728613 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:48:57.732628 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:48:57.819182 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:48:57.822353 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:48:57.830923 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:48:57.844411 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:48:57.847985 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:48:57.868559 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:48:57.868699 systemd[1]: Stopped systemd-udevd.service.
Dec 13 01:48:57.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.875017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:48:57.875098 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 01:48:57.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.879415 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:48:57.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.879461 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 01:48:57.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.881372 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:48:57.881412 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 01:48:57.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.884750 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:48:57.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.884799 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 01:48:57.888795 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:48:57.888842 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 01:48:57.893193 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 01:48:57.896342 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:48:57.896415 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 01:48:57.898564 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:48:57.898613 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 01:48:57.900502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:48:57.900551 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 01:48:57.925263 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:48:57.935707 kernel: hv_netvsc 7c1e5235-89b6-7c1e-5235-89b67c1e5235 eth0: Data path switched from VF: enP22449s1
Dec 13 01:48:57.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.925729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:48:57.925804 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:48:57.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:48:57.951194 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:48:57.951286 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:49:00.195286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:49:00.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:00.201854 kernel: kauditd_printk_skb: 40 callbacks suppressed
Dec 13 01:49:00.201876 kernel: audit: type=1131 audit(1734054540.197:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:00.195401 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:49:00.202041 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:49:00.215500 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:49:00.215562 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:49:00.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:00.224078 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:49:00.237051 kernel: audit: type=1131 audit(1734054540.223:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:00.243433 systemd[1]: Switching root.
Dec 13 01:49:00.268837 iscsid[806]: iscsid shutting down.
Dec 13 01:49:00.270744 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:49:00.270815 systemd-journald[183]: Journal stopped
Dec 13 01:49:26.564437 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 01:49:26.564480 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 01:49:26.564495 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 01:49:26.564509 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:49:26.564522 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:49:26.564534 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:49:26.564548 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:49:26.564565 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:49:26.564578 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:49:26.564591 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 01:49:26.564604 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 01:49:26.564617 kernel: audit: type=1403 audit(1734054548.476:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:49:26.564634 systemd[1]: Successfully loaded SELinux policy in 791.095ms.
Dec 13 01:49:26.564651 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.995ms.
Dec 13 01:49:26.564672 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:49:26.564689 systemd[1]: Detected virtualization microsoft.
Dec 13 01:49:26.564704 systemd[1]: Detected architecture x86-64.
Dec 13 01:49:26.564719 systemd[1]: Detected first boot.
Dec 13 01:49:26.564737 systemd[1]: Hostname set to <ci-3510.3.6-a-f5ec44d98c>.
Dec 13 01:49:26.564753 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:49:26.564770 kernel: audit: type=1400 audit(1734054551.448:82): avc:  denied  { integrity } for  pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:49:26.564786 kernel: audit: type=1400 audit(1734054551.462:83): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:49:26.564802 kernel: audit: type=1400 audit(1734054551.462:84): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:49:26.564815 kernel: audit: type=1334 audit(1734054551.473:85): prog-id=10 op=LOAD
Dec 13 01:49:26.564829 kernel: audit: type=1334 audit(1734054551.473:86): prog-id=10 op=UNLOAD
Dec 13 01:49:26.564847 kernel: audit: type=1334 audit(1734054551.484:87): prog-id=11 op=LOAD
Dec 13 01:49:26.564862 kernel: audit: type=1334 audit(1734054551.484:88): prog-id=11 op=UNLOAD
Dec 13 01:49:26.564877 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 01:49:26.564894 kernel: audit: type=1400 audit(1734054555.056:89): avc:  denied  { associate } for  pid=1032 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:49:26.564910 kernel: audit: type=1300 audit(1734054555.056:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:26.564926 kernel: audit: type=1327 audit(1734054555.056:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:49:26.564942 kernel: audit: type=1400 audit(1734054555.063:90): avc:  denied  { associate } for  pid=1032 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 01:49:26.564962 kernel: audit: type=1300 audit(1734054555.063:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:26.564977 kernel: audit: type=1307 audit(1734054555.063:90): cwd="/"
Dec 13 01:49:26.564994 kernel: audit: type=1302 audit(1734054555.063:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:26.565011 kernel: audit: type=1302 audit(1734054555.063:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:26.565027 kernel: audit: type=1327 audit(1734054555.063:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:49:26.565044 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:49:26.565065 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:49:26.565082 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:49:26.565104 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:49:26.565120 kernel: audit: type=1334 audit(1734054565.598:91): prog-id=12 op=LOAD
Dec 13 01:49:26.565136 kernel: audit: type=1334 audit(1734054565.598:92): prog-id=3 op=UNLOAD
Dec 13 01:49:26.565152 kernel: audit: type=1334 audit(1734054565.603:93): prog-id=13 op=LOAD
Dec 13 01:49:26.565168 kernel: audit: type=1334 audit(1734054565.608:94): prog-id=14 op=LOAD
Dec 13 01:49:26.565184 kernel: audit: type=1334 audit(1734054565.608:95): prog-id=4 op=UNLOAD
Dec 13 01:49:26.565203 kernel: audit: type=1334 audit(1734054565.608:96): prog-id=5 op=UNLOAD
Dec 13 01:49:26.565218 kernel: audit: type=1334 audit(1734054565.612:97): prog-id=15 op=LOAD
Dec 13 01:49:26.565233 kernel: audit: type=1334 audit(1734054565.612:98): prog-id=12 op=UNLOAD
Dec 13 01:49:26.565262 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 01:49:26.565278 kernel: audit: type=1334 audit(1734054565.617:99): prog-id=16 op=LOAD
Dec 13 01:49:26.565293 kernel: audit: type=1334 audit(1734054565.622:100): prog-id=17 op=LOAD
Dec 13 01:49:26.565308 systemd[1]: Stopped iscsid.service.
Dec 13 01:49:26.565326 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:49:26.565349 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 01:49:26.565367 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:49:26.565385 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:49:26.565403 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:49:26.565420 systemd[1]: Created slice system-getty.slice.
Dec 13 01:49:26.565436 systemd[1]: Created slice system-modprobe.slice.
Dec 13 01:49:26.565454 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 01:49:26.565472 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 01:49:26.565493 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 01:49:26.565509 systemd[1]: Created slice user.slice.
Dec 13 01:49:26.565525 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:49:26.565541 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 01:49:26.565558 systemd[1]: Set up automount boot.automount.
Dec 13 01:49:26.565574 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 01:49:26.565591 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 01:49:26.565608 systemd[1]: Stopped target initrd-fs.target.
Dec 13 01:49:26.565625 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 01:49:26.565645 systemd[1]: Reached target integritysetup.target.
Dec 13 01:49:26.565662 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:49:26.565679 systemd[1]: Reached target remote-fs.target.
Dec 13 01:49:26.565694 systemd[1]: Reached target slices.target.
Dec 13 01:49:26.565711 systemd[1]: Reached target swap.target.
Dec 13 01:49:26.565728 systemd[1]: Reached target torcx.target.
Dec 13 01:49:26.565746 systemd[1]: Reached target veritysetup.target.
Dec 13 01:49:26.565763 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 01:49:26.565784 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 01:49:26.565801 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:49:26.565819 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:49:26.565837 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:49:26.565857 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 01:49:26.565875 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 01:49:26.565890 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 01:49:26.565907 systemd[1]: Mounting media.mount...
Dec 13 01:49:26.565922 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:26.565939 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 01:49:26.565956 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 01:49:26.565971 systemd[1]: Mounting tmp.mount...
Dec 13 01:49:26.565988 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 01:49:26.566010 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:49:26.566026 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:49:26.566046 systemd[1]: Starting modprobe@configfs.service...
Dec 13 01:49:26.566061 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:49:26.566075 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:49:26.566092 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:49:26.566108 systemd[1]: Starting modprobe@fuse.service...
Dec 13 01:49:26.566123 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:49:26.566139 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:49:26.566158 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:49:26.566172 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 01:49:26.566188 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:49:26.566205 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:49:26.566222 systemd[1]: Stopped systemd-journald.service.
Dec 13 01:49:26.566235 systemd[1]: Starting systemd-journald.service...
Dec 13 01:49:26.567641 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:49:26.567671 systemd[1]: Starting systemd-network-generator.service...
Dec 13 01:49:26.567691 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 01:49:26.567714 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:49:26.567729 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:49:26.567743 systemd[1]: Stopped verity-setup.service.
Dec 13 01:49:26.567753 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:26.567762 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 01:49:26.567772 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 01:49:26.567782 systemd[1]: Mounted media.mount.
Dec 13 01:49:26.567795 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 01:49:26.567817 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 01:49:26.567827 systemd[1]: Mounted tmp.mount.
Dec 13 01:49:26.567838 systemd[1]: Finished systemd-network-generator.service.
Dec 13 01:49:26.567850 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 01:49:26.567861 systemd[1]: Reached target network-pre.target.
Dec 13 01:49:26.567873 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:49:26.567885 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 01:49:26.567897 systemd[1]: Starting systemd-random-seed.service...
Dec 13 01:49:26.567907 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 01:49:26.567917 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:49:26.567930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:49:26.567944 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:49:26.567956 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:49:26.567970 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:49:26.567983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:49:26.567993 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:49:26.568007 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:49:26.568020 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:49:26.568032 systemd[1]: Starting systemd-sysusers.service...
Dec 13 01:49:26.568044 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 01:49:26.568055 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:49:26.568073 systemd-journald[1113]: Journal started
Dec 13 01:49:26.568123 systemd-journald[1113]: Runtime Journal (/run/log/journal/20f3ced43d6f4095a040a9498b502865) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:49:08.476000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:49:11.448000 audit[1]: AVC avc:  denied  { integrity } for  pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:49:11.462000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:49:11.462000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:49:11.473000 audit: BPF prog-id=10 op=LOAD
Dec 13 01:49:11.473000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 01:49:11.484000 audit: BPF prog-id=11 op=LOAD
Dec 13 01:49:11.484000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 01:49:15.056000 audit[1032]: AVC avc:  denied  { associate } for  pid=1032 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:49:15.056000 audit[1032]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:15.056000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:49:15.063000 audit[1032]: AVC avc:  denied  { associate } for  pid=1032 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 01:49:15.063000 audit[1032]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1015 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:15.063000 audit: CWD cwd="/"
Dec 13 01:49:15.063000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:15.063000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:15.063000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:49:25.598000 audit: BPF prog-id=12 op=LOAD
Dec 13 01:49:25.598000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 01:49:25.603000 audit: BPF prog-id=13 op=LOAD
Dec 13 01:49:25.608000 audit: BPF prog-id=14 op=LOAD
Dec 13 01:49:25.608000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 01:49:25.608000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 01:49:25.612000 audit: BPF prog-id=15 op=LOAD
Dec 13 01:49:25.612000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 01:49:25.617000 audit: BPF prog-id=16 op=LOAD
Dec 13 01:49:25.622000 audit: BPF prog-id=17 op=LOAD
Dec 13 01:49:25.622000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 01:49:25.622000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 01:49:25.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:25.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:25.659000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 01:49:25.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:25.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:25.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.012000 audit: BPF prog-id=18 op=LOAD
Dec 13 01:49:26.012000 audit: BPF prog-id=19 op=LOAD
Dec 13 01:49:26.012000 audit: BPF prog-id=20 op=LOAD
Dec 13 01:49:26.012000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 01:49:26.012000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 01:49:26.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.556000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 01:49:26.556000 audit[1113]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc0199e470 a2=4000 a3=7ffc0199e50c items=0 ppid=1 pid=1113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:26.556000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 01:49:15.011077 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:49:25.596904 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:49:15.011553 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:49:25.623065 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:49:15.011576 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:49:15.011616 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 01:49:15.011628 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 01:49:15.011674 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 01:49:15.011688 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 01:49:15.011908 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 01:49:15.011964 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:49:15.011981 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:49:15.041507 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 01:49:15.041587 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 01:49:15.041624 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 01:49:15.041653 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 01:49:15.041688 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 01:49:15.041713 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 01:49:24.252251 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:49:24.252484 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:49:24.252602 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:49:24.252771 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:49:24.252819 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 01:49:24.252871 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T01:49:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 01:49:26.571736 udevadm[1150]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:49:26.578874 kernel: loop: module loaded
Dec 13 01:49:26.578923 systemd[1]: Finished modprobe@configfs.service.
Dec 13 01:49:26.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.585547 systemd[1]: Started systemd-journald.service.
Dec 13 01:49:26.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.588603 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:49:26.588795 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:49:26.592722 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 01:49:26.597801 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:49:26.886903 systemd[1]: Finished systemd-random-seed.service.
Dec 13 01:49:26.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.889569 systemd[1]: Reached target first-boot-complete.target.
Dec 13 01:49:26.895417 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:49:26.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.899472 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:49:26.917302 kernel: fuse: init (API version 7.34)
Dec 13 01:49:26.918117 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:49:26.918322 systemd[1]: Finished modprobe@fuse.service.
Dec 13 01:49:26.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.924140 systemd-journald[1113]: Time spent on flushing to /var/log/journal/20f3ced43d6f4095a040a9498b502865 is 20.384ms for 1173 entries.
Dec 13 01:49:26.924140 systemd-journald[1113]: System Journal (/var/log/journal/20f3ced43d6f4095a040a9498b502865) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:49:27.305496 systemd-journald[1113]: Received client request to flush runtime journal.
Dec 13 01:49:26.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:26.966699 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:49:27.063305 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 01:49:27.067626 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 01:49:27.072938 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 01:49:27.075186 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 01:49:27.306593 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 01:49:27.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:30.806673 systemd[1]: Finished systemd-sysusers.service.
Dec 13 01:49:30.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:30.811011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:49:30.813780 kernel: kauditd_printk_skb: 42 callbacks suppressed
Dec 13 01:49:30.813837 kernel: audit: type=1130 audit(1734054570.809:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:33.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:33.150498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:49:33.166268 kernel: audit: type=1130 audit(1734054573.153:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:34.252495 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 01:49:34.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:34.268000 audit: BPF prog-id=21 op=LOAD
Dec 13 01:49:34.270921 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:49:34.272641 kernel: audit: type=1130 audit(1734054574.254:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:34.272703 kernel: audit: type=1334 audit(1734054574.268:144): prog-id=21 op=LOAD
Dec 13 01:49:34.272729 kernel: audit: type=1334 audit(1734054574.268:145): prog-id=22 op=LOAD
Dec 13 01:49:34.272753 kernel: audit: type=1334 audit(1734054574.268:146): prog-id=7 op=UNLOAD
Dec 13 01:49:34.272775 kernel: audit: type=1334 audit(1734054574.268:147): prog-id=8 op=UNLOAD
Dec 13 01:49:34.268000 audit: BPF prog-id=22 op=LOAD
Dec 13 01:49:34.268000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:49:34.268000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:49:34.299379 systemd-udevd[1160]: Using default interface naming scheme 'v252'.
Dec 13 01:49:35.315791 kernel: audit: type=1130 audit(1734054575.298:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:35.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:35.296077 systemd[1]: Started systemd-udevd.service.
Dec 13 01:49:35.313934 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:49:35.312000 audit: BPF prog-id=23 op=LOAD
Dec 13 01:49:35.323273 kernel: audit: type=1334 audit(1734054575.312:149): prog-id=23 op=LOAD
Dec 13 01:49:35.333658 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 01:49:35.619000 audit[1167]: AVC avc:  denied  { confidentiality } for  pid=1167 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:49:35.639269 kernel: audit: type=1400 audit(1734054575.619:150): avc:  denied  { confidentiality } for  pid=1167 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:49:35.660670 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:49:35.660759 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:49:35.660788 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:49:35.667806 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:49:35.667889 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:49:36.467978 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:49:36.473134 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 01:49:36.478191 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:49:36.488696 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:49:36.488796 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:49:35.619000 audit[1167]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5653c2f78ec0 a1=f884 a2=7ff7638a6bc5 a3=5 items=12 ppid=1160 pid=1167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:35.619000 audit: CWD cwd="/"
Dec 13 01:49:35.619000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=1 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=2 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=3 name=(null) inode=14193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=4 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=5 name=(null) inode=14194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=6 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=7 name=(null) inode=14195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=8 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=9 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=10 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PATH item=11 name=(null) inode=14197 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:49:35.619000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 01:49:36.496726 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:49:36.496795 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:49:36.503890 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:49:36.863805 kernel: kauditd_printk_skb: 15 callbacks suppressed
Dec 13 01:49:36.863919 kernel: audit: type=1334 audit(1734054576.853:151): prog-id=24 op=LOAD
Dec 13 01:49:36.853000 audit: BPF prog-id=24 op=LOAD
Dec 13 01:49:36.861639 systemd[1]: Starting systemd-userdbd.service...
Dec 13 01:49:36.872636 kernel: audit: type=1334 audit(1734054576.853:152): prog-id=25 op=LOAD
Dec 13 01:49:36.872717 kernel: audit: type=1334 audit(1734054576.853:153): prog-id=26 op=LOAD
Dec 13 01:49:36.853000 audit: BPF prog-id=25 op=LOAD
Dec 13 01:49:36.853000 audit: BPF prog-id=26 op=LOAD
Dec 13 01:49:36.995616 systemd[1]: Started systemd-userdbd.service.
Dec 13 01:49:36.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:37.008097 kernel: audit: type=1130 audit(1734054576.997:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:37.178108 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1176)
Dec 13 01:49:37.193103 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Dec 13 01:49:37.227795 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:49:37.276681 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 01:49:37.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:37.280480 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 01:49:37.290507 kernel: audit: type=1130 audit(1734054577.278:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:37.806284 systemd-networkd[1180]: lo: Link UP
Dec 13 01:49:37.806295 systemd-networkd[1180]: lo: Gained carrier
Dec 13 01:49:37.806873 systemd-networkd[1180]: Enumeration completed
Dec 13 01:49:37.807010 systemd[1]: Started systemd-networkd.service.
Dec 13 01:49:37.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:37.811240 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 01:49:37.820098 kernel: audit: type=1130 audit(1734054577.808:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.119186 systemd-networkd[1180]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:49:38.154151 kernel: mlx5_core 57b1:00:02.0 enP22449s1: Link up
Dec 13 01:49:38.176101 kernel: hv_netvsc 7c1e5235-89b6-7c1e-5235-89b67c1e5235 eth0: Data path switched to VF: enP22449s1
Dec 13 01:49:38.177003 systemd-networkd[1180]: enP22449s1: Link UP
Dec 13 01:49:38.177370 systemd-networkd[1180]: eth0: Link UP
Dec 13 01:49:38.177476 systemd-networkd[1180]: eth0: Gained carrier
Dec 13 01:49:38.181358 systemd-networkd[1180]: enP22449s1: Gained carrier
Dec 13 01:49:38.218303 systemd-networkd[1180]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:49:38.364171 lvm[1236]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:49:38.395204 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 01:49:38.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.398302 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:49:38.409565 kernel: audit: type=1130 audit(1734054578.397:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.411823 systemd[1]: Starting lvm2-activation.service...
Dec 13 01:49:38.417004 lvm[1239]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:49:38.435966 systemd[1]: Finished lvm2-activation.service.
Dec 13 01:49:38.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.438142 systemd[1]: Reached target local-fs-pre.target.
Dec 13 01:49:38.449279 kernel: audit: type=1130 audit(1734054578.436:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.449327 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:49:38.449361 systemd[1]: Reached target local-fs.target.
Dec 13 01:49:38.451157 systemd[1]: Reached target machines.target.
Dec 13 01:49:38.454112 systemd[1]: Starting ldconfig.service...
Dec 13 01:49:38.476204 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:49:38.476336 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:38.477885 systemd[1]: Starting systemd-boot-update.service...
Dec 13 01:49:38.481515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 01:49:38.485721 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 01:49:38.488964 systemd[1]: Starting systemd-sysext.service...
Dec 13 01:49:38.773446 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 01:49:38.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.787175 kernel: audit: type=1130 audit(1734054578.775:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:38.950132 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1241 (bootctl)
Dec 13 01:49:38.951493 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 01:49:39.099399 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 01:49:39.503788 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 01:49:39.504021 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 01:49:39.516134 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 01:49:39.605371 systemd-networkd[1180]: eth0: Gained IPv6LL
Dec 13 01:49:39.611941 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 01:49:39.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.628109 kernel: audit: type=1130 audit(1734054579.613:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.649110 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:49:39.667099 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 01:49:39.673138 (sd-sysext)[1253]: Using extensions 'kubernetes'.
Dec 13 01:49:39.673649 (sd-sysext)[1253]: Merged extensions into '/usr'.
Dec 13 01:49:39.690113 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:39.691529 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 01:49:39.692869 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:49:39.696424 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:49:39.698704 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:49:39.702324 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:49:39.703465 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:49:39.703584 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:39.703696 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.705024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:49:39.705322 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:49:39.706623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:49:39.706723 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:49:39.707450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:49:39.707553 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:49:39.708294 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:49:39.708441 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:49:39.713567 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 01:49:39.715321 systemd[1]: Finished systemd-sysext.service.
Dec 13 01:49:39.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.717999 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:49:39.720368 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 01:49:39.731503 systemd[1]: Reloading.
Dec 13 01:49:39.737860 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 01:49:39.798601 /usr/lib/systemd/system-generators/torcx-generator[1279]: time="2024-12-13T01:49:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:49:39.798639 /usr/lib/systemd/system-generators/torcx-generator[1279]: time="2024-12-13T01:49:39Z" level=info msg="torcx already run"
Dec 13 01:49:39.885357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:49:39.885376 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:49:39.896038 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:49:39.902269 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:49:39.963000 audit: BPF prog-id=27 op=LOAD
Dec 13 01:49:39.963000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 01:49:39.964000 audit: BPF prog-id=28 op=LOAD
Dec 13 01:49:39.964000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 01:49:39.964000 audit: BPF prog-id=29 op=LOAD
Dec 13 01:49:39.964000 audit: BPF prog-id=30 op=LOAD
Dec 13 01:49:39.964000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 01:49:39.964000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 01:49:39.965000 audit: BPF prog-id=31 op=LOAD
Dec 13 01:49:39.966000 audit: BPF prog-id=32 op=LOAD
Dec 13 01:49:39.966000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 01:49:39.966000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 01:49:39.967000 audit: BPF prog-id=33 op=LOAD
Dec 13 01:49:39.967000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 01:49:39.967000 audit: BPF prog-id=34 op=LOAD
Dec 13 01:49:39.967000 audit: BPF prog-id=35 op=LOAD
Dec 13 01:49:39.967000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 01:49:39.967000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 01:49:39.981187 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:39.981497 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:49:39.982813 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:49:39.985973 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:49:39.990050 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:49:39.992263 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:49:39.994103 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:49:39.994250 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:39.994385 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:39.995283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:49:39.995445 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:49:39.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.998072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:49:39.998286 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:49:39.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:39.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.000808 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:49:40.000952 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:49:40.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.004722 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:40.005019 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:49:40.006343 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:49:40.009999 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:49:40.012898 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:49:40.014565 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:49:40.014752 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:40.014889 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:40.015876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:49:40.016024 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:49:40.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.018829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:49:40.018962 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:49:40.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.021343 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:49:40.021476 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:49:40.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.026254 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:40.026580 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:49:40.027857 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:49:40.030714 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:49:40.033638 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:49:40.036679 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:49:40.038899 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:49:40.039103 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:40.039302 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:49:40.040406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:49:40.040542 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:49:40.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.043305 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:49:40.043438 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:49:40.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.045766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:49:40.045897 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:49:40.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.048571 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:49:40.048705 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:49:40.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:40.051161 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:49:40.051248 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:49:40.052418 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:49:40.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:43.693398 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:49:43.694221 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 01:49:43.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:43.700686 kernel: kauditd_printk_skb: 46 callbacks suppressed
Dec 13 01:49:43.700784 kernel: audit: type=1130 audit(1734054583.696:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:43.959427 systemd-fsck[1248]: fsck.fat 4.2 (2021-01-31)
Dec 13 01:49:43.959427 systemd-fsck[1248]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 01:49:43.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:43.961255 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 01:49:43.965910 systemd[1]: Mounting boot.mount...
Dec 13 01:49:43.979117 kernel: audit: type=1130 audit(1734054583.963:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:43.990014 systemd[1]: Mounted boot.mount.
Dec 13 01:49:44.046601 systemd[1]: Finished systemd-boot-update.service.
Dec 13 01:49:44.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:44.062189 kernel: audit: type=1130 audit(1734054584.048:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.429226 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 01:49:45.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.433323 systemd[1]: Starting audit-rules.service...
Dec 13 01:49:45.445132 kernel: audit: type=1130 audit(1734054585.430:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.446837 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 01:49:45.454000 audit: BPF prog-id=36 op=LOAD
Dec 13 01:49:45.450750 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 01:49:45.464417 kernel: audit: type=1334 audit(1734054585.454:211): prog-id=36 op=LOAD
Dec 13 01:49:45.461501 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:49:45.463000 audit: BPF prog-id=37 op=LOAD
Dec 13 01:49:45.465555 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 01:49:45.469651 kernel: audit: type=1334 audit(1734054585.463:212): prog-id=37 op=LOAD
Dec 13 01:49:45.471584 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 01:49:45.492000 audit[1364]: SYSTEM_BOOT pid=1364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.515217 kernel: audit: type=1127 audit(1734054585.492:213): pid=1364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.514222 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 01:49:45.532341 kernel: audit: type=1130 audit(1734054585.515:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.531121 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 01:49:45.533365 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:49:45.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.546102 kernel: audit: type=1130 audit(1734054585.532:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.608291 systemd[1]: Started systemd-timesyncd.service.
Dec 13 01:49:45.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.610459 systemd[1]: Reached target time-set.target.
Dec 13 01:49:45.622361 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 01:49:45.626715 kernel: audit: type=1130 audit(1734054585.609:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.730464 systemd-timesyncd[1363]: Contacted time server 193.1.8.98:123 (0.flatcar.pool.ntp.org).
Dec 13 01:49:45.730547 systemd-timesyncd[1363]: Initial clock synchronization to Fri 2024-12-13 01:49:45.730492 UTC.
Dec 13 01:49:45.764538 systemd-resolved[1362]: Positive Trust Anchors:
Dec 13 01:49:45.764553 systemd-resolved[1362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:49:45.764591 systemd-resolved[1362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:49:45.845934 systemd-resolved[1362]: Using system hostname 'ci-3510.3.6-a-f5ec44d98c'.
Dec 13 01:49:45.847536 systemd[1]: Started systemd-resolved.service.
Dec 13 01:49:45.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:49:45.850114 systemd[1]: Reached target network.target.
Dec 13 01:49:45.852048 systemd[1]: Reached target network-online.target.
Dec 13 01:49:45.854295 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:49:45.927000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 01:49:45.927000 audit[1379]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd43252d00 a2=420 a3=0 items=0 ppid=1358 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:49:45.927000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 01:49:45.929384 augenrules[1379]: No rules
Dec 13 01:49:45.930260 systemd[1]: Finished audit-rules.service.
Dec 13 01:49:51.517132 ldconfig[1240]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:49:51.527706 systemd[1]: Finished ldconfig.service.
Dec 13 01:49:51.531846 systemd[1]: Starting systemd-update-done.service...
Dec 13 01:49:51.556512 systemd[1]: Finished systemd-update-done.service.
Dec 13 01:49:51.558579 systemd[1]: Reached target sysinit.target.
Dec 13 01:49:51.560459 systemd[1]: Started motdgen.path.
Dec 13 01:49:51.562076 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 01:49:51.564848 systemd[1]: Started logrotate.timer.
Dec 13 01:49:51.566729 systemd[1]: Started mdadm.timer.
Dec 13 01:49:51.568324 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 01:49:51.570479 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:49:51.570525 systemd[1]: Reached target paths.target.
Dec 13 01:49:51.572141 systemd[1]: Reached target timers.target.
Dec 13 01:49:51.574063 systemd[1]: Listening on dbus.socket.
Dec 13 01:49:51.576552 systemd[1]: Starting docker.socket...
Dec 13 01:49:51.581435 systemd[1]: Listening on sshd.socket.
Dec 13 01:49:51.583287 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:51.583752 systemd[1]: Listening on docker.socket.
Dec 13 01:49:51.585387 systemd[1]: Reached target sockets.target.
Dec 13 01:49:51.587223 systemd[1]: Reached target basic.target.
Dec 13 01:49:51.588878 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 01:49:51.588918 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 01:49:51.589949 systemd[1]: Starting containerd.service...
Dec 13 01:49:51.593064 systemd[1]: Starting dbus.service...
Dec 13 01:49:51.596025 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 01:49:51.599910 systemd[1]: Starting extend-filesystems.service...
Dec 13 01:49:51.601768 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 01:49:51.616890 systemd[1]: Starting kubelet.service...
Dec 13 01:49:51.619871 systemd[1]: Starting motdgen.service...
Dec 13 01:49:51.622702 systemd[1]: Started nvidia.service.
Dec 13 01:49:51.626283 systemd[1]: Starting prepare-helm.service...
Dec 13 01:49:51.629228 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 01:49:51.632421 systemd[1]: Starting sshd-keygen.service...
Dec 13 01:49:51.637343 systemd[1]: Starting systemd-logind.service...
Dec 13 01:49:51.642174 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:49:51.642269 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:49:51.642765 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:49:51.644157 systemd[1]: Starting update-engine.service...
Dec 13 01:49:51.648066 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 01:49:51.648962 jq[1389]: false
Dec 13 01:49:51.654962 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:49:51.655197 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 01:49:51.657524 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:49:51.657786 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 01:49:51.663931 jq[1406]: true
Dec 13 01:49:51.683886 extend-filesystems[1390]: Found loop1
Dec 13 01:49:51.687401 extend-filesystems[1390]: Found sda
Dec 13 01:49:51.689076 extend-filesystems[1390]: Found sda1
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found sda2
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found sda3
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found usr
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found sda4
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found sda6
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found sda7
Dec 13 01:49:51.690556 extend-filesystems[1390]: Found sda9
Dec 13 01:49:51.690556 extend-filesystems[1390]: Checking size of /dev/sda9
Dec 13 01:49:51.714325 jq[1412]: true
Dec 13 01:49:51.717877 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:49:51.718068 systemd[1]: Finished motdgen.service.
Dec 13 01:49:51.755612 env[1419]: time="2024-12-13T01:49:51.755386352Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 01:49:51.790678 env[1419]: time="2024-12-13T01:49:51.790633761Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:49:51.790829 env[1419]: time="2024-12-13T01:49:51.790781762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:49:51.792340 env[1419]: time="2024-12-13T01:49:51.792293775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:49:51.792340 env[1419]: time="2024-12-13T01:49:51.792337776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:49:51.792630 env[1419]: time="2024-12-13T01:49:51.792601578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:49:51.792693 env[1419]: time="2024-12-13T01:49:51.792634778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:49:51.792693 env[1419]: time="2024-12-13T01:49:51.792653579Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 01:49:51.792693 env[1419]: time="2024-12-13T01:49:51.792667979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:49:51.792803 env[1419]: time="2024-12-13T01:49:51.792759979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:49:51.793040 env[1419]: time="2024-12-13T01:49:51.793015882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:49:51.794724 env[1419]: time="2024-12-13T01:49:51.793229184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:49:51.794724 env[1419]: time="2024-12-13T01:49:51.793258484Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:49:51.794724 env[1419]: time="2024-12-13T01:49:51.793322784Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 01:49:51.794724 env[1419]: time="2024-12-13T01:49:51.793338685Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:49:51.804479 tar[1410]: linux-amd64/helm
Dec 13 01:49:51.815871 env[1419]: time="2024-12-13T01:49:51.815810681Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:49:51.815871 env[1419]: time="2024-12-13T01:49:51.815845581Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:49:51.815871 env[1419]: time="2024-12-13T01:49:51.815864382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.815899282Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.815917882Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.815935382Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.815952682Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.815970883Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.815988483Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.816005483Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816028 env[1419]: time="2024-12-13T01:49:51.816022483Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.816317 env[1419]: time="2024-12-13T01:49:51.816038883Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:49:51.816317 env[1419]: time="2024-12-13T01:49:51.816164584Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:49:51.816317 env[1419]: time="2024-12-13T01:49:51.816258385Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816552188Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816596288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816616288Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816670689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816687289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816704789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816720289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816736089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816751789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816766090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816779790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816795990Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816929991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816948391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.817498 env[1419]: time="2024-12-13T01:49:51.816965491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.819561 env[1419]: time="2024-12-13T01:49:51.816980991Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:49:51.819561 env[1419]: time="2024-12-13T01:49:51.816999892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 01:49:51.819561 env[1419]: time="2024-12-13T01:49:51.817014692Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:49:51.819561 env[1419]: time="2024-12-13T01:49:51.817043392Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 01:49:51.819561 env[1419]: time="2024-12-13T01:49:51.817194893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:49:51.818901 systemd[1]: Started containerd.service.
Dec 13 01:49:51.819759 env[1419]: time="2024-12-13T01:49:51.817476396Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:49:51.819759 env[1419]: time="2024-12-13T01:49:51.817556196Z" level=info msg="Connect containerd service"
Dec 13 01:49:51.819759 env[1419]: time="2024-12-13T01:49:51.817613497Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:49:51.819759 env[1419]: time="2024-12-13T01:49:51.818379104Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:49:51.819759 env[1419]: time="2024-12-13T01:49:51.818701806Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:49:51.819759 env[1419]: time="2024-12-13T01:49:51.818774307Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.821187528Z" level=info msg="containerd successfully booted in 0.066523s"
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.822983744Z" level=info msg="Start subscribing containerd event"
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.823154645Z" level=info msg="Start recovering state"
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.832125324Z" level=info msg="Start event monitor"
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.832155724Z" level=info msg="Start snapshots syncer"
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.832296925Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:49:51.880820 env[1419]: time="2024-12-13T01:49:51.832397926Z" level=info msg="Start streaming server"
Dec 13 01:49:51.881111 extend-filesystems[1390]: Old size kept for /dev/sda9
Dec 13 01:49:51.881111 extend-filesystems[1390]: Found sr0
Dec 13 01:49:51.824985 systemd-logind[1402]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 01:49:51.836055 systemd-logind[1402]: New seat seat0.
Dec 13 01:49:51.897756 dbus-daemon[1388]: [system] SELinux support is enabled
Dec 13 01:49:51.846200 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:49:51.856323 systemd[1]: Finished extend-filesystems.service.
Dec 13 01:49:51.897936 systemd[1]: Started dbus.service.
Dec 13 01:49:51.902637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:49:51.902671 systemd[1]: Reached target system-config.target.
Dec 13 01:49:51.905064 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:49:51.905107 systemd[1]: Reached target user-config.target.
Dec 13 01:49:51.909060 bash[1440]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:49:51.908842 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 01:49:51.912887 systemd[1]: Started systemd-logind.service.
Dec 13 01:49:51.974828 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 01:49:52.543880 update_engine[1405]: I1213 01:49:52.543105  1405 main.cc:92] Flatcar Update Engine starting
Dec 13 01:49:52.691004 systemd[1]: Started update-engine.service.
Dec 13 01:49:52.691491 update_engine[1405]: I1213 01:49:52.691063  1405 update_check_scheduler.cc:74] Next update check in 7m53s
Dec 13 01:49:52.696214 systemd[1]: Started locksmithd.service.
Dec 13 01:49:52.807052 tar[1410]: linux-amd64/LICENSE
Dec 13 01:49:52.807416 tar[1410]: linux-amd64/README.md
Dec 13 01:49:52.816881 systemd[1]: Finished prepare-helm.service.
Dec 13 01:49:52.825963 sshd_keygen[1407]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:49:52.854935 systemd[1]: Finished sshd-keygen.service.
Dec 13 01:49:52.859004 systemd[1]: Starting issuegen.service...
Dec 13 01:49:52.862377 systemd[1]: Started waagent.service.
Dec 13 01:49:52.873412 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:49:52.873593 systemd[1]: Finished issuegen.service.
Dec 13 01:49:52.877272 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 01:49:52.886528 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 01:49:52.890289 systemd[1]: Started getty@tty1.service.
Dec 13 01:49:52.894033 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 01:49:52.896611 systemd[1]: Reached target getty.target.
Dec 13 01:49:53.021633 systemd[1]: Started kubelet.service.
Dec 13 01:49:53.024492 systemd[1]: Reached target multi-user.target.
Dec 13 01:49:53.028297 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 01:49:53.041450 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 01:49:53.041627 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 01:49:53.044070 systemd[1]: Startup finished in 751ms (firmware) + 29.053s (loader) + 917ms (kernel) + 21.134s (initrd) + 45.395s (userspace) = 1min 37.252s.
Dec 13 01:49:53.424353 login[1503]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Dec 13 01:49:53.425658 login[1502]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:49:53.528238 systemd[1]: Created slice user-500.slice.
Dec 13 01:49:53.529824 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 01:49:53.536686 systemd-logind[1402]: New session 1 of user core.
Dec 13 01:49:53.544507 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 01:49:53.548463 systemd[1]: Starting user@500.service...
Dec 13 01:49:53.566963 (systemd)[1518]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:49:53.638878 kubelet[1506]: E1213 01:49:53.638812    1506 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:49:53.640677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:49:53.640836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:49:53.641193 systemd[1]: kubelet.service: Consumed 1.033s CPU time.
Dec 13 01:49:53.726272 systemd[1518]: Queued start job for default target default.target.
Dec 13 01:49:53.726811 systemd[1518]: Reached target paths.target.
Dec 13 01:49:53.726838 systemd[1518]: Reached target sockets.target.
Dec 13 01:49:53.726855 systemd[1518]: Reached target timers.target.
Dec 13 01:49:53.726870 systemd[1518]: Reached target basic.target.
Dec 13 01:49:53.726976 systemd[1]: Started user@500.service.
Dec 13 01:49:53.728164 systemd[1]: Started session-1.scope.
Dec 13 01:49:53.728705 systemd[1518]: Reached target default.target.
Dec 13 01:49:53.728892 systemd[1518]: Startup finished in 154ms.
Dec 13 01:49:54.153940 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:49:54.426059 login[1503]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:49:54.429888 systemd-logind[1402]: New session 2 of user core.
Dec 13 01:49:54.431123 systemd[1]: Started session-2.scope.
Dec 13 01:49:58.718104 waagent[1497]: 2024-12-13T01:49:58.717956Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Dec 13 01:49:58.729992 waagent[1497]: 2024-12-13T01:49:58.720541Z INFO Daemon Daemon OS: flatcar 3510.3.6
Dec 13 01:49:58.729992 waagent[1497]: 2024-12-13T01:49:58.721474Z INFO Daemon Daemon Python: 3.9.16
Dec 13 01:49:58.729992 waagent[1497]: 2024-12-13T01:49:58.722598Z INFO Daemon Daemon Run daemon
Dec 13 01:49:58.729992 waagent[1497]: 2024-12-13T01:49:58.723694Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6'
Dec 13 01:49:58.735033 waagent[1497]: 2024-12-13T01:49:58.734905Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.738450Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.739394Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.740102Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.741409Z INFO Daemon Daemon Activate resource disk
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.742047Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.749718Z INFO Daemon Daemon Found device: None
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.750408Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.751156Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.752720Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:49:58.766359 waagent[1497]: 2024-12-13T01:49:58.753504Z INFO Daemon Daemon Running default provisioning handler
Dec 13 01:49:58.768758 waagent[1497]: 2024-12-13T01:49:58.768640Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 01:49:58.775991 waagent[1497]: 2024-12-13T01:49:58.775884Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:49:58.783961 waagent[1497]: 2024-12-13T01:49:58.777039Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:49:58.783961 waagent[1497]: 2024-12-13T01:49:58.777878Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 01:49:58.861130 waagent[1497]: 2024-12-13T01:49:58.860976Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 01:49:58.912445 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 01:49:58.934937 waagent[1497]: 2024-12-13T01:49:58.934810Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 01:49:58.937645 waagent[1497]: 2024-12-13T01:49:58.937576Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:49:58.940419 waagent[1497]: 2024-12-13T01:49:58.940355Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 01:49:58.943431 waagent[1497]: 2024-12-13T01:49:58.943374Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 01:49:58.945963 waagent[1497]: 2024-12-13T01:49:58.945901Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 01:49:58.948456 waagent[1497]: 2024-12-13T01:49:58.948397Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 01:49:59.057541 waagent[1497]: 2024-12-13T01:49:59.057467Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 01:49:59.061528 waagent[1497]: 2024-12-13T01:49:59.061480Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 01:49:59.064658 waagent[1497]: 2024-12-13T01:49:59.064599Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 01:49:59.390403 waagent[1497]: 2024-12-13T01:49:59.390200Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 01:49:59.402474 waagent[1497]: 2024-12-13T01:49:59.402394Z INFO Daemon Daemon Forcing an update of the goal state..
Dec 13 01:49:59.405212 waagent[1497]: 2024-12-13T01:49:59.405146Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Dec 13 01:49:59.486224 waagent[1497]: 2024-12-13T01:49:59.486075Z INFO Daemon Daemon Found private key matching thumbprint 381F23118060580A40607EAF038D167B925DDA13
Dec 13 01:49:59.496280 waagent[1497]: 2024-12-13T01:49:59.487403Z INFO Daemon Daemon Certificate with thumbprint 000D6F95CDB5FE8B5619C3A55FE3DE088755515D has no matching private key.
Dec 13 01:49:59.496280 waagent[1497]: 2024-12-13T01:49:59.487993Z INFO Daemon Daemon Fetch goal state completed
Dec 13 01:49:59.538356 waagent[1497]: 2024-12-13T01:49:59.538282Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 93f14171-4b75-442b-a284-2b7696dcb6e0 New eTag: 9333032416594252243]
Dec 13 01:49:59.546729 waagent[1497]: 2024-12-13T01:49:59.540139Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 01:49:59.550711 waagent[1497]: 2024-12-13T01:49:59.550652Z INFO Daemon Daemon Starting provisioning
Dec 13 01:49:59.557273 waagent[1497]: 2024-12-13T01:49:59.551850Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 01:49:59.557273 waagent[1497]: 2024-12-13T01:49:59.552688Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-f5ec44d98c]
Dec 13 01:49:59.576887 waagent[1497]: 2024-12-13T01:49:59.576737Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-f5ec44d98c]
Dec 13 01:49:59.584096 waagent[1497]: 2024-12-13T01:49:59.578531Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 01:49:59.584096 waagent[1497]: 2024-12-13T01:49:59.579786Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 01:49:59.593896 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Dec 13 01:49:59.594159 systemd[1]: Stopped systemd-networkd-wait-online.service.
Dec 13 01:49:59.594233 systemd[1]: Stopping systemd-networkd-wait-online.service...
Dec 13 01:49:59.594577 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:49:59.601125 systemd-networkd[1180]: eth0: DHCPv6 lease lost
Dec 13 01:49:59.602457 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:49:59.602646 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:49:59.604889 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:49:59.635812 systemd-networkd[1562]: enP22449s1: Link UP
Dec 13 01:49:59.635822 systemd-networkd[1562]: enP22449s1: Gained carrier
Dec 13 01:49:59.637247 systemd-networkd[1562]: eth0: Link UP
Dec 13 01:49:59.637255 systemd-networkd[1562]: eth0: Gained carrier
Dec 13 01:49:59.637674 systemd-networkd[1562]: lo: Link UP
Dec 13 01:49:59.637682 systemd-networkd[1562]: lo: Gained carrier
Dec 13 01:49:59.637987 systemd-networkd[1562]: eth0: Gained IPv6LL
Dec 13 01:49:59.638295 systemd-networkd[1562]: Enumeration completed
Dec 13 01:49:59.638380 systemd[1]: Started systemd-networkd.service.
Dec 13 01:49:59.640313 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 01:49:59.644546 systemd-networkd[1562]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:49:59.646037 waagent[1497]: 2024-12-13T01:49:59.645868Z INFO Daemon Daemon Create user account if not exists
Dec 13 01:49:59.649253 waagent[1497]: 2024-12-13T01:49:59.647468Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 01:49:59.649253 waagent[1497]: 2024-12-13T01:49:59.648172Z INFO Daemon Daemon Configure sudoer
Dec 13 01:49:59.649593 waagent[1497]: 2024-12-13T01:49:59.649535Z INFO Daemon Daemon Configure sshd
Dec 13 01:49:59.650787 waagent[1497]: 2024-12-13T01:49:59.650734Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 01:49:59.685242 systemd-networkd[1562]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:49:59.688369 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 01:50:00.777119 waagent[1497]: 2024-12-13T01:50:00.777015Z INFO Daemon Daemon Provisioning complete
Dec 13 01:50:00.792409 waagent[1497]: 2024-12-13T01:50:00.792338Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 01:50:00.798479 waagent[1497]: 2024-12-13T01:50:00.793503Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 01:50:00.798479 waagent[1497]: 2024-12-13T01:50:00.795165Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Dec 13 01:50:01.059033 waagent[1571]: 2024-12-13T01:50:01.058864Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Dec 13 01:50:01.059771 waagent[1571]: 2024-12-13T01:50:01.059701Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:50:01.059916 waagent[1571]: 2024-12-13T01:50:01.059861Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:50:01.070741 waagent[1571]: 2024-12-13T01:50:01.070667Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Dec 13 01:50:01.070892 waagent[1571]: 2024-12-13T01:50:01.070838Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Dec 13 01:50:01.130831 waagent[1571]: 2024-12-13T01:50:01.130711Z INFO ExtHandler ExtHandler Found private key matching thumbprint 381F23118060580A40607EAF038D167B925DDA13
Dec 13 01:50:01.131041 waagent[1571]: 2024-12-13T01:50:01.130985Z INFO ExtHandler ExtHandler Certificate with thumbprint 000D6F95CDB5FE8B5619C3A55FE3DE088755515D has no matching private key.
Dec 13 01:50:01.131289 waagent[1571]: 2024-12-13T01:50:01.131238Z INFO ExtHandler ExtHandler Fetch goal state completed
Dec 13 01:50:01.144601 waagent[1571]: 2024-12-13T01:50:01.144550Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 24aa3e4a-1801-404d-aa6e-da04d5b92142 New eTag: 9333032416594252243]
Dec 13 01:50:01.145127 waagent[1571]: 2024-12-13T01:50:01.145059Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 01:50:01.312825 waagent[1571]: 2024-12-13T01:50:01.312593Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:50:01.337114 waagent[1571]: 2024-12-13T01:50:01.337008Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1571
Dec 13 01:50:01.340512 waagent[1571]: 2024-12-13T01:50:01.340444Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:50:01.341724 waagent[1571]: 2024-12-13T01:50:01.341664Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:50:01.418157 waagent[1571]: 2024-12-13T01:50:01.418071Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:50:01.418547 waagent[1571]: 2024-12-13T01:50:01.418484Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:50:01.426486 waagent[1571]: 2024-12-13T01:50:01.426429Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:50:01.426937 waagent[1571]: 2024-12-13T01:50:01.426878Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 01:50:01.427986 waagent[1571]: 2024-12-13T01:50:01.427921Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Dec 13 01:50:01.429264 waagent[1571]: 2024-12-13T01:50:01.429206Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:50:01.429616 waagent[1571]: 2024-12-13T01:50:01.429560Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:50:01.429974 waagent[1571]: 2024-12-13T01:50:01.429920Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:50:01.430490 waagent[1571]: 2024-12-13T01:50:01.430436Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:50:01.430782 waagent[1571]: 2024-12-13T01:50:01.430728Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 01:50:01.430782 waagent[1571]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Dec 13 01:50:01.430782 waagent[1571]: eth0        00000000        0108C80A        0003        0        0        1024        00000000        0        0        0
Dec 13 01:50:01.430782 waagent[1571]: eth0        0008C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Dec 13 01:50:01.430782 waagent[1571]: eth0        0108C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Dec 13 01:50:01.430782 waagent[1571]: eth0        10813FA8        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Dec 13 01:50:01.430782 waagent[1571]: eth0        FEA9FEA9        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Dec 13 01:50:01.433973 waagent[1571]: 2024-12-13T01:50:01.433784Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 01:50:01.434714 waagent[1571]: 2024-12-13T01:50:01.434655Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:50:01.434888 waagent[1571]: 2024-12-13T01:50:01.434838Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:50:01.435515 waagent[1571]: 2024-12-13T01:50:01.435455Z INFO EnvHandler ExtHandler Configure routes
Dec 13 01:50:01.435658 waagent[1571]: 2024-12-13T01:50:01.435611Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 01:50:01.435785 waagent[1571]: 2024-12-13T01:50:01.435742Z INFO EnvHandler ExtHandler Routes:None
Dec 13 01:50:01.436687 waagent[1571]: 2024-12-13T01:50:01.436631Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:50:01.436885 waagent[1571]: 2024-12-13T01:50:01.436833Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:50:01.437895 waagent[1571]: 2024-12-13T01:50:01.437835Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 01:50:01.438101 waagent[1571]: 2024-12-13T01:50:01.438038Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 01:50:01.438395 waagent[1571]: 2024-12-13T01:50:01.438344Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 01:50:01.448893 waagent[1571]: 2024-12-13T01:50:01.448837Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Dec 13 01:50:01.450570 waagent[1571]: 2024-12-13T01:50:01.450506Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 01:50:01.451570 waagent[1571]: 2024-12-13T01:50:01.451511Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Dec 13 01:50:01.495048 waagent[1571]: 2024-12-13T01:50:01.493128Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Dec 13 01:50:01.502278 waagent[1571]: 2024-12-13T01:50:01.502218Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1562'
Dec 13 01:50:01.579691 waagent[1571]: 2024-12-13T01:50:01.579488Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 01:50:01.579691 waagent[1571]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 01:50:01.579691 waagent[1571]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 01:50:01.579691 waagent[1571]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 7c:1e:52:35:89:b6 brd ff:ff:ff:ff:ff:ff
Dec 13 01:50:01.579691 waagent[1571]: 3: enP22449s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 7c:1e:52:35:89:b6 brd ff:ff:ff:ff:ff:ff\    altname enP22449p0s2
Dec 13 01:50:01.579691 waagent[1571]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 01:50:01.579691 waagent[1571]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Dec 13 01:50:01.579691 waagent[1571]: 2: eth0    inet 10.200.8.24/24 metric 1024 brd 10.200.8.255 scope global eth0\       valid_lft forever preferred_lft forever
Dec 13 01:50:01.579691 waagent[1571]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 01:50:01.579691 waagent[1571]: 1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
Dec 13 01:50:01.579691 waagent[1571]: 2: eth0    inet6 fe80::7e1e:52ff:fe35:89b6/64 scope link \       valid_lft forever preferred_lft forever
Dec 13 01:50:01.842897 waagent[1571]: 2024-12-13T01:50:01.842770Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Dec 13 01:50:02.800256 waagent[1497]: 2024-12-13T01:50:02.800056Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Dec 13 01:50:02.806558 waagent[1497]: 2024-12-13T01:50:02.806479Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Dec 13 01:50:03.734120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:50:03.734382 systemd[1]: Stopped kubelet.service.
Dec 13 01:50:03.734438 systemd[1]: kubelet.service: Consumed 1.033s CPU time.
Dec 13 01:50:03.736367 systemd[1]: Starting kubelet.service...
Dec 13 01:50:03.849581 waagent[1609]: 2024-12-13T01:50:03.849464Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Dec 13 01:50:03.850887 waagent[1609]: 2024-12-13T01:50:03.850806Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6
Dec 13 01:50:03.851235 waagent[1609]: 2024-12-13T01:50:03.851169Z INFO ExtHandler ExtHandler Python: 3.9.16
Dec 13 01:50:03.851513 waagent[1609]: 2024-12-13T01:50:03.851454Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Dec 13 01:50:03.865074 waagent[1609]: 2024-12-13T01:50:03.864974Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:50:03.865467 waagent[1609]: 2024-12-13T01:50:03.865408Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:50:03.865626 waagent[1609]: 2024-12-13T01:50:03.865577Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:50:03.877704 waagent[1609]: 2024-12-13T01:50:03.877637Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 01:50:03.886280 waagent[1609]: 2024-12-13T01:50:03.886224Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 01:50:03.887184 waagent[1609]: 2024-12-13T01:50:03.887127Z INFO ExtHandler
Dec 13 01:50:03.887332 waagent[1609]: 2024-12-13T01:50:03.887283Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b7d5018b-2978-4360-898f-3ed8c1d6e688 eTag: 9333032416594252243 source: Fabric]
Dec 13 01:50:03.888022 waagent[1609]: 2024-12-13T01:50:03.887960Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 01:50:03.889919 waagent[1609]: 2024-12-13T01:50:03.889218Z INFO ExtHandler
Dec 13 01:50:03.889919 waagent[1609]: 2024-12-13T01:50:03.889391Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 01:50:03.896648 waagent[1609]: 2024-12-13T01:50:03.896598Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 01:50:03.897074 waagent[1609]: 2024-12-13T01:50:03.897015Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 01:50:03.917787 waagent[1609]: 2024-12-13T01:50:03.917726Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Dec 13 01:50:04.065642 waagent[1609]: 2024-12-13T01:50:04.065433Z INFO ExtHandler Downloaded certificate {'thumbprint': '381F23118060580A40607EAF038D167B925DDA13', 'hasPrivateKey': True}
Dec 13 01:50:04.067146 waagent[1609]: 2024-12-13T01:50:04.067051Z INFO ExtHandler Downloaded certificate {'thumbprint': '000D6F95CDB5FE8B5619C3A55FE3DE088755515D', 'hasPrivateKey': False}
Dec 13 01:50:04.069075 waagent[1609]: 2024-12-13T01:50:04.069008Z INFO ExtHandler Fetch goal state completed
Dec 13 01:50:04.073723 systemd[1]: Started kubelet.service.
Dec 13 01:50:04.093137 waagent[1609]: 2024-12-13T01:50:04.093022Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Dec 13 01:50:04.104589 waagent[1609]: 2024-12-13T01:50:04.104507Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1609
Dec 13 01:50:04.107736 waagent[1609]: 2024-12-13T01:50:04.107670Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:50:04.108690 waagent[1609]: 2024-12-13T01:50:04.108629Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 13 01:50:04.108964 waagent[1609]: 2024-12-13T01:50:04.108908Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 13 01:50:04.111003 waagent[1609]: 2024-12-13T01:50:04.110943Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:50:04.115606 waagent[1609]: 2024-12-13T01:50:04.115551Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:50:04.143762 waagent[1609]: 2024-12-13T01:50:04.143670Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:50:04.152601 waagent[1609]: 2024-12-13T01:50:04.152549Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:50:04.153056 waagent[1609]: 2024-12-13T01:50:04.152999Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 01:50:04.158961 waagent[1609]: 2024-12-13T01:50:04.158873Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 13 01:50:04.159981 waagent[1609]: 2024-12-13T01:50:04.159915Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 13 01:50:04.161402 waagent[1609]: 2024-12-13T01:50:04.161343Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:50:04.161758 waagent[1609]: 2024-12-13T01:50:04.161703Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:50:04.162568 waagent[1609]: 2024-12-13T01:50:04.162519Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:50:04.163205 waagent[1609]: 2024-12-13T01:50:04.163154Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:50:04.163517 waagent[1609]: 2024-12-13T01:50:04.163470Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 01:50:04.163517 waagent[1609]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Dec 13 01:50:04.163517 waagent[1609]: eth0        00000000        0108C80A        0003        0        0        1024        00000000        0        0        0
Dec 13 01:50:04.163517 waagent[1609]: eth0        0008C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Dec 13 01:50:04.163517 waagent[1609]: eth0        0108C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Dec 13 01:50:04.163517 waagent[1609]: eth0        10813FA8        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Dec 13 01:50:04.163517 waagent[1609]: eth0        FEA9FEA9        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Dec 13 01:50:04.165766 waagent[1609]: 2024-12-13T01:50:04.165676Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 01:50:04.166686 waagent[1609]: 2024-12-13T01:50:04.166626Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:50:04.167009 waagent[1609]: 2024-12-13T01:50:04.166922Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:50:04.167296 waagent[1609]: 2024-12-13T01:50:04.167235Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:50:04.167680 waagent[1609]: 2024-12-13T01:50:04.167632Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:50:04.168335 waagent[1609]: 2024-12-13T01:50:04.168240Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 01:50:04.168416 waagent[1609]: 2024-12-13T01:50:04.168357Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 01:50:04.169142 waagent[1609]: 2024-12-13T01:50:04.169061Z INFO EnvHandler ExtHandler Configure routes
Dec 13 01:50:04.171449 waagent[1609]: 2024-12-13T01:50:04.171227Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 01:50:04.172162 waagent[1609]: 2024-12-13T01:50:04.172073Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 01:50:04.174220 waagent[1609]: 2024-12-13T01:50:04.174165Z INFO EnvHandler ExtHandler Routes:None
Dec 13 01:50:04.184933 waagent[1609]: 2024-12-13T01:50:04.184871Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 01:50:04.184933 waagent[1609]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 01:50:04.184933 waagent[1609]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 01:50:04.184933 waagent[1609]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 7c:1e:52:35:89:b6 brd ff:ff:ff:ff:ff:ff
Dec 13 01:50:04.184933 waagent[1609]: 3: enP22449s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 7c:1e:52:35:89:b6 brd ff:ff:ff:ff:ff:ff\    altname enP22449p0s2
Dec 13 01:50:04.184933 waagent[1609]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 01:50:04.184933 waagent[1609]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Dec 13 01:50:04.184933 waagent[1609]: 2: eth0    inet 10.200.8.24/24 metric 1024 brd 10.200.8.255 scope global eth0\       valid_lft forever preferred_lft forever
Dec 13 01:50:04.184933 waagent[1609]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 01:50:04.184933 waagent[1609]: 1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
Dec 13 01:50:04.184933 waagent[1609]: 2: eth0    inet6 fe80::7e1e:52ff:fe35:89b6/64 scope link \       valid_lft forever preferred_lft forever
Dec 13 01:50:04.199369 waagent[1609]: 2024-12-13T01:50:04.197884Z INFO ExtHandler ExtHandler Downloading agent manifest
Dec 13 01:50:04.228748 waagent[1609]: 2024-12-13T01:50:04.228680Z INFO ExtHandler ExtHandler
Dec 13 01:50:04.228902 waagent[1609]: 2024-12-13T01:50:04.228834Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fec57668-2661-4d8e-9451-047dc621bb3a correlation 4d661815-a2e2-4ac1-a988-e6e854b7b884 created: 2024-12-13T01:48:04.347351Z]
Dec 13 01:50:04.229883 waagent[1609]: 2024-12-13T01:50:04.229821Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 13 01:50:04.231821 waagent[1609]: 2024-12-13T01:50:04.231761Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Dec 13 01:50:04.254771 waagent[1609]: 2024-12-13T01:50:04.254702Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Dec 13 01:50:04.379944 kubelet[1625]: E1213 01:50:04.379828    1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:50:04.383326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:50:04.383487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:50:04.390204 waagent[1609]: 2024-12-13T01:50:04.390052Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1F1FC999-24E7-4EE3-875F-69081BB66E41;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Dec 13 01:50:04.451218 waagent[1609]: 2024-12-13T01:50:04.451102Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Dec 13 01:50:04.451218 waagent[1609]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:50:04.451218 waagent[1609]:     pkts      bytes target     prot opt in     out     source               destination
Dec 13 01:50:04.451218 waagent[1609]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:50:04.451218 waagent[1609]:     pkts      bytes target     prot opt in     out     source               destination
Dec 13 01:50:04.451218 waagent[1609]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes)
Dec 13 01:50:04.451218 waagent[1609]:     pkts      bytes target     prot opt in     out     source               destination
Dec 13 01:50:04.451218 waagent[1609]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Dec 13 01:50:04.451218 waagent[1609]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Dec 13 01:50:04.451218 waagent[1609]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Dec 13 01:50:04.458274 waagent[1609]: 2024-12-13T01:50:04.458177Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 13 01:50:04.458274 waagent[1609]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:50:04.458274 waagent[1609]:     pkts      bytes target     prot opt in     out     source               destination
Dec 13 01:50:04.458274 waagent[1609]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:50:04.458274 waagent[1609]:     pkts      bytes target     prot opt in     out     source               destination
Dec 13 01:50:04.458274 waagent[1609]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes)
Dec 13 01:50:04.458274 waagent[1609]:     pkts      bytes target     prot opt in     out     source               destination
Dec 13 01:50:04.458274 waagent[1609]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Dec 13 01:50:04.458274 waagent[1609]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Dec 13 01:50:04.458274 waagent[1609]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Dec 13 01:50:04.458864 waagent[1609]: 2024-12-13T01:50:04.458812Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 01:50:14.484172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:50:14.484495 systemd[1]: Stopped kubelet.service.
Dec 13 01:50:14.486550 systemd[1]: Starting kubelet.service...
Dec 13 01:50:14.814268 systemd[1]: Started kubelet.service.
Dec 13 01:50:15.114051 kubelet[1671]: E1213 01:50:15.113936    1671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:50:15.116173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:50:15.116337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:50:24.579912 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Dec 13 01:50:25.234322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:50:25.234648 systemd[1]: Stopped kubelet.service.
Dec 13 01:50:25.236696 systemd[1]: Starting kubelet.service...
Dec 13 01:50:25.563284 systemd[1]: Started kubelet.service.
Dec 13 01:50:25.876757 kubelet[1681]: E1213 01:50:25.876640    1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:50:25.878665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:50:25.878777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:50:35.984108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:50:35.984435 systemd[1]: Stopped kubelet.service.
Dec 13 01:50:35.986525 systemd[1]: Starting kubelet.service...
Dec 13 01:50:36.193378 systemd[1]: Started kubelet.service.
Dec 13 01:50:36.596756 kubelet[1691]: E1213 01:50:36.596700    1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:50:36.598608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:50:36.598765 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:50:37.609848 update_engine[1405]: I1213 01:50:37.609679  1405 update_attempter.cc:509] Updating boot flags...
Dec 13 01:50:42.096533 systemd[1]: Created slice system-sshd.slice.
Dec 13 01:50:42.098772 systemd[1]: Started sshd@0-10.200.8.24:22-10.200.16.10:54434.service.
Dec 13 01:50:42.943812 sshd[1737]: Accepted publickey for core from 10.200.16.10 port 54434 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:50:42.945494 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:50:42.949838 systemd-logind[1402]: New session 3 of user core.
Dec 13 01:50:42.950489 systemd[1]: Started session-3.scope.
Dec 13 01:50:43.486746 systemd[1]: Started sshd@1-10.200.8.24:22-10.200.16.10:54436.service.
Dec 13 01:50:44.122942 sshd[1742]: Accepted publickey for core from 10.200.16.10 port 54436 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:50:44.124608 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:50:44.129954 systemd[1]: Started session-4.scope.
Dec 13 01:50:44.130409 systemd-logind[1402]: New session 4 of user core.
Dec 13 01:50:44.568451 sshd[1742]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:44.571772 systemd[1]: sshd@1-10.200.8.24:22-10.200.16.10:54436.service: Deactivated successfully.
Dec 13 01:50:44.572802 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:50:44.573618 systemd-logind[1402]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:50:44.574532 systemd-logind[1402]: Removed session 4.
Dec 13 01:50:44.672744 systemd[1]: Started sshd@2-10.200.8.24:22-10.200.16.10:54446.service.
Dec 13 01:50:45.298126 sshd[1748]: Accepted publickey for core from 10.200.16.10 port 54446 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:50:45.299782 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:50:45.304736 systemd[1]: Started session-5.scope.
Dec 13 01:50:45.305345 systemd-logind[1402]: New session 5 of user core.
Dec 13 01:50:45.738396 sshd[1748]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:45.741424 systemd[1]: sshd@2-10.200.8.24:22-10.200.16.10:54446.service: Deactivated successfully.
Dec 13 01:50:45.742258 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:50:45.742895 systemd-logind[1402]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:50:45.743653 systemd-logind[1402]: Removed session 5.
Dec 13 01:50:45.843042 systemd[1]: Started sshd@3-10.200.8.24:22-10.200.16.10:54454.service.
Dec 13 01:50:46.470385 sshd[1754]: Accepted publickey for core from 10.200.16.10 port 54454 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:50:46.472045 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:50:46.477155 systemd[1]: Started session-6.scope.
Dec 13 01:50:46.477604 systemd-logind[1402]: New session 6 of user core.
Dec 13 01:50:46.734210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:50:46.734447 systemd[1]: Stopped kubelet.service.
Dec 13 01:50:46.736154 systemd[1]: Starting kubelet.service...
Dec 13 01:50:46.906608 systemd[1]: Started kubelet.service.
Dec 13 01:50:46.919381 sshd[1754]: pam_unix(sshd:session): session closed for user core
Dec 13 01:50:46.922005 systemd[1]: sshd@3-10.200.8.24:22-10.200.16.10:54454.service: Deactivated successfully.
Dec 13 01:50:46.922776 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:50:46.924674 systemd-logind[1402]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:50:46.925794 systemd-logind[1402]: Removed session 6.
Dec 13 01:50:47.023098 systemd[1]: Started sshd@4-10.200.8.24:22-10.200.16.10:54468.service.
Dec 13 01:50:47.352220 kubelet[1762]: E1213 01:50:47.352096    1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:50:47.354067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:50:47.354244 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:50:47.648727 sshd[1770]: Accepted publickey for core from 10.200.16.10 port 54468 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:50:47.650350 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:50:47.655001 systemd[1]: Started session-7.scope.
Dec 13 01:50:47.655605 systemd-logind[1402]: New session 7 of user core.
Dec 13 01:50:48.245342 sudo[1774]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:50:48.245712 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 01:50:48.283177 systemd[1]: Starting docker.service...
Dec 13 01:50:48.332760 env[1784]: time="2024-12-13T01:50:48.332718393Z" level=info msg="Starting up"
Dec 13 01:50:48.334174 env[1784]: time="2024-12-13T01:50:48.334141594Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 01:50:48.334174 env[1784]: time="2024-12-13T01:50:48.334162894Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 01:50:48.334340 env[1784]: time="2024-12-13T01:50:48.334187394Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 01:50:48.334340 env[1784]: time="2024-12-13T01:50:48.334200194Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 01:50:48.336294 env[1784]: time="2024-12-13T01:50:48.336268194Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 01:50:48.336294 env[1784]: time="2024-12-13T01:50:48.336285094Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 01:50:48.336458 env[1784]: time="2024-12-13T01:50:48.336302994Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 01:50:48.336458 env[1784]: time="2024-12-13T01:50:48.336315294Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 01:50:48.344226 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3723240345-merged.mount: Deactivated successfully.
Dec 13 01:50:48.471290 env[1784]: time="2024-12-13T01:50:48.471243324Z" level=info msg="Loading containers: start."
Dec 13 01:50:48.666160 kernel: Initializing XFRM netlink socket
Dec 13 01:50:48.718592 env[1784]: time="2024-12-13T01:50:48.718549679Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 01:50:48.828473 systemd-networkd[1562]: docker0: Link UP
Dec 13 01:50:48.854545 env[1784]: time="2024-12-13T01:50:48.854508409Z" level=info msg="Loading containers: done."
Dec 13 01:50:48.866406 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1672553177-merged.mount: Deactivated successfully.
Dec 13 01:50:48.879461 env[1784]: time="2024-12-13T01:50:48.879428614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:50:48.879657 env[1784]: time="2024-12-13T01:50:48.879631014Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 01:50:48.879756 env[1784]: time="2024-12-13T01:50:48.879736314Z" level=info msg="Daemon has completed initialization"
Dec 13 01:50:48.911006 systemd[1]: Started docker.service.
Dec 13 01:50:48.921159 env[1784]: time="2024-12-13T01:50:48.921109923Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:50:54.065464 env[1419]: time="2024-12-13T01:50:54.065413839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:50:54.953594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000172836.mount: Deactivated successfully.
Dec 13 01:50:56.989363 env[1419]: time="2024-12-13T01:50:56.989293700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:56.998212 env[1419]: time="2024-12-13T01:50:56.998173875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:57.002692 env[1419]: time="2024-12-13T01:50:57.002661712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:57.006701 env[1419]: time="2024-12-13T01:50:57.006597230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:57.007486 env[1419]: time="2024-12-13T01:50:57.007454456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:50:57.017838 env[1419]: time="2024-12-13T01:50:57.017813267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:50:57.484155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 01:50:57.484461 systemd[1]: Stopped kubelet.service.
Dec 13 01:50:57.486503 systemd[1]: Starting kubelet.service...
Dec 13 01:50:57.588096 systemd[1]: Started kubelet.service.
Dec 13 01:50:57.654123 kubelet[1913]: E1213 01:50:57.654053    1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:50:57.656242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:50:57.656400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:50:59.367604 env[1419]: time="2024-12-13T01:50:59.367474636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:59.372247 env[1419]: time="2024-12-13T01:50:59.372208571Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:59.376448 env[1419]: time="2024-12-13T01:50:59.376404590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:59.380629 env[1419]: time="2024-12-13T01:50:59.380596609Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:50:59.381437 env[1419]: time="2024-12-13T01:50:59.381403032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:50:59.392426 env[1419]: time="2024-12-13T01:50:59.392397345Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:51:00.630244 env[1419]: time="2024-12-13T01:51:00.630189557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:00.639201 env[1419]: time="2024-12-13T01:51:00.639145905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:00.644046 env[1419]: time="2024-12-13T01:51:00.644007439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:00.648860 env[1419]: time="2024-12-13T01:51:00.648824673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:00.649472 env[1419]: time="2024-12-13T01:51:00.649431989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:51:00.659260 env[1419]: time="2024-12-13T01:51:00.659238161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:51:01.959066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954769771.mount: Deactivated successfully.
Dec 13 01:51:02.536439 env[1419]: time="2024-12-13T01:51:02.536382990Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:02.544311 env[1419]: time="2024-12-13T01:51:02.544267697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:02.548581 env[1419]: time="2024-12-13T01:51:02.548542109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:02.552099 env[1419]: time="2024-12-13T01:51:02.552049701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:02.552486 env[1419]: time="2024-12-13T01:51:02.552454511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:51:02.567223 env[1419]: time="2024-12-13T01:51:02.567188798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:51:03.087747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347115984.mount: Deactivated successfully.
Dec 13 01:51:04.463446 env[1419]: time="2024-12-13T01:51:04.463383687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:04.472109 env[1419]: time="2024-12-13T01:51:04.472047602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:04.478578 env[1419]: time="2024-12-13T01:51:04.478537963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:04.484403 env[1419]: time="2024-12-13T01:51:04.484366208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:04.485218 env[1419]: time="2024-12-13T01:51:04.485183829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:51:04.495226 env[1419]: time="2024-12-13T01:51:04.495195377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:51:05.003551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702578973.mount: Deactivated successfully.
Dec 13 01:51:05.026254 env[1419]: time="2024-12-13T01:51:05.026210659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:05.037959 env[1419]: time="2024-12-13T01:51:05.037919942Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:05.044683 env[1419]: time="2024-12-13T01:51:05.044653105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:05.048481 env[1419]: time="2024-12-13T01:51:05.048443297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:05.048923 env[1419]: time="2024-12-13T01:51:05.048887108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:51:05.058522 env[1419]: time="2024-12-13T01:51:05.058494440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:51:05.622175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1369555993.mount: Deactivated successfully.
Dec 13 01:51:07.734050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 01:51:07.734335 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:07.736197 systemd[1]: Starting kubelet.service...
Dec 13 01:51:07.861828 systemd[1]: Started kubelet.service.
Dec 13 01:51:08.364428 kubelet[1949]: E1213 01:51:08.364375    1949 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:51:08.366268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:51:08.366431 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:51:08.592509 env[1419]: time="2024-12-13T01:51:08.592451922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:08.601186 env[1419]: time="2024-12-13T01:51:08.601151017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:08.607650 env[1419]: time="2024-12-13T01:51:08.607614861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:08.612321 env[1419]: time="2024-12-13T01:51:08.612288166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:08.613039 env[1419]: time="2024-12-13T01:51:08.613009882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:51:11.431002 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:11.434221 systemd[1]: Starting kubelet.service...
Dec 13 01:51:11.464938 systemd[1]: Reloading.
Dec 13 01:51:11.585278 /usr/lib/systemd/system-generators/torcx-generator[2040]: time="2024-12-13T01:51:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:51:11.585315 /usr/lib/systemd/system-generators/torcx-generator[2040]: time="2024-12-13T01:51:11Z" level=info msg="torcx already run"
Dec 13 01:51:11.678562 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:51:11.678582 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:51:11.694964 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:51:11.790270 systemd[1]: Started kubelet.service.
Dec 13 01:51:11.792289 systemd[1]: Stopping kubelet.service...
Dec 13 01:51:11.792876 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:51:11.793103 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:11.794774 systemd[1]: Starting kubelet.service...
Dec 13 01:51:12.061114 systemd[1]: Started kubelet.service.
Dec 13 01:51:12.106115 kubelet[2110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:51:12.106115 kubelet[2110]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:51:12.106115 kubelet[2110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:51:12.106600 kubelet[2110]: I1213 01:51:12.106170    2110 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:51:12.913348 kubelet[2110]: I1213 01:51:12.913309    2110 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:51:12.913519 kubelet[2110]: I1213 01:51:12.913385    2110 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:51:12.913668 kubelet[2110]: I1213 01:51:12.913647    2110 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:51:12.944126 kubelet[2110]: E1213 01:51:12.944047    2110 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:12.945035 kubelet[2110]: I1213 01:51:12.945009    2110 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:51:12.953416 kubelet[2110]: I1213 01:51:12.953396    2110 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 01:51:12.954867 kubelet[2110]: I1213 01:51:12.954842    2110 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:51:12.955055 kubelet[2110]: I1213 01:51:12.955034    2110 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:51:12.955221 kubelet[2110]: I1213 01:51:12.955068    2110 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:51:12.955221 kubelet[2110]: I1213 01:51:12.955103    2110 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:51:12.955221 kubelet[2110]: I1213 01:51:12.955219    2110 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:51:12.955348 kubelet[2110]: I1213 01:51:12.955332    2110 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:51:12.955391 kubelet[2110]: I1213 01:51:12.955350    2110 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:51:12.955391 kubelet[2110]: I1213 01:51:12.955378    2110 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:51:12.955465 kubelet[2110]: I1213 01:51:12.955399    2110 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:51:12.957740 kubelet[2110]: W1213 01:51:12.957687    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:12.957840 kubelet[2110]: E1213 01:51:12.957753    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:12.957891 kubelet[2110]: I1213 01:51:12.957845    2110 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 01:51:12.961223 kubelet[2110]: I1213 01:51:12.961196    2110 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:51:12.963562 kubelet[2110]: W1213 01:51:12.963538    2110 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:51:12.967603 kubelet[2110]: W1213 01:51:12.967561    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-f5ec44d98c&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:12.967732 kubelet[2110]: E1213 01:51:12.967718    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-f5ec44d98c&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:12.967866 kubelet[2110]: I1213 01:51:12.967817    2110 server.go:1256] "Started kubelet"
Dec 13 01:51:12.969778 kubelet[2110]: I1213 01:51:12.969758    2110 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:51:12.970788 kubelet[2110]: I1213 01:51:12.970770    2110 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:51:12.978244 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 01:51:12.978319 kubelet[2110]: I1213 01:51:12.973333    2110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:51:12.978319 kubelet[2110]: I1213 01:51:12.973478    2110 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:51:12.978513 kubelet[2110]: I1213 01:51:12.978491    2110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:51:12.980947 kubelet[2110]: E1213 01:51:12.980924    2110 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-f5ec44d98c.18109989a359fd17  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-f5ec44d98c,UID:ci-3510.3.6-a-f5ec44d98c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-f5ec44d98c,},FirstTimestamp:2024-12-13 01:51:12.967793943 +0000 UTC m=+0.900829887,LastTimestamp:2024-12-13 01:51:12.967793943 +0000 UTC m=+0.900829887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-f5ec44d98c,}"
Dec 13 01:51:12.982767 kubelet[2110]: E1213 01:51:12.982750    2110 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:51:12.984920 kubelet[2110]: I1213 01:51:12.984896    2110 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:51:12.986112 kubelet[2110]: I1213 01:51:12.986097    2110 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:51:12.986303 kubelet[2110]: I1213 01:51:12.986284    2110 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:51:12.986921 kubelet[2110]: E1213 01:51:12.986903    2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-f5ec44d98c?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="200ms"
Dec 13 01:51:12.987777 kubelet[2110]: I1213 01:51:12.987756    2110 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:51:12.987862 kubelet[2110]: I1213 01:51:12.987823    2110 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:51:12.988699 kubelet[2110]: I1213 01:51:12.988683    2110 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:51:12.999938 kubelet[2110]: W1213 01:51:12.999668    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:12.999938 kubelet[2110]: E1213 01:51:12.999801    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:13.013404 kubelet[2110]: I1213 01:51:13.013379    2110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:51:13.014831 kubelet[2110]: I1213 01:51:13.014810    2110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:51:13.014911 kubelet[2110]: I1213 01:51:13.014841    2110 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:51:13.014911 kubelet[2110]: I1213 01:51:13.014858    2110 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:51:13.014911 kubelet[2110]: E1213 01:51:13.014902    2110 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:51:13.019858 kubelet[2110]: W1213 01:51:13.019817    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:13.019858 kubelet[2110]: E1213 01:51:13.019859    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:13.028040 kubelet[2110]: I1213 01:51:13.028023    2110 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:51:13.028040 kubelet[2110]: I1213 01:51:13.028040    2110 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:51:13.028181 kubelet[2110]: I1213 01:51:13.028058    2110 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:51:13.036766 kubelet[2110]: I1213 01:51:13.036742    2110 policy_none.go:49] "None policy: Start"
Dec 13 01:51:13.037243 kubelet[2110]: I1213 01:51:13.037231    2110 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:51:13.037338 kubelet[2110]: I1213 01:51:13.037332    2110 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:51:13.049181 systemd[1]: Created slice kubepods.slice.
Dec 13 01:51:13.053425 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 01:51:13.056331 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 01:51:13.062824 kubelet[2110]: I1213 01:51:13.062798    2110 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:51:13.063004 kubelet[2110]: I1213 01:51:13.062984    2110 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:51:13.065315 kubelet[2110]: E1213 01:51:13.065299    2110 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-f5ec44d98c\" not found"
Dec 13 01:51:13.087063 kubelet[2110]: I1213 01:51:13.087044    2110 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.087611 kubelet[2110]: E1213 01:51:13.087589    2110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.116022 kubelet[2110]: I1213 01:51:13.115946    2110 topology_manager.go:215] "Topology Admit Handler" podUID="99638ff36f1f24075577e1c24ea618e6" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.118298 kubelet[2110]: I1213 01:51:13.118274    2110 topology_manager.go:215] "Topology Admit Handler" podUID="98c6329e952b04925b1907b3bb1aad22" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.120015 kubelet[2110]: I1213 01:51:13.119980    2110 topology_manager.go:215] "Topology Admit Handler" podUID="6dd8a23607bef9720185e1110c6b4550" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.126544 systemd[1]: Created slice kubepods-burstable-pod99638ff36f1f24075577e1c24ea618e6.slice.
Dec 13 01:51:13.136905 systemd[1]: Created slice kubepods-burstable-pod98c6329e952b04925b1907b3bb1aad22.slice.
Dec 13 01:51:13.145189 systemd[1]: Created slice kubepods-burstable-pod6dd8a23607bef9720185e1110c6b4550.slice.
Dec 13 01:51:13.188414 kubelet[2110]: E1213 01:51:13.188336    2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-f5ec44d98c?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="400ms"
Dec 13 01:51:13.189603 kubelet[2110]: I1213 01:51:13.189582    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99638ff36f1f24075577e1c24ea618e6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" (UID: \"99638ff36f1f24075577e1c24ea618e6\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.189783 kubelet[2110]: I1213 01:51:13.189769    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99638ff36f1f24075577e1c24ea618e6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" (UID: \"99638ff36f1f24075577e1c24ea618e6\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.189920 kubelet[2110]: I1213 01:51:13.189908    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.190116 kubelet[2110]: I1213 01:51:13.190103    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.190256 kubelet[2110]: I1213 01:51:13.190245    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99638ff36f1f24075577e1c24ea618e6-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" (UID: \"99638ff36f1f24075577e1c24ea618e6\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.190377 kubelet[2110]: I1213 01:51:13.190366    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.190501 kubelet[2110]: I1213 01:51:13.190491    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.190626 kubelet[2110]: I1213 01:51:13.190615    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6dd8a23607bef9720185e1110c6b4550-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-f5ec44d98c\" (UID: \"6dd8a23607bef9720185e1110c6b4550\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.190759 kubelet[2110]: I1213 01:51:13.190748    2110 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.289854 kubelet[2110]: I1213 01:51:13.289824    2110 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.290275 kubelet[2110]: E1213 01:51:13.290250    2110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.437134 env[1419]: time="2024-12-13T01:51:13.436244381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-f5ec44d98c,Uid:99638ff36f1f24075577e1c24ea618e6,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:13.441069 env[1419]: time="2024-12-13T01:51:13.440968974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-f5ec44d98c,Uid:98c6329e952b04925b1907b3bb1aad22,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:13.448155 env[1419]: time="2024-12-13T01:51:13.448123915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-f5ec44d98c,Uid:6dd8a23607bef9720185e1110c6b4550,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:13.588986 kubelet[2110]: E1213 01:51:13.588953    2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-f5ec44d98c?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="800ms"
Dec 13 01:51:13.692502 kubelet[2110]: I1213 01:51:13.692235    2110 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.692712 kubelet[2110]: E1213 01:51:13.692691    2110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:13.782722 kubelet[2110]: W1213 01:51:13.782673    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:13.782722 kubelet[2110]: E1213 01:51:13.782724    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.013138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121657290.mount: Deactivated successfully.
Dec 13 01:51:14.038957 kubelet[2110]: W1213 01:51:14.038881    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.039277 kubelet[2110]: E1213 01:51:14.038985    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.046342 env[1419]: time="2024-12-13T01:51:14.046299567Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.048855 env[1419]: time="2024-12-13T01:51:14.048821516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.059323 env[1419]: time="2024-12-13T01:51:14.059284516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.064727 env[1419]: time="2024-12-13T01:51:14.064691020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.068674 env[1419]: time="2024-12-13T01:51:14.068639396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.072679 env[1419]: time="2024-12-13T01:51:14.072647973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.075229 env[1419]: time="2024-12-13T01:51:14.075199622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.079708 env[1419]: time="2024-12-13T01:51:14.079675408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.085414 env[1419]: time="2024-12-13T01:51:14.085378417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.088636 env[1419]: time="2024-12-13T01:51:14.088604679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.100414 env[1419]: time="2024-12-13T01:51:14.100384005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.121002 env[1419]: time="2024-12-13T01:51:14.120965400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:14.191732 env[1419]: time="2024-12-13T01:51:14.191657657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:14.191732 env[1419]: time="2024-12-13T01:51:14.191696458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:14.191968 env[1419]: time="2024-12-13T01:51:14.191710158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:14.193643 env[1419]: time="2024-12-13T01:51:14.193357290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:14.193643 env[1419]: time="2024-12-13T01:51:14.193391791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:14.193643 env[1419]: time="2024-12-13T01:51:14.193404391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:14.193643 env[1419]: time="2024-12-13T01:51:14.193536693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11a5342f328fedad3e46070fd17f2ef9e96e0e4d57f25551079990ce0ec5cf5b pid=2160 runtime=io.containerd.runc.v2
Dec 13 01:51:14.193877 env[1419]: time="2024-12-13T01:51:14.193650696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/413115bd8b34841cb7789a18ec5e427f39e1730d345d4f0baafc3c4248418a07 pid=2148 runtime=io.containerd.runc.v2
Dec 13 01:51:14.194264 kubelet[2110]: W1213 01:51:14.194174    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-f5ec44d98c&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.194264 kubelet[2110]: E1213 01:51:14.194226    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-f5ec44d98c&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.214244 systemd[1]: Started cri-containerd-11a5342f328fedad3e46070fd17f2ef9e96e0e4d57f25551079990ce0ec5cf5b.scope.
Dec 13 01:51:14.231301 systemd[1]: Started cri-containerd-413115bd8b34841cb7789a18ec5e427f39e1730d345d4f0baafc3c4248418a07.scope.
Dec 13 01:51:14.239143 env[1419]: time="2024-12-13T01:51:14.236189412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:14.239143 env[1419]: time="2024-12-13T01:51:14.236235513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:14.239143 env[1419]: time="2024-12-13T01:51:14.236251513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:14.239143 env[1419]: time="2024-12-13T01:51:14.237589439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6bf9206412a0f1d37650714fe9de6d6fce6e91503d9df80f10aba26d386c668 pid=2207 runtime=io.containerd.runc.v2
Dec 13 01:51:14.260869 systemd[1]: Started cri-containerd-b6bf9206412a0f1d37650714fe9de6d6fce6e91503d9df80f10aba26d386c668.scope.
Dec 13 01:51:14.280505 kubelet[2110]: W1213 01:51:14.280334    2110 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.280505 kubelet[2110]: E1213 01:51:14.280388    2110 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused
Dec 13 01:51:14.302627 env[1419]: time="2024-12-13T01:51:14.302581487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-f5ec44d98c,Uid:99638ff36f1f24075577e1c24ea618e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"11a5342f328fedad3e46070fd17f2ef9e96e0e4d57f25551079990ce0ec5cf5b\""
Dec 13 01:51:14.309230 env[1419]: time="2024-12-13T01:51:14.309187513Z" level=info msg="CreateContainer within sandbox \"11a5342f328fedad3e46070fd17f2ef9e96e0e4d57f25551079990ce0ec5cf5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:51:14.327975 env[1419]: time="2024-12-13T01:51:14.327935573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-f5ec44d98c,Uid:6dd8a23607bef9720185e1110c6b4550,Namespace:kube-system,Attempt:0,} returns sandbox id \"413115bd8b34841cb7789a18ec5e427f39e1730d345d4f0baafc3c4248418a07\""
Dec 13 01:51:14.341103 env[1419]: time="2024-12-13T01:51:14.340827721Z" level=info msg="CreateContainer within sandbox \"413115bd8b34841cb7789a18ec5e427f39e1730d345d4f0baafc3c4248418a07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:51:14.351904 env[1419]: time="2024-12-13T01:51:14.351871033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-f5ec44d98c,Uid:98c6329e952b04925b1907b3bb1aad22,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6bf9206412a0f1d37650714fe9de6d6fce6e91503d9df80f10aba26d386c668\""
Dec 13 01:51:14.357330 env[1419]: time="2024-12-13T01:51:14.357285637Z" level=info msg="CreateContainer within sandbox \"b6bf9206412a0f1d37650714fe9de6d6fce6e91503d9df80f10aba26d386c668\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:51:14.386341 env[1419]: time="2024-12-13T01:51:14.386310594Z" level=info msg="CreateContainer within sandbox \"11a5342f328fedad3e46070fd17f2ef9e96e0e4d57f25551079990ce0ec5cf5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"995fab17345b53d990390d706972824e8a257374a21cb2c07eb92fc99a65a5fe\""
Dec 13 01:51:14.390277 kubelet[2110]: E1213 01:51:14.390247    2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-f5ec44d98c?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="1.6s"
Dec 13 01:51:14.390861 env[1419]: time="2024-12-13T01:51:14.390839381Z" level=info msg="StartContainer for \"995fab17345b53d990390d706972824e8a257374a21cb2c07eb92fc99a65a5fe\""
Dec 13 01:51:14.399545 env[1419]: time="2024-12-13T01:51:14.399511247Z" level=info msg="CreateContainer within sandbox \"413115bd8b34841cb7789a18ec5e427f39e1730d345d4f0baafc3c4248418a07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9190a66f6ddee090a0e5c29a209b09c8c44cc90e18789a37823362457142283c\""
Dec 13 01:51:14.400116 env[1419]: time="2024-12-13T01:51:14.400053758Z" level=info msg="StartContainer for \"9190a66f6ddee090a0e5c29a209b09c8c44cc90e18789a37823362457142283c\""
Dec 13 01:51:14.408883 systemd[1]: Started cri-containerd-995fab17345b53d990390d706972824e8a257374a21cb2c07eb92fc99a65a5fe.scope.
Dec 13 01:51:14.424496 env[1419]: time="2024-12-13T01:51:14.424456626Z" level=info msg="CreateContainer within sandbox \"b6bf9206412a0f1d37650714fe9de6d6fce6e91503d9df80f10aba26d386c668\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cdda9e50c1a35d3502deacfb3c0fc16883e657cc5f8698bff59c763e2e2a6c31\""
Dec 13 01:51:14.425010 env[1419]: time="2024-12-13T01:51:14.424980736Z" level=info msg="StartContainer for \"cdda9e50c1a35d3502deacfb3c0fc16883e657cc5f8698bff59c763e2e2a6c31\""
Dec 13 01:51:14.434720 systemd[1]: Started cri-containerd-9190a66f6ddee090a0e5c29a209b09c8c44cc90e18789a37823362457142283c.scope.
Dec 13 01:51:14.455442 systemd[1]: Started cri-containerd-cdda9e50c1a35d3502deacfb3c0fc16883e657cc5f8698bff59c763e2e2a6c31.scope.
Dec 13 01:51:14.496152 kubelet[2110]: I1213 01:51:14.494730    2110 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:14.496339 kubelet[2110]: E1213 01:51:14.495094    2110 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:14.506564 env[1419]: time="2024-12-13T01:51:14.506527002Z" level=info msg="StartContainer for \"995fab17345b53d990390d706972824e8a257374a21cb2c07eb92fc99a65a5fe\" returns successfully"
Dec 13 01:51:14.531394 env[1419]: time="2024-12-13T01:51:14.531306977Z" level=info msg="StartContainer for \"cdda9e50c1a35d3502deacfb3c0fc16883e657cc5f8698bff59c763e2e2a6c31\" returns successfully"
Dec 13 01:51:14.563306 env[1419]: time="2024-12-13T01:51:14.563248590Z" level=info msg="StartContainer for \"9190a66f6ddee090a0e5c29a209b09c8c44cc90e18789a37823362457142283c\" returns successfully"
Dec 13 01:51:16.098831 kubelet[2110]: I1213 01:51:16.098791    2110 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:16.500703 kubelet[2110]: E1213 01:51:16.500627    2110 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-f5ec44d98c\" not found" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:17.683596 kubelet[2110]: I1213 01:51:17.683552    2110 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:18.683408 kubelet[2110]: I1213 01:51:18.683356    2110 apiserver.go:52] "Watching apiserver"
Dec 13 01:51:18.687875 kubelet[2110]: I1213 01:51:18.687849    2110 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:51:18.796414 kubelet[2110]: W1213 01:51:18.796379    2110 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:51:19.327921 systemd[1]: Reloading.
Dec 13 01:51:19.408270 /usr/lib/systemd/system-generators/torcx-generator[2390]: time="2024-12-13T01:51:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:51:19.408309 /usr/lib/systemd/system-generators/torcx-generator[2390]: time="2024-12-13T01:51:19Z" level=info msg="torcx already run"
Dec 13 01:51:19.511503 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:51:19.511523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:51:19.527970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:51:19.637343 systemd[1]: Stopping kubelet.service...
Dec 13 01:51:19.657437 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:51:19.657656 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:19.657710 systemd[1]: kubelet.service: Consumed 1.248s CPU time.
Dec 13 01:51:19.659559 systemd[1]: Starting kubelet.service...
Dec 13 01:51:19.740442 systemd[1]: Started kubelet.service.
Dec 13 01:51:19.811102 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:51:19.811102 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:51:19.811102 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:51:19.811102 kubelet[2456]: I1213 01:51:19.811050    2456 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:51:19.821827 kubelet[2456]: I1213 01:51:19.821619    2456 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:51:19.821827 kubelet[2456]: I1213 01:51:19.821648    2456 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:51:19.823251 kubelet[2456]: I1213 01:51:19.822195    2456 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:51:19.825183 kubelet[2456]: I1213 01:51:19.824459    2456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:51:19.826747 kubelet[2456]: I1213 01:51:19.826722    2456 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:51:19.832948 kubelet[2456]: I1213 01:51:19.832933    2456 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 01:51:19.833222 kubelet[2456]: I1213 01:51:19.833209    2456 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:51:19.833419 kubelet[2456]: I1213 01:51:19.833409    2456 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:51:19.833634 kubelet[2456]: I1213 01:51:19.833623    2456 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:51:19.833690 kubelet[2456]: I1213 01:51:19.833684    2456 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:51:19.833763 kubelet[2456]: I1213 01:51:19.833757    2456 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:51:19.833876 kubelet[2456]: I1213 01:51:19.833870    2456 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:51:19.833928 kubelet[2456]: I1213 01:51:19.833923    2456 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:51:19.833985 kubelet[2456]: I1213 01:51:19.833979    2456 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:51:19.834031 kubelet[2456]: I1213 01:51:19.834026    2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:51:19.843909 kubelet[2456]: I1213 01:51:19.843891    2456 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 01:51:19.844345 kubelet[2456]: I1213 01:51:19.844316    2456 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:51:19.845506 kubelet[2456]: I1213 01:51:19.845490    2456 server.go:1256] "Started kubelet"
Dec 13 01:51:19.853628 kubelet[2456]: I1213 01:51:19.848875    2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:51:19.855705 kubelet[2456]: I1213 01:51:19.855680    2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:51:19.856429 kubelet[2456]: I1213 01:51:19.856407    2456 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:51:19.856512 kubelet[2456]: I1213 01:51:19.856475    2456 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:51:19.857740 kubelet[2456]: I1213 01:51:19.857715    2456 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:51:19.861192 kubelet[2456]: I1213 01:51:19.861175    2456 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:51:19.864175 kubelet[2456]: I1213 01:51:19.864159    2456 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:51:19.864437 kubelet[2456]: I1213 01:51:19.864425    2456 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:51:19.865760 kubelet[2456]: I1213 01:51:19.865567    2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:51:19.869028 kubelet[2456]: I1213 01:51:19.869009    2456 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:51:19.869028 kubelet[2456]: I1213 01:51:19.869028    2456 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:51:19.873620 kubelet[2456]: I1213 01:51:19.873593    2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:51:19.874528 kubelet[2456]: I1213 01:51:19.874506    2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:51:19.874528 kubelet[2456]: I1213 01:51:19.874531    2456 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:51:19.874659 kubelet[2456]: I1213 01:51:19.874548    2456 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:51:19.874659 kubelet[2456]: E1213 01:51:19.874599    2456 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:51:19.926254 kubelet[2456]: I1213 01:51:19.924998    2456 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:51:19.926254 kubelet[2456]: I1213 01:51:19.925020    2456 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:51:19.926254 kubelet[2456]: I1213 01:51:19.925039    2456 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:51:19.926254 kubelet[2456]: I1213 01:51:19.925218    2456 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:51:19.926254 kubelet[2456]: I1213 01:51:19.925244    2456 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:51:19.926254 kubelet[2456]: I1213 01:51:19.925254    2456 policy_none.go:49] "None policy: Start"
Dec 13 01:51:19.926652 kubelet[2456]: I1213 01:51:19.926629    2456 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:51:19.926728 kubelet[2456]: I1213 01:51:19.926659    2456 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:51:19.926856 kubelet[2456]: I1213 01:51:19.926836    2456 state_mem.go:75] "Updated machine memory state"
Dec 13 01:51:19.930647 kubelet[2456]: I1213 01:51:19.930627    2456 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:51:19.930875 kubelet[2456]: I1213 01:51:19.930860    2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:51:19.964183 kubelet[2456]: I1213 01:51:19.964150    2456 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:19.975022 kubelet[2456]: I1213 01:51:19.974743    2456 topology_manager.go:215] "Topology Admit Handler" podUID="99638ff36f1f24075577e1c24ea618e6" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:19.975333 kubelet[2456]: I1213 01:51:19.975315    2456 topology_manager.go:215] "Topology Admit Handler" podUID="98c6329e952b04925b1907b3bb1aad22" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:19.975727 kubelet[2456]: I1213 01:51:19.975708    2456 topology_manager.go:215] "Topology Admit Handler" podUID="6dd8a23607bef9720185e1110c6b4550" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:19.979746 kubelet[2456]: I1213 01:51:19.979718    2456 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:19.979942 kubelet[2456]: I1213 01:51:19.979905    2456 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:19.983185 kubelet[2456]: W1213 01:51:19.983165    2456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:51:19.986717 kubelet[2456]: W1213 01:51:19.986698    2456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:51:19.987602 kubelet[2456]: W1213 01:51:19.987582    2456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:51:19.987701 kubelet[2456]: E1213 01:51:19.987646    2456 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.066962 kubelet[2456]: I1213 01:51:20.066913    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067182 kubelet[2456]: I1213 01:51:20.066975    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067182 kubelet[2456]: I1213 01:51:20.067010    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6dd8a23607bef9720185e1110c6b4550-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-f5ec44d98c\" (UID: \"6dd8a23607bef9720185e1110c6b4550\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067182 kubelet[2456]: I1213 01:51:20.067039    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067182 kubelet[2456]: I1213 01:51:20.067070    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99638ff36f1f24075577e1c24ea618e6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" (UID: \"99638ff36f1f24075577e1c24ea618e6\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067182 kubelet[2456]: I1213 01:51:20.067115    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99638ff36f1f24075577e1c24ea618e6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" (UID: \"99638ff36f1f24075577e1c24ea618e6\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067450 kubelet[2456]: I1213 01:51:20.067147    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067450 kubelet[2456]: I1213 01:51:20.067180    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98c6329e952b04925b1907b3bb1aad22-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-f5ec44d98c\" (UID: \"98c6329e952b04925b1907b3bb1aad22\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.067450 kubelet[2456]: I1213 01:51:20.067214    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99638ff36f1f24075577e1c24ea618e6-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" (UID: \"99638ff36f1f24075577e1c24ea618e6\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.291436 sudo[2486]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 01:51:20.291801 sudo[2486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 01:51:20.809292 sudo[2486]: pam_unix(sudo:session): session closed for user root
Dec 13 01:51:20.843171 kubelet[2456]: I1213 01:51:20.843134    2456 apiserver.go:52] "Watching apiserver"
Dec 13 01:51:20.864785 kubelet[2456]: I1213 01:51:20.864753    2456 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:51:20.916153 kubelet[2456]: W1213 01:51:20.916128    2456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:51:20.916395 kubelet[2456]: E1213 01:51:20.916375    2456 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-f5ec44d98c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c"
Dec 13 01:51:20.940646 kubelet[2456]: I1213 01:51:20.940609    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-f5ec44d98c" podStartSLOduration=2.940541332 podStartE2EDuration="2.940541332s" podCreationTimestamp="2024-12-13 01:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:51:20.940314629 +0000 UTC m=+1.189549613" watchObservedRunningTime="2024-12-13 01:51:20.940541332 +0000 UTC m=+1.189776416"
Dec 13 01:51:20.988566 kubelet[2456]: I1213 01:51:20.988527    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-f5ec44d98c" podStartSLOduration=1.988471726 podStartE2EDuration="1.988471726s" podCreationTimestamp="2024-12-13 01:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:51:20.956017289 +0000 UTC m=+1.205252373" watchObservedRunningTime="2024-12-13 01:51:20.988471726 +0000 UTC m=+1.237706810"
Dec 13 01:51:21.018591 kubelet[2456]: I1213 01:51:21.018553    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-f5ec44d98c" podStartSLOduration=2.018498717 podStartE2EDuration="2.018498717s" podCreationTimestamp="2024-12-13 01:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:51:20.991656779 +0000 UTC m=+1.240891763" watchObservedRunningTime="2024-12-13 01:51:21.018498717 +0000 UTC m=+1.267733801"
Dec 13 01:51:22.260932 sudo[1774]: pam_unix(sudo:session): session closed for user root
Dec 13 01:51:22.362727 sshd[1770]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:22.365876 systemd[1]: sshd@4-10.200.8.24:22-10.200.16.10:54468.service: Deactivated successfully.
Dec 13 01:51:22.366763 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:51:22.366958 systemd[1]: session-7.scope: Consumed 4.273s CPU time.
Dec 13 01:51:22.367525 systemd-logind[1402]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:51:22.368474 systemd-logind[1402]: Removed session 7.
Dec 13 01:51:33.354583 kubelet[2456]: I1213 01:51:33.354541    2456 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:51:33.355277 env[1419]: time="2024-12-13T01:51:33.355239175Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:51:33.355605 kubelet[2456]: I1213 01:51:33.355448    2456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:51:34.093739 kubelet[2456]: I1213 01:51:34.093703    2456 topology_manager.go:215] "Topology Admit Handler" podUID="055690f1-a6c8-4a18-9b7d-7e8c8959389d" podNamespace="kube-system" podName="kube-proxy-vcvz7"
Dec 13 01:51:34.101596 kubelet[2456]: I1213 01:51:34.100827    2456 topology_manager.go:215] "Topology Admit Handler" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" podNamespace="kube-system" podName="cilium-6n92g"
Dec 13 01:51:34.103599 systemd[1]: Created slice kubepods-besteffort-pod055690f1_a6c8_4a18_9b7d_7e8c8959389d.slice.
Dec 13 01:51:34.112862 systemd[1]: Created slice kubepods-burstable-podbee5c91c_580c_4889_a126_e8e34b3d1c28.slice.
Dec 13 01:51:34.153694 kubelet[2456]: I1213 01:51:34.153650    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-xtables-lock\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.153694 kubelet[2456]: I1213 01:51:34.153699    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldph5\" (UniqueName: \"kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-kube-api-access-ldph5\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.153935 kubelet[2456]: I1213 01:51:34.153738    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-bpf-maps\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.153935 kubelet[2456]: I1213 01:51:34.153760    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-hostproc\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.153935 kubelet[2456]: I1213 01:51:34.153796    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlhp\" (UniqueName: \"kubernetes.io/projected/055690f1-a6c8-4a18-9b7d-7e8c8959389d-kube-api-access-zwlhp\") pod \"kube-proxy-vcvz7\" (UID: \"055690f1-a6c8-4a18-9b7d-7e8c8959389d\") " pod="kube-system/kube-proxy-vcvz7"
Dec 13 01:51:34.153935 kubelet[2456]: I1213 01:51:34.153820    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-lib-modules\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.153935 kubelet[2456]: I1213 01:51:34.153848    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/055690f1-a6c8-4a18-9b7d-7e8c8959389d-lib-modules\") pod \"kube-proxy-vcvz7\" (UID: \"055690f1-a6c8-4a18-9b7d-7e8c8959389d\") " pod="kube-system/kube-proxy-vcvz7"
Dec 13 01:51:34.153935 kubelet[2456]: I1213 01:51:34.153884    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/055690f1-a6c8-4a18-9b7d-7e8c8959389d-xtables-lock\") pod \"kube-proxy-vcvz7\" (UID: \"055690f1-a6c8-4a18-9b7d-7e8c8959389d\") " pod="kube-system/kube-proxy-vcvz7"
Dec 13 01:51:34.154212 kubelet[2456]: I1213 01:51:34.153915    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-cgroup\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154212 kubelet[2456]: I1213 01:51:34.153963    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-config-path\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154212 kubelet[2456]: I1213 01:51:34.153995    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-hubble-tls\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154212 kubelet[2456]: I1213 01:51:34.154036    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-run\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154212 kubelet[2456]: I1213 01:51:34.154066    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-etc-cni-netd\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154212 kubelet[2456]: I1213 01:51:34.154110    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-net\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154453 kubelet[2456]: I1213 01:51:34.154144    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/055690f1-a6c8-4a18-9b7d-7e8c8959389d-kube-proxy\") pod \"kube-proxy-vcvz7\" (UID: \"055690f1-a6c8-4a18-9b7d-7e8c8959389d\") " pod="kube-system/kube-proxy-vcvz7"
Dec 13 01:51:34.154453 kubelet[2456]: I1213 01:51:34.154187    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-kernel\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154453 kubelet[2456]: I1213 01:51:34.154216    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cni-path\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.154453 kubelet[2456]: I1213 01:51:34.154272    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bee5c91c-580c-4889-a126-e8e34b3d1c28-clustermesh-secrets\") pod \"cilium-6n92g\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") " pod="kube-system/cilium-6n92g"
Dec 13 01:51:34.402243 kubelet[2456]: I1213 01:51:34.402135    2456 topology_manager.go:215] "Topology Admit Handler" podUID="a306b0d9-452f-4990-a53a-471e29586f86" podNamespace="kube-system" podName="cilium-operator-5cc964979-r2r97"
Dec 13 01:51:34.410356 systemd[1]: Created slice kubepods-besteffort-poda306b0d9_452f_4990_a53a_471e29586f86.slice.
Dec 13 01:51:34.414910 env[1419]: time="2024-12-13T01:51:34.414369835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcvz7,Uid:055690f1-a6c8-4a18-9b7d-7e8c8959389d,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:34.416599 env[1419]: time="2024-12-13T01:51:34.416301559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6n92g,Uid:bee5c91c-580c-4889-a126-e8e34b3d1c28,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:34.489139 env[1419]: time="2024-12-13T01:51:34.486835411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:34.489139 env[1419]: time="2024-12-13T01:51:34.486876311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:34.489139 env[1419]: time="2024-12-13T01:51:34.486892311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:34.489139 env[1419]: time="2024-12-13T01:51:34.487024813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405 pid=2560 runtime=io.containerd.runc.v2
Dec 13 01:51:34.489912 env[1419]: time="2024-12-13T01:51:34.488576932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:34.489912 env[1419]: time="2024-12-13T01:51:34.488615232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:34.489912 env[1419]: time="2024-12-13T01:51:34.488628632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:34.489912 env[1419]: time="2024-12-13T01:51:34.488918536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25c4758393e6ccfa919a9cc637770ede91409e394044896a408033529db3fab9 pid=2570 runtime=io.containerd.runc.v2
Dec 13 01:51:34.509099 systemd[1]: Started cri-containerd-25c4758393e6ccfa919a9cc637770ede91409e394044896a408033529db3fab9.scope.
Dec 13 01:51:34.516451 systemd[1]: Started cri-containerd-e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405.scope.
Dec 13 01:51:34.548916 env[1419]: time="2024-12-13T01:51:34.548873460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6n92g,Uid:bee5c91c-580c-4889-a126-e8e34b3d1c28,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\""
Dec 13 01:51:34.552134 env[1419]: time="2024-12-13T01:51:34.552098699Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:51:34.555714 kubelet[2456]: I1213 01:51:34.555586    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a306b0d9-452f-4990-a53a-471e29586f86-cilium-config-path\") pod \"cilium-operator-5cc964979-r2r97\" (UID: \"a306b0d9-452f-4990-a53a-471e29586f86\") " pod="kube-system/cilium-operator-5cc964979-r2r97"
Dec 13 01:51:34.555714 kubelet[2456]: I1213 01:51:34.555619    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9njhs\" (UniqueName: \"kubernetes.io/projected/a306b0d9-452f-4990-a53a-471e29586f86-kube-api-access-9njhs\") pod \"cilium-operator-5cc964979-r2r97\" (UID: \"a306b0d9-452f-4990-a53a-471e29586f86\") " pod="kube-system/cilium-operator-5cc964979-r2r97"
Dec 13 01:51:34.564764 env[1419]: time="2024-12-13T01:51:34.564726751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcvz7,Uid:055690f1-a6c8-4a18-9b7d-7e8c8959389d,Namespace:kube-system,Attempt:0,} returns sandbox id \"25c4758393e6ccfa919a9cc637770ede91409e394044896a408033529db3fab9\""
Dec 13 01:51:34.568297 env[1419]: time="2024-12-13T01:51:34.568260294Z" level=info msg="CreateContainer within sandbox \"25c4758393e6ccfa919a9cc637770ede91409e394044896a408033529db3fab9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:51:34.614217 env[1419]: time="2024-12-13T01:51:34.614150348Z" level=info msg="CreateContainer within sandbox \"25c4758393e6ccfa919a9cc637770ede91409e394044896a408033529db3fab9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"991c895d8e158d1e8c4a628a87647e4aa3db6b473419ab51491f6ad9c89a07ce\""
Dec 13 01:51:34.616289 env[1419]: time="2024-12-13T01:51:34.614779756Z" level=info msg="StartContainer for \"991c895d8e158d1e8c4a628a87647e4aa3db6b473419ab51491f6ad9c89a07ce\""
Dec 13 01:51:34.633430 systemd[1]: Started cri-containerd-991c895d8e158d1e8c4a628a87647e4aa3db6b473419ab51491f6ad9c89a07ce.scope.
Dec 13 01:51:34.674489 env[1419]: time="2024-12-13T01:51:34.674385076Z" level=info msg="StartContainer for \"991c895d8e158d1e8c4a628a87647e4aa3db6b473419ab51491f6ad9c89a07ce\" returns successfully"
Dec 13 01:51:34.716809 env[1419]: time="2024-12-13T01:51:34.716768588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r2r97,Uid:a306b0d9-452f-4990-a53a-471e29586f86,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:34.768051 env[1419]: time="2024-12-13T01:51:34.767894906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:34.768280 env[1419]: time="2024-12-13T01:51:34.767937406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:34.768280 env[1419]: time="2024-12-13T01:51:34.767961606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:34.768280 env[1419]: time="2024-12-13T01:51:34.768208309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c pid=2706 runtime=io.containerd.runc.v2
Dec 13 01:51:34.780962 systemd[1]: Started cri-containerd-423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c.scope.
Dec 13 01:51:34.826188 env[1419]: time="2024-12-13T01:51:34.825186198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r2r97,Uid:a306b0d9-452f-4990-a53a-471e29586f86,Namespace:kube-system,Attempt:0,} returns sandbox id \"423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c\""
Dec 13 01:51:34.941781 kubelet[2456]: I1213 01:51:34.940827    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vcvz7" podStartSLOduration=0.940769994 podStartE2EDuration="940.769994ms" podCreationTimestamp="2024-12-13 01:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:51:34.940509791 +0000 UTC m=+15.189744875" watchObservedRunningTime="2024-12-13 01:51:34.940769994 +0000 UTC m=+15.190005078"
Dec 13 01:51:40.608547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151359471.mount: Deactivated successfully.
Dec 13 01:51:43.394026 env[1419]: time="2024-12-13T01:51:43.393979615Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:43.401052 env[1419]: time="2024-12-13T01:51:43.401010486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:43.405345 env[1419]: time="2024-12-13T01:51:43.405249429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:43.406221 env[1419]: time="2024-12-13T01:51:43.406187239Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 01:51:43.408496 env[1419]: time="2024-12-13T01:51:43.408462162Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 01:51:43.409380 env[1419]: time="2024-12-13T01:51:43.409350571Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:51:43.438608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335031671.mount: Deactivated successfully.
Dec 13 01:51:43.446607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021569419.mount: Deactivated successfully.
Dec 13 01:51:43.454561 env[1419]: time="2024-12-13T01:51:43.454524026Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\""
Dec 13 01:51:43.455295 env[1419]: time="2024-12-13T01:51:43.455268734Z" level=info msg="StartContainer for \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\""
Dec 13 01:51:43.473507 systemd[1]: Started cri-containerd-19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae.scope.
Dec 13 01:51:43.505018 env[1419]: time="2024-12-13T01:51:43.504946835Z" level=info msg="StartContainer for \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\" returns successfully"
Dec 13 01:51:43.511650 systemd[1]: cri-containerd-19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae.scope: Deactivated successfully.
Dec 13 01:51:44.435603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae-rootfs.mount: Deactivated successfully.
Dec 13 01:51:47.197507 env[1419]: time="2024-12-13T01:51:47.197443723Z" level=info msg="shim disconnected" id=19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae
Dec 13 01:51:47.198040 env[1419]: time="2024-12-13T01:51:47.198002429Z" level=warning msg="cleaning up after shim disconnected" id=19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae namespace=k8s.io
Dec 13 01:51:47.198040 env[1419]: time="2024-12-13T01:51:47.198032629Z" level=info msg="cleaning up dead shim"
Dec 13 01:51:47.208421 env[1419]: time="2024-12-13T01:51:47.208373426Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:51:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2886 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T01:51:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Dec 13 01:51:47.859315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336697300.mount: Deactivated successfully.
Dec 13 01:51:47.986041 env[1419]: time="2024-12-13T01:51:47.985991309Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:51:48.035158 env[1419]: time="2024-12-13T01:51:48.035114263Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\""
Dec 13 01:51:48.041168 env[1419]: time="2024-12-13T01:51:48.041129618Z" level=info msg="StartContainer for \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\""
Dec 13 01:51:48.067948 systemd[1]: Started cri-containerd-6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed.scope.
Dec 13 01:51:48.115500 env[1419]: time="2024-12-13T01:51:48.114308792Z" level=info msg="StartContainer for \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\" returns successfully"
Dec 13 01:51:48.125257 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:51:48.125563 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:51:48.125731 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 01:51:48.128586 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:51:48.131449 systemd[1]: cri-containerd-6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed.scope: Deactivated successfully.
Dec 13 01:51:48.142235 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:51:48.246867 env[1419]: time="2024-12-13T01:51:48.246813211Z" level=info msg="shim disconnected" id=6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed
Dec 13 01:51:48.247424 env[1419]: time="2024-12-13T01:51:48.247398416Z" level=warning msg="cleaning up after shim disconnected" id=6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed namespace=k8s.io
Dec 13 01:51:48.247516 env[1419]: time="2024-12-13T01:51:48.247499617Z" level=info msg="cleaning up dead shim"
Dec 13 01:51:48.257330 env[1419]: time="2024-12-13T01:51:48.257295807Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:51:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2950 runtime=io.containerd.runc.v2\n"
Dec 13 01:51:48.895272 env[1419]: time="2024-12-13T01:51:48.895219276Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:48.901890 env[1419]: time="2024-12-13T01:51:48.901854937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:48.904951 env[1419]: time="2024-12-13T01:51:48.904916865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:51:48.905465 env[1419]: time="2024-12-13T01:51:48.905434470Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:51:48.908880 env[1419]: time="2024-12-13T01:51:48.908831801Z" level=info msg="CreateContainer within sandbox \"423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:51:48.941370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003146893.mount: Deactivated successfully.
Dec 13 01:51:48.950296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096015736.mount: Deactivated successfully.
Dec 13 01:51:48.956698 env[1419]: time="2024-12-13T01:51:48.956656541Z" level=info msg="CreateContainer within sandbox \"423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\""
Dec 13 01:51:48.958862 env[1419]: time="2024-12-13T01:51:48.957369847Z" level=info msg="StartContainer for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\""
Dec 13 01:51:48.975390 systemd[1]: Started cri-containerd-290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad.scope.
Dec 13 01:51:48.987882 env[1419]: time="2024-12-13T01:51:48.987838528Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:51:49.027266 env[1419]: time="2024-12-13T01:51:49.027213386Z" level=info msg="StartContainer for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" returns successfully"
Dec 13 01:51:49.045169 env[1419]: time="2024-12-13T01:51:49.045127648Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\""
Dec 13 01:51:49.045922 env[1419]: time="2024-12-13T01:51:49.045890255Z" level=info msg="StartContainer for \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\""
Dec 13 01:51:49.072201 systemd[1]: Started cri-containerd-55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417.scope.
Dec 13 01:51:49.109712 systemd[1]: cri-containerd-55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417.scope: Deactivated successfully.
Dec 13 01:51:49.111779 env[1419]: time="2024-12-13T01:51:49.111742950Z" level=info msg="StartContainer for \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\" returns successfully"
Dec 13 01:51:49.614393 env[1419]: time="2024-12-13T01:51:49.612688478Z" level=info msg="shim disconnected" id=55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417
Dec 13 01:51:49.614393 env[1419]: time="2024-12-13T01:51:49.612796979Z" level=warning msg="cleaning up after shim disconnected" id=55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417 namespace=k8s.io
Dec 13 01:51:49.614393 env[1419]: time="2024-12-13T01:51:49.612809079Z" level=info msg="cleaning up dead shim"
Dec 13 01:51:49.632656 env[1419]: time="2024-12-13T01:51:49.632614458Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:51:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\n"
Dec 13 01:51:50.000522 env[1419]: time="2024-12-13T01:51:49.997417755Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:51:50.041323 env[1419]: time="2024-12-13T01:51:50.041279045Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\""
Dec 13 01:51:50.041977 env[1419]: time="2024-12-13T01:51:50.041945651Z" level=info msg="StartContainer for \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\""
Dec 13 01:51:50.069515 systemd[1]: Started cri-containerd-16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f.scope.
Dec 13 01:51:50.097643 systemd[1]: cri-containerd-16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f.scope: Deactivated successfully.
Dec 13 01:51:50.104417 env[1419]: time="2024-12-13T01:51:50.104317505Z" level=info msg="StartContainer for \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\" returns successfully"
Dec 13 01:51:50.134453 env[1419]: time="2024-12-13T01:51:50.134404473Z" level=info msg="shim disconnected" id=16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f
Dec 13 01:51:50.134649 env[1419]: time="2024-12-13T01:51:50.134456173Z" level=warning msg="cleaning up after shim disconnected" id=16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f namespace=k8s.io
Dec 13 01:51:50.134649 env[1419]: time="2024-12-13T01:51:50.134469273Z" level=info msg="cleaning up dead shim"
Dec 13 01:51:50.141679 env[1419]: time="2024-12-13T01:51:50.141646037Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:51:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3107 runtime=io.containerd.runc.v2\n"
Dec 13 01:51:50.855394 systemd[1]: run-containerd-runc-k8s.io-16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f-runc.i4MvSP.mount: Deactivated successfully.
Dec 13 01:51:50.855522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f-rootfs.mount: Deactivated successfully.
Dec 13 01:51:51.003995 env[1419]: time="2024-12-13T01:51:51.003950696Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:51:51.022077 kubelet[2456]: I1213 01:51:51.022047    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-r2r97" podStartSLOduration=2.943244501 podStartE2EDuration="17.021990554s" podCreationTimestamp="2024-12-13 01:51:34 +0000 UTC" firstStartedPulling="2024-12-13 01:51:34.826964919 +0000 UTC m=+15.076199903" lastFinishedPulling="2024-12-13 01:51:48.905710972 +0000 UTC m=+29.154945956" observedRunningTime="2024-12-13 01:51:50.02266738 +0000 UTC m=+30.271902364" watchObservedRunningTime="2024-12-13 01:51:51.021990554 +0000 UTC m=+31.271225538"
Dec 13 01:51:51.053093 env[1419]: time="2024-12-13T01:51:51.053050525Z" level=info msg="CreateContainer within sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\""
Dec 13 01:51:51.054632 env[1419]: time="2024-12-13T01:51:51.053542729Z" level=info msg="StartContainer for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\""
Dec 13 01:51:51.074424 systemd[1]: Started cri-containerd-bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073.scope.
Dec 13 01:51:51.111138 env[1419]: time="2024-12-13T01:51:51.111016231Z" level=info msg="StartContainer for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" returns successfully"
Dec 13 01:51:51.249269 kubelet[2456]: I1213 01:51:51.249234    2456 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:51:51.288254 kubelet[2456]: I1213 01:51:51.288215    2456 topology_manager.go:215] "Topology Admit Handler" podUID="4d1b860d-c327-4c26-8074-20afa211288a" podNamespace="kube-system" podName="coredns-76f75df574-ndmnf"
Dec 13 01:51:51.295253 systemd[1]: Created slice kubepods-burstable-pod4d1b860d_c327_4c26_8074_20afa211288a.slice.
Dec 13 01:51:51.304357 kubelet[2456]: I1213 01:51:51.304333    2456 topology_manager.go:215] "Topology Admit Handler" podUID="b20223a8-b0f2-4b26-9731-eed903a047ca" podNamespace="kube-system" podName="coredns-76f75df574-8gxvk"
Dec 13 01:51:51.310187 systemd[1]: Created slice kubepods-burstable-podb20223a8_b0f2_4b26_9731_eed903a047ca.slice.
Dec 13 01:51:51.317681 kubelet[2456]: W1213 01:51:51.317654    2456 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.6-a-f5ec44d98c" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-f5ec44d98c' and this object
Dec 13 01:51:51.317806 kubelet[2456]: E1213 01:51:51.317692    2456 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.6-a-f5ec44d98c" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-f5ec44d98c' and this object
Dec 13 01:51:51.479333 kubelet[2456]: I1213 01:51:51.479219    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzbx\" (UniqueName: \"kubernetes.io/projected/4d1b860d-c327-4c26-8074-20afa211288a-kube-api-access-5qzbx\") pod \"coredns-76f75df574-ndmnf\" (UID: \"4d1b860d-c327-4c26-8074-20afa211288a\") " pod="kube-system/coredns-76f75df574-ndmnf"
Dec 13 01:51:51.479333 kubelet[2456]: I1213 01:51:51.479269    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqdb\" (UniqueName: \"kubernetes.io/projected/b20223a8-b0f2-4b26-9731-eed903a047ca-kube-api-access-nbqdb\") pod \"coredns-76f75df574-8gxvk\" (UID: \"b20223a8-b0f2-4b26-9731-eed903a047ca\") " pod="kube-system/coredns-76f75df574-8gxvk"
Dec 13 01:51:51.479333 kubelet[2456]: I1213 01:51:51.479301    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d1b860d-c327-4c26-8074-20afa211288a-config-volume\") pod \"coredns-76f75df574-ndmnf\" (UID: \"4d1b860d-c327-4c26-8074-20afa211288a\") " pod="kube-system/coredns-76f75df574-ndmnf"
Dec 13 01:51:51.479333 kubelet[2456]: I1213 01:51:51.479328    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b20223a8-b0f2-4b26-9731-eed903a047ca-config-volume\") pod \"coredns-76f75df574-8gxvk\" (UID: \"b20223a8-b0f2-4b26-9731-eed903a047ca\") " pod="kube-system/coredns-76f75df574-8gxvk"
Dec 13 01:51:52.022042 kubelet[2456]: I1213 01:51:52.022007    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6n92g" podStartSLOduration=9.166785831 podStartE2EDuration="18.021960183s" podCreationTimestamp="2024-12-13 01:51:34 +0000 UTC" firstStartedPulling="2024-12-13 01:51:34.551555392 +0000 UTC m=+14.800790476" lastFinishedPulling="2024-12-13 01:51:43.406729744 +0000 UTC m=+23.655964828" observedRunningTime="2024-12-13 01:51:52.021803181 +0000 UTC m=+32.271038165" watchObservedRunningTime="2024-12-13 01:51:52.021960183 +0000 UTC m=+32.271195167"
Dec 13 01:51:52.502014 env[1419]: time="2024-12-13T01:51:52.501965704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ndmnf,Uid:4d1b860d-c327-4c26-8074-20afa211288a,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:52.516527 env[1419]: time="2024-12-13T01:51:52.516492629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8gxvk,Uid:b20223a8-b0f2-4b26-9731-eed903a047ca,Namespace:kube-system,Attempt:0,}"
Dec 13 01:51:53.018889 systemd-networkd[1562]: cilium_host: Link UP
Dec 13 01:51:53.019004 systemd-networkd[1562]: cilium_net: Link UP
Dec 13 01:51:53.019007 systemd-networkd[1562]: cilium_net: Gained carrier
Dec 13 01:51:53.019152 systemd-networkd[1562]: cilium_host: Gained carrier
Dec 13 01:51:53.022112 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 01:51:53.023709 systemd-networkd[1562]: cilium_host: Gained IPv6LL
Dec 13 01:51:53.186699 systemd-networkd[1562]: cilium_vxlan: Link UP
Dec 13 01:51:53.186708 systemd-networkd[1562]: cilium_vxlan: Gained carrier
Dec 13 01:51:53.435122 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:51:53.877471 systemd-networkd[1562]: cilium_net: Gained IPv6LL
Dec 13 01:51:54.175254 systemd-networkd[1562]: lxc_health: Link UP
Dec 13 01:51:54.185807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:51:54.185462 systemd-networkd[1562]: lxc_health: Gained carrier
Dec 13 01:51:54.582804 systemd-networkd[1562]: lxca5414025e226: Link UP
Dec 13 01:51:54.591105 kernel: eth0: renamed from tmpad926
Dec 13 01:51:54.602514 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca5414025e226: link becomes ready
Dec 13 01:51:54.601556 systemd-networkd[1562]: lxca5414025e226: Gained carrier
Dec 13 01:51:54.622506 systemd-networkd[1562]: lxcff5ad0cadb1c: Link UP
Dec 13 01:51:54.632123 kernel: eth0: renamed from tmpd4275
Dec 13 01:51:54.640101 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcff5ad0cadb1c: link becomes ready
Dec 13 01:51:54.640296 systemd-networkd[1562]: lxcff5ad0cadb1c: Gained carrier
Dec 13 01:51:55.093227 systemd-networkd[1562]: cilium_vxlan: Gained IPv6LL
Dec 13 01:51:55.797232 systemd-networkd[1562]: lxca5414025e226: Gained IPv6LL
Dec 13 01:51:56.053219 systemd-networkd[1562]: lxc_health: Gained IPv6LL
Dec 13 01:51:56.439410 systemd-networkd[1562]: lxcff5ad0cadb1c: Gained IPv6LL
Dec 13 01:51:58.258161 env[1419]: time="2024-12-13T01:51:58.257654484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:58.258161 env[1419]: time="2024-12-13T01:51:58.257705684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:58.258161 env[1419]: time="2024-12-13T01:51:58.257721584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:58.258161 env[1419]: time="2024-12-13T01:51:58.257868285Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d42751f692ea96a68d00d438b3bac38427e1fdc8c38213ab2e73694161220bbd pid=3650 runtime=io.containerd.runc.v2
Dec 13 01:51:58.273421 env[1419]: time="2024-12-13T01:51:58.273338706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:51:58.273584 env[1419]: time="2024-12-13T01:51:58.273424507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:51:58.273584 env[1419]: time="2024-12-13T01:51:58.273455407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:51:58.273686 env[1419]: time="2024-12-13T01:51:58.273602008Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad926578e27d8bfd52059bd29b9c6c09c54f6e8dfbf201a3258081eada462269 pid=3668 runtime=io.containerd.runc.v2
Dec 13 01:51:58.306566 systemd[1]: run-containerd-runc-k8s.io-d42751f692ea96a68d00d438b3bac38427e1fdc8c38213ab2e73694161220bbd-runc.MK0HD0.mount: Deactivated successfully.
Dec 13 01:51:58.311978 systemd[1]: Started cri-containerd-d42751f692ea96a68d00d438b3bac38427e1fdc8c38213ab2e73694161220bbd.scope.
Dec 13 01:51:58.324986 systemd[1]: Started cri-containerd-ad926578e27d8bfd52059bd29b9c6c09c54f6e8dfbf201a3258081eada462269.scope.
Dec 13 01:51:58.408989 env[1419]: time="2024-12-13T01:51:58.408947064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8gxvk,Uid:b20223a8-b0f2-4b26-9731-eed903a047ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"d42751f692ea96a68d00d438b3bac38427e1fdc8c38213ab2e73694161220bbd\""
Dec 13 01:51:58.413465 env[1419]: time="2024-12-13T01:51:58.413413698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ndmnf,Uid:4d1b860d-c327-4c26-8074-20afa211288a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad926578e27d8bfd52059bd29b9c6c09c54f6e8dfbf201a3258081eada462269\""
Dec 13 01:51:58.423390 env[1419]: time="2024-12-13T01:51:58.423356276Z" level=info msg="CreateContainer within sandbox \"d42751f692ea96a68d00d438b3bac38427e1fdc8c38213ab2e73694161220bbd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:51:58.435443 env[1419]: time="2024-12-13T01:51:58.435408070Z" level=info msg="CreateContainer within sandbox \"ad926578e27d8bfd52059bd29b9c6c09c54f6e8dfbf201a3258081eada462269\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:51:58.483887 env[1419]: time="2024-12-13T01:51:58.483840948Z" level=info msg="CreateContainer within sandbox \"d42751f692ea96a68d00d438b3bac38427e1fdc8c38213ab2e73694161220bbd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fe0aa1cd67ee55105439232a63f750c9eb68c0f1d3f54fefa1927c9cc7aeda7\""
Dec 13 01:51:58.484638 env[1419]: time="2024-12-13T01:51:58.484601754Z" level=info msg="StartContainer for \"3fe0aa1cd67ee55105439232a63f750c9eb68c0f1d3f54fefa1927c9cc7aeda7\""
Dec 13 01:51:58.493651 env[1419]: time="2024-12-13T01:51:58.493609824Z" level=info msg="CreateContainer within sandbox \"ad926578e27d8bfd52059bd29b9c6c09c54f6e8dfbf201a3258081eada462269\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c6d7d33e043b53064373558cb1f285b2ce8bba3c22e30496a959da5b7fc1246\""
Dec 13 01:51:58.494353 env[1419]: time="2024-12-13T01:51:58.494322330Z" level=info msg="StartContainer for \"0c6d7d33e043b53064373558cb1f285b2ce8bba3c22e30496a959da5b7fc1246\""
Dec 13 01:51:58.506555 systemd[1]: Started cri-containerd-3fe0aa1cd67ee55105439232a63f750c9eb68c0f1d3f54fefa1927c9cc7aeda7.scope.
Dec 13 01:51:58.526068 systemd[1]: Started cri-containerd-0c6d7d33e043b53064373558cb1f285b2ce8bba3c22e30496a959da5b7fc1246.scope.
Dec 13 01:51:58.556885 env[1419]: time="2024-12-13T01:51:58.556836517Z" level=info msg="StartContainer for \"3fe0aa1cd67ee55105439232a63f750c9eb68c0f1d3f54fefa1927c9cc7aeda7\" returns successfully"
Dec 13 01:51:58.574190 env[1419]: time="2024-12-13T01:51:58.574134652Z" level=info msg="StartContainer for \"0c6d7d33e043b53064373558cb1f285b2ce8bba3c22e30496a959da5b7fc1246\" returns successfully"
Dec 13 01:51:59.041610 kubelet[2456]: I1213 01:51:59.041574    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8gxvk" podStartSLOduration=25.041528593 podStartE2EDuration="25.041528593s" podCreationTimestamp="2024-12-13 01:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:51:59.03978738 +0000 UTC m=+39.289022464" watchObservedRunningTime="2024-12-13 01:51:59.041528593 +0000 UTC m=+39.290763677"
Dec 13 01:51:59.109414 kubelet[2456]: I1213 01:51:59.109376    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ndmnf" podStartSLOduration=25.109325714 podStartE2EDuration="25.109325714s" podCreationTimestamp="2024-12-13 01:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:51:59.078326176 +0000 UTC m=+39.327561160" watchObservedRunningTime="2024-12-13 01:51:59.109325714 +0000 UTC m=+39.358560798"
Dec 13 01:53:47.524486 systemd[1]: Started sshd@5-10.200.8.24:22-10.200.16.10:57584.service.
Dec 13 01:53:48.150975 sshd[3824]: Accepted publickey for core from 10.200.16.10 port 57584 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:48.152513 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:48.157397 systemd[1]: Started session-8.scope.
Dec 13 01:53:48.157858 systemd-logind[1402]: New session 8 of user core.
Dec 13 01:53:48.660320 sshd[3824]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:48.663191 systemd[1]: sshd@5-10.200.8.24:22-10.200.16.10:57584.service: Deactivated successfully.
Dec 13 01:53:48.664060 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:53:48.664964 systemd-logind[1402]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:53:48.665766 systemd-logind[1402]: Removed session 8.
Dec 13 01:53:53.765808 systemd[1]: Started sshd@6-10.200.8.24:22-10.200.16.10:47666.service.
Dec 13 01:53:54.391587 sshd[3837]: Accepted publickey for core from 10.200.16.10 port 47666 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:53:54.393006 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:53:54.398016 systemd[1]: Started session-9.scope.
Dec 13 01:53:54.398776 systemd-logind[1402]: New session 9 of user core.
Dec 13 01:53:54.888243 sshd[3837]: pam_unix(sshd:session): session closed for user core
Dec 13 01:53:54.891968 systemd[1]: sshd@6-10.200.8.24:22-10.200.16.10:47666.service: Deactivated successfully.
Dec 13 01:53:54.892937 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:53:54.893452 systemd-logind[1402]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:53:54.894274 systemd-logind[1402]: Removed session 9.
Dec 13 01:53:59.995290 systemd[1]: Started sshd@7-10.200.8.24:22-10.200.16.10:41108.service.
Dec 13 01:54:00.622320 sshd[3849]: Accepted publickey for core from 10.200.16.10 port 41108 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:00.623715 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:00.628520 systemd-logind[1402]: New session 10 of user core.
Dec 13 01:54:00.629042 systemd[1]: Started session-10.scope.
Dec 13 01:54:01.128195 sshd[3849]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:01.131425 systemd[1]: sshd@7-10.200.8.24:22-10.200.16.10:41108.service: Deactivated successfully.
Dec 13 01:54:01.132535 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:54:01.133403 systemd-logind[1402]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:54:01.134316 systemd-logind[1402]: Removed session 10.
Dec 13 01:54:06.232020 systemd[1]: Started sshd@8-10.200.8.24:22-10.200.16.10:41116.service.
Dec 13 01:54:06.856127 sshd[3864]: Accepted publickey for core from 10.200.16.10 port 41116 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:06.857911 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:06.864000 systemd-logind[1402]: New session 11 of user core.
Dec 13 01:54:06.864781 systemd[1]: Started session-11.scope.
Dec 13 01:54:07.361482 sshd[3864]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:07.364761 systemd[1]: sshd@8-10.200.8.24:22-10.200.16.10:41116.service: Deactivated successfully.
Dec 13 01:54:07.365699 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:54:07.366409 systemd-logind[1402]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:54:07.367240 systemd-logind[1402]: Removed session 11.
Dec 13 01:54:12.476098 systemd[1]: Started sshd@9-10.200.8.24:22-10.200.16.10:52444.service.
Dec 13 01:54:13.101916 sshd[3876]: Accepted publickey for core from 10.200.16.10 port 52444 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:13.103387 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:13.108360 systemd[1]: Started session-12.scope.
Dec 13 01:54:13.108946 systemd-logind[1402]: New session 12 of user core.
Dec 13 01:54:13.599022 sshd[3876]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:13.602520 systemd[1]: sshd@9-10.200.8.24:22-10.200.16.10:52444.service: Deactivated successfully.
Dec 13 01:54:13.603640 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:54:13.604480 systemd-logind[1402]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:54:13.605490 systemd-logind[1402]: Removed session 12.
Dec 13 01:54:13.705753 systemd[1]: Started sshd@10-10.200.8.24:22-10.200.16.10:52446.service.
Dec 13 01:54:14.331376 sshd[3888]: Accepted publickey for core from 10.200.16.10 port 52446 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:14.333073 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:14.339067 systemd[1]: Started session-13.scope.
Dec 13 01:54:14.339827 systemd-logind[1402]: New session 13 of user core.
Dec 13 01:54:14.869675 sshd[3888]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:14.872941 systemd-logind[1402]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:54:14.873319 systemd[1]: sshd@10-10.200.8.24:22-10.200.16.10:52446.service: Deactivated successfully.
Dec 13 01:54:14.874196 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:54:14.875142 systemd-logind[1402]: Removed session 13.
Dec 13 01:54:14.975484 systemd[1]: Started sshd@11-10.200.8.24:22-10.200.16.10:52448.service.
Dec 13 01:54:15.600995 sshd[3897]: Accepted publickey for core from 10.200.16.10 port 52448 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:15.602861 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:15.608295 systemd[1]: Started session-14.scope.
Dec 13 01:54:15.608887 systemd-logind[1402]: New session 14 of user core.
Dec 13 01:54:16.108777 sshd[3897]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:16.112359 systemd-logind[1402]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:54:16.112653 systemd[1]: sshd@11-10.200.8.24:22-10.200.16.10:52448.service: Deactivated successfully.
Dec 13 01:54:16.113784 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:54:16.114816 systemd-logind[1402]: Removed session 14.
Dec 13 01:54:21.213377 systemd[1]: Started sshd@12-10.200.8.24:22-10.200.16.10:51110.service.
Dec 13 01:54:21.839481 sshd[3912]: Accepted publickey for core from 10.200.16.10 port 51110 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:21.841416 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:21.846434 systemd[1]: Started session-15.scope.
Dec 13 01:54:21.846878 systemd-logind[1402]: New session 15 of user core.
Dec 13 01:54:22.341579 sshd[3912]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:22.344343 systemd[1]: sshd@12-10.200.8.24:22-10.200.16.10:51110.service: Deactivated successfully.
Dec 13 01:54:22.345538 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:54:22.345566 systemd-logind[1402]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:54:22.346735 systemd-logind[1402]: Removed session 15.
Dec 13 01:54:22.449936 systemd[1]: Started sshd@13-10.200.8.24:22-10.200.16.10:51122.service.
Dec 13 01:54:23.085141 sshd[3925]: Accepted publickey for core from 10.200.16.10 port 51122 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:23.086566 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:23.091059 systemd-logind[1402]: New session 16 of user core.
Dec 13 01:54:23.092127 systemd[1]: Started session-16.scope.
Dec 13 01:54:23.655691 sshd[3925]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:23.659098 systemd[1]: sshd@13-10.200.8.24:22-10.200.16.10:51122.service: Deactivated successfully.
Dec 13 01:54:23.660234 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:54:23.661067 systemd-logind[1402]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:54:23.662056 systemd-logind[1402]: Removed session 16.
Dec 13 01:54:23.759864 systemd[1]: Started sshd@14-10.200.8.24:22-10.200.16.10:51124.service.
Dec 13 01:54:24.385829 sshd[3935]: Accepted publickey for core from 10.200.16.10 port 51124 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:24.387633 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:24.392994 systemd-logind[1402]: New session 17 of user core.
Dec 13 01:54:24.393669 systemd[1]: Started session-17.scope.
Dec 13 01:54:26.322781 sshd[3935]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:26.326090 systemd[1]: sshd@14-10.200.8.24:22-10.200.16.10:51124.service: Deactivated successfully.
Dec 13 01:54:26.326965 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:54:26.327996 systemd-logind[1402]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:54:26.329413 systemd-logind[1402]: Removed session 17.
Dec 13 01:54:26.427644 systemd[1]: Started sshd@15-10.200.8.24:22-10.200.16.10:51134.service.
Dec 13 01:54:27.052859 sshd[3952]: Accepted publickey for core from 10.200.16.10 port 51134 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:27.054514 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:27.059866 systemd[1]: Started session-18.scope.
Dec 13 01:54:27.060751 systemd-logind[1402]: New session 18 of user core.
Dec 13 01:54:27.647503 sshd[3952]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:27.650560 systemd[1]: sshd@15-10.200.8.24:22-10.200.16.10:51134.service: Deactivated successfully.
Dec 13 01:54:27.651448 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:54:27.652163 systemd-logind[1402]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:54:27.652965 systemd-logind[1402]: Removed session 18.
Dec 13 01:54:27.752193 systemd[1]: Started sshd@16-10.200.8.24:22-10.200.16.10:51148.service.
Dec 13 01:54:28.378884 sshd[3962]: Accepted publickey for core from 10.200.16.10 port 51148 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:28.380590 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:28.385816 systemd[1]: Started session-19.scope.
Dec 13 01:54:28.386436 systemd-logind[1402]: New session 19 of user core.
Dec 13 01:54:28.874937 sshd[3962]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:28.878926 systemd[1]: sshd@16-10.200.8.24:22-10.200.16.10:51148.service: Deactivated successfully.
Dec 13 01:54:28.880771 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:54:28.880841 systemd-logind[1402]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:54:28.882235 systemd-logind[1402]: Removed session 19.
Dec 13 01:54:33.984037 systemd[1]: Started sshd@17-10.200.8.24:22-10.200.16.10:38122.service.
Dec 13 01:54:34.609200 sshd[3976]: Accepted publickey for core from 10.200.16.10 port 38122 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:34.610944 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:34.616755 systemd-logind[1402]: New session 20 of user core.
Dec 13 01:54:34.617470 systemd[1]: Started session-20.scope.
Dec 13 01:54:35.103323 sshd[3976]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:35.106768 systemd[1]: sshd@17-10.200.8.24:22-10.200.16.10:38122.service: Deactivated successfully.
Dec 13 01:54:35.107778 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:54:35.108501 systemd-logind[1402]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:54:35.109333 systemd-logind[1402]: Removed session 20.
Dec 13 01:54:40.208223 systemd[1]: Started sshd@18-10.200.8.24:22-10.200.16.10:54170.service.
Dec 13 01:54:40.837515 sshd[3990]: Accepted publickey for core from 10.200.16.10 port 54170 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:40.839046 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:40.847705 systemd[1]: Started session-21.scope.
Dec 13 01:54:40.848247 systemd-logind[1402]: New session 21 of user core.
Dec 13 01:54:41.337443 sshd[3990]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:41.340231 systemd[1]: sshd@18-10.200.8.24:22-10.200.16.10:54170.service: Deactivated successfully.
Dec 13 01:54:41.341462 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:54:41.341500 systemd-logind[1402]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:54:41.342572 systemd-logind[1402]: Removed session 21.
Dec 13 01:54:46.443293 systemd[1]: Started sshd@19-10.200.8.24:22-10.200.16.10:54186.service.
Dec 13 01:54:47.069205 sshd[4005]: Accepted publickey for core from 10.200.16.10 port 54186 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:47.070624 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:47.075874 systemd[1]: Started session-22.scope.
Dec 13 01:54:47.076522 systemd-logind[1402]: New session 22 of user core.
Dec 13 01:54:47.563644 sshd[4005]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:47.566890 systemd[1]: sshd@19-10.200.8.24:22-10.200.16.10:54186.service: Deactivated successfully.
Dec 13 01:54:47.567874 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:54:47.568545 systemd-logind[1402]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:54:47.569391 systemd-logind[1402]: Removed session 22.
Dec 13 01:54:47.667913 systemd[1]: Started sshd@20-10.200.8.24:22-10.200.16.10:54190.service.
Dec 13 01:54:48.294706 sshd[4016]: Accepted publickey for core from 10.200.16.10 port 54190 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:48.296605 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:48.302443 systemd[1]: Started session-23.scope.
Dec 13 01:54:48.302663 systemd-logind[1402]: New session 23 of user core.
Dec 13 01:54:49.952703 systemd[1]: run-containerd-runc-k8s.io-bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073-runc.Yntqcz.mount: Deactivated successfully.
Dec 13 01:54:49.958125 env[1419]: time="2024-12-13T01:54:49.957839000Z" level=info msg="StopContainer for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" with timeout 30 (s)"
Dec 13 01:54:49.959869 env[1419]: time="2024-12-13T01:54:49.959477810Z" level=info msg="Stop container \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" with signal terminated"
Dec 13 01:54:49.975693 systemd[1]: cri-containerd-290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad.scope: Deactivated successfully.
Dec 13 01:54:49.986154 env[1419]: time="2024-12-13T01:54:49.986012481Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:54:49.994248 env[1419]: time="2024-12-13T01:54:49.994217834Z" level=info msg="StopContainer for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" with timeout 2 (s)"
Dec 13 01:54:49.994574 env[1419]: time="2024-12-13T01:54:49.994546836Z" level=info msg="Stop container \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" with signal terminated"
Dec 13 01:54:50.001024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad-rootfs.mount: Deactivated successfully.
Dec 13 01:54:50.007729 systemd-networkd[1562]: lxc_health: Link DOWN
Dec 13 01:54:50.007737 systemd-networkd[1562]: lxc_health: Lost carrier
Dec 13 01:54:50.026920 systemd[1]: cri-containerd-bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073.scope: Deactivated successfully.
Dec 13 01:54:50.027258 systemd[1]: cri-containerd-bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073.scope: Consumed 7.049s CPU time.
Dec 13 01:54:50.047144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073-rootfs.mount: Deactivated successfully.
Dec 13 01:54:50.070547 env[1419]: time="2024-12-13T01:54:50.070494324Z" level=info msg="shim disconnected" id=bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073
Dec 13 01:54:50.070547 env[1419]: time="2024-12-13T01:54:50.070552125Z" level=warning msg="cleaning up after shim disconnected" id=bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073 namespace=k8s.io
Dec 13 01:54:50.070861 env[1419]: time="2024-12-13T01:54:50.070565325Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:50.073964 env[1419]: time="2024-12-13T01:54:50.073882746Z" level=info msg="shim disconnected" id=290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad
Dec 13 01:54:50.074075 env[1419]: time="2024-12-13T01:54:50.073991247Z" level=warning msg="cleaning up after shim disconnected" id=290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad namespace=k8s.io
Dec 13 01:54:50.074075 env[1419]: time="2024-12-13T01:54:50.074005747Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:50.081014 env[1419]: time="2024-12-13T01:54:50.080979292Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4086 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:50.088275 env[1419]: time="2024-12-13T01:54:50.088241338Z" level=info msg="StopContainer for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" returns successfully"
Dec 13 01:54:50.089282 env[1419]: time="2024-12-13T01:54:50.089254945Z" level=info msg="StopPodSandbox for \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\""
Dec 13 01:54:50.089472 env[1419]: time="2024-12-13T01:54:50.089435246Z" level=info msg="Container to stop \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:50.089570 env[1419]: time="2024-12-13T01:54:50.089551447Z" level=info msg="Container to stop \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:50.089649 env[1419]: time="2024-12-13T01:54:50.089634847Z" level=info msg="Container to stop \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:50.089729 env[1419]: time="2024-12-13T01:54:50.089714348Z" level=info msg="Container to stop \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:50.089788 env[1419]: time="2024-12-13T01:54:50.089460646Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4094 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:50.090143 env[1419]: time="2024-12-13T01:54:50.089779148Z" level=info msg="Container to stop \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:50.095322 env[1419]: time="2024-12-13T01:54:50.095294284Z" level=info msg="StopContainer for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" returns successfully"
Dec 13 01:54:50.095775 env[1419]: time="2024-12-13T01:54:50.095743187Z" level=info msg="StopPodSandbox for \"423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c\""
Dec 13 01:54:50.095867 env[1419]: time="2024-12-13T01:54:50.095810987Z" level=info msg="Container to stop \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:50.096760 systemd[1]: cri-containerd-e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405.scope: Deactivated successfully.
Dec 13 01:54:50.109589 systemd[1]: cri-containerd-423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c.scope: Deactivated successfully.
Dec 13 01:54:50.133179 env[1419]: time="2024-12-13T01:54:50.133128827Z" level=info msg="shim disconnected" id=e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405
Dec 13 01:54:50.133376 env[1419]: time="2024-12-13T01:54:50.133180827Z" level=warning msg="cleaning up after shim disconnected" id=e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405 namespace=k8s.io
Dec 13 01:54:50.133376 env[1419]: time="2024-12-13T01:54:50.133194427Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:50.134295 env[1419]: time="2024-12-13T01:54:50.134259934Z" level=info msg="shim disconnected" id=423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c
Dec 13 01:54:50.134295 env[1419]: time="2024-12-13T01:54:50.134294434Z" level=warning msg="cleaning up after shim disconnected" id=423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c namespace=k8s.io
Dec 13 01:54:50.134469 env[1419]: time="2024-12-13T01:54:50.134304434Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:50.145283 env[1419]: time="2024-12-13T01:54:50.145236905Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4148 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:50.145829 env[1419]: time="2024-12-13T01:54:50.145795708Z" level=info msg="TearDown network for sandbox \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" successfully"
Dec 13 01:54:50.145960 env[1419]: time="2024-12-13T01:54:50.145938409Z" level=info msg="StopPodSandbox for \"e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405\" returns successfully"
Dec 13 01:54:50.146131 env[1419]: time="2024-12-13T01:54:50.146005810Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4149 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:50.146811 env[1419]: time="2024-12-13T01:54:50.146730114Z" level=info msg="TearDown network for sandbox \"423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c\" successfully"
Dec 13 01:54:50.146811 env[1419]: time="2024-12-13T01:54:50.146758314Z" level=info msg="StopPodSandbox for \"423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c\" returns successfully"
Dec 13 01:54:50.305719 kubelet[2456]: I1213 01:54:50.305649    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-lib-modules\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306412 kubelet[2456]: I1213 01:54:50.305838    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cni-path\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306412 kubelet[2456]: I1213 01:54:50.305949    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bee5c91c-580c-4889-a126-e8e34b3d1c28-clustermesh-secrets\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306412 kubelet[2456]: I1213 01:54:50.306010    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldph5\" (UniqueName: \"kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-kube-api-access-ldph5\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306629 kubelet[2456]: I1213 01:54:50.306532    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-kernel\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306629 kubelet[2456]: I1213 01:54:50.306584    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njhs\" (UniqueName: \"kubernetes.io/projected/a306b0d9-452f-4990-a53a-471e29586f86-kube-api-access-9njhs\") pod \"a306b0d9-452f-4990-a53a-471e29586f86\" (UID: \"a306b0d9-452f-4990-a53a-471e29586f86\") "
Dec 13 01:54:50.306629 kubelet[2456]: I1213 01:54:50.306622    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a306b0d9-452f-4990-a53a-471e29586f86-cilium-config-path\") pod \"a306b0d9-452f-4990-a53a-471e29586f86\" (UID: \"a306b0d9-452f-4990-a53a-471e29586f86\") "
Dec 13 01:54:50.306798 kubelet[2456]: I1213 01:54:50.306705    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-bpf-maps\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306798 kubelet[2456]: I1213 01:54:50.306743    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-cgroup\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306798 kubelet[2456]: I1213 01:54:50.306774    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-run\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306954 kubelet[2456]: I1213 01:54:50.306806    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-xtables-lock\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306954 kubelet[2456]: I1213 01:54:50.306838    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-etc-cni-netd\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306954 kubelet[2456]: I1213 01:54:50.306881    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-config-path\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306954 kubelet[2456]: I1213 01:54:50.306913    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-net\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.306954 kubelet[2456]: I1213 01:54:50.306946    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-hostproc\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.307248 kubelet[2456]: I1213 01:54:50.306980    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-hubble-tls\") pod \"bee5c91c-580c-4889-a126-e8e34b3d1c28\" (UID: \"bee5c91c-580c-4889-a126-e8e34b3d1c28\") "
Dec 13 01:54:50.310989 kubelet[2456]: I1213 01:54:50.310956    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee5c91c-580c-4889-a126-e8e34b3d1c28-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:54:50.311125 kubelet[2456]: I1213 01:54:50.305753    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.311125 kubelet[2456]: I1213 01:54:50.310956    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:50.311125 kubelet[2456]: I1213 01:54:50.305877    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cni-path" (OuterVolumeSpecName: "cni-path") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.311125 kubelet[2456]: I1213 01:54:50.311049    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.311125 kubelet[2456]: I1213 01:54:50.311072    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.314359 kubelet[2456]: I1213 01:54:50.314328    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a306b0d9-452f-4990-a53a-471e29586f86-kube-api-access-9njhs" (OuterVolumeSpecName: "kube-api-access-9njhs") pod "a306b0d9-452f-4990-a53a-471e29586f86" (UID: "a306b0d9-452f-4990-a53a-471e29586f86"). InnerVolumeSpecName "kube-api-access-9njhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:50.314359 kubelet[2456]: I1213 01:54:50.314328    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-kube-api-access-ldph5" (OuterVolumeSpecName: "kube-api-access-ldph5") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "kube-api-access-ldph5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:50.314561 kubelet[2456]: I1213 01:54:50.314542    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.314658 kubelet[2456]: I1213 01:54:50.314642    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.317234 kubelet[2456]: I1213 01:54:50.317068    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a306b0d9-452f-4990-a53a-471e29586f86-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a306b0d9-452f-4990-a53a-471e29586f86" (UID: "a306b0d9-452f-4990-a53a-471e29586f86"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:54:50.317734 kubelet[2456]: I1213 01:54:50.317710    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:54:50.317855 kubelet[2456]: I1213 01:54:50.317839    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.317958 kubelet[2456]: I1213 01:54:50.317941    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-hostproc" (OuterVolumeSpecName: "hostproc") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.318061 kubelet[2456]: I1213 01:54:50.318045    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.318191 kubelet[2456]: I1213 01:54:50.318151    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bee5c91c-580c-4889-a126-e8e34b3d1c28" (UID: "bee5c91c-580c-4889-a126-e8e34b3d1c28"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:50.381175 kubelet[2456]: I1213 01:54:50.381149    2456 scope.go:117] "RemoveContainer" containerID="290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad"
Dec 13 01:54:50.386107 systemd[1]: Removed slice kubepods-besteffort-poda306b0d9_452f_4990_a53a_471e29586f86.slice.
Dec 13 01:54:50.389974 env[1419]: time="2024-12-13T01:54:50.389466573Z" level=info msg="RemoveContainer for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\""
Dec 13 01:54:50.396030 systemd[1]: Removed slice kubepods-burstable-podbee5c91c_580c_4889_a126_e8e34b3d1c28.slice.
Dec 13 01:54:50.396184 systemd[1]: kubepods-burstable-podbee5c91c_580c_4889_a126_e8e34b3d1c28.slice: Consumed 7.152s CPU time.
Dec 13 01:54:50.405025 env[1419]: time="2024-12-13T01:54:50.404992473Z" level=info msg="RemoveContainer for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" returns successfully"
Dec 13 01:54:50.405940 kubelet[2456]: I1213 01:54:50.405546    2456 scope.go:117] "RemoveContainer" containerID="290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad"
Dec 13 01:54:50.406374 env[1419]: time="2024-12-13T01:54:50.406189681Z" level=error msg="ContainerStatus for \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\": not found"
Dec 13 01:54:50.407064 kubelet[2456]: E1213 01:54:50.406983    2456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\": not found" containerID="290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad"
Dec 13 01:54:50.407214 kubelet[2456]: I1213 01:54:50.407137    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad"} err="failed to get container status \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\": rpc error: code = NotFound desc = an error occurred when try to find container \"290d49758b9481f01155318e255492e566d0dcd3426813df6a509be52d5bfbad\": not found"
Dec 13 01:54:50.407214 kubelet[2456]: I1213 01:54:50.407161    2456 scope.go:117] "RemoveContainer" containerID="bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073"
Dec 13 01:54:50.407321 kubelet[2456]: I1213 01:54:50.407292    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a306b0d9-452f-4990-a53a-471e29586f86-cilium-config-path\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407321 kubelet[2456]: I1213 01:54:50.407310    2456 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-bpf-maps\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407325    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-cgroup\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407339    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-run\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407356    2456 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407370    2456 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9njhs\" (UniqueName: \"kubernetes.io/projected/a306b0d9-452f-4990-a53a-471e29586f86-kube-api-access-9njhs\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407385    2456 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-xtables-lock\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407400    2456 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-etc-cni-netd\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407412 kubelet[2456]: I1213 01:54:50.407415    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee5c91c-580c-4889-a126-e8e34b3d1c28-cilium-config-path\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407429    2456 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-host-proc-sys-net\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407444    2456 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-hostproc\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407459    2456 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-hubble-tls\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407472    2456 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-lib-modules\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407487    2456 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bee5c91c-580c-4889-a126-e8e34b3d1c28-cni-path\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407502    2456 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bee5c91c-580c-4889-a126-e8e34b3d1c28-clustermesh-secrets\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.407630 kubelet[2456]: I1213 01:54:50.407517    2456 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ldph5\" (UniqueName: \"kubernetes.io/projected/bee5c91c-580c-4889-a126-e8e34b3d1c28-kube-api-access-ldph5\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:50.408761 env[1419]: time="2024-12-13T01:54:50.408734197Z" level=info msg="RemoveContainer for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\""
Dec 13 01:54:50.419185 env[1419]: time="2024-12-13T01:54:50.418484260Z" level=info msg="RemoveContainer for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" returns successfully"
Dec 13 01:54:50.419278 kubelet[2456]: I1213 01:54:50.418664    2456 scope.go:117] "RemoveContainer" containerID="16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f"
Dec 13 01:54:50.420357 env[1419]: time="2024-12-13T01:54:50.420323772Z" level=info msg="RemoveContainer for \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\""
Dec 13 01:54:50.427680 env[1419]: time="2024-12-13T01:54:50.427649519Z" level=info msg="RemoveContainer for \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\" returns successfully"
Dec 13 01:54:50.427812 kubelet[2456]: I1213 01:54:50.427793    2456 scope.go:117] "RemoveContainer" containerID="55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417"
Dec 13 01:54:50.428688 env[1419]: time="2024-12-13T01:54:50.428659025Z" level=info msg="RemoveContainer for \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\""
Dec 13 01:54:50.438839 env[1419]: time="2024-12-13T01:54:50.438806090Z" level=info msg="RemoveContainer for \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\" returns successfully"
Dec 13 01:54:50.438967 kubelet[2456]: I1213 01:54:50.438948    2456 scope.go:117] "RemoveContainer" containerID="6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed"
Dec 13 01:54:50.440011 env[1419]: time="2024-12-13T01:54:50.439982598Z" level=info msg="RemoveContainer for \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\""
Dec 13 01:54:50.448546 env[1419]: time="2024-12-13T01:54:50.448508053Z" level=info msg="RemoveContainer for \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\" returns successfully"
Dec 13 01:54:50.448719 kubelet[2456]: I1213 01:54:50.448700    2456 scope.go:117] "RemoveContainer" containerID="19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae"
Dec 13 01:54:50.449695 env[1419]: time="2024-12-13T01:54:50.449669360Z" level=info msg="RemoveContainer for \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\""
Dec 13 01:54:50.457550 env[1419]: time="2024-12-13T01:54:50.457521310Z" level=info msg="RemoveContainer for \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\" returns successfully"
Dec 13 01:54:50.457709 kubelet[2456]: I1213 01:54:50.457694    2456 scope.go:117] "RemoveContainer" containerID="bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073"
Dec 13 01:54:50.458046 env[1419]: time="2024-12-13T01:54:50.457992413Z" level=error msg="ContainerStatus for \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\": not found"
Dec 13 01:54:50.458184 kubelet[2456]: E1213 01:54:50.458166    2456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\": not found" containerID="bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073"
Dec 13 01:54:50.458279 kubelet[2456]: I1213 01:54:50.458212    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073"} err="failed to get container status \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc98a834d4b9c9b75dae7c015ceb30ee1f11d8d9e8a73e4d7f6e989ee3dd9073\": not found"
Dec 13 01:54:50.458279 kubelet[2456]: I1213 01:54:50.458228    2456 scope.go:117] "RemoveContainer" containerID="16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f"
Dec 13 01:54:50.458473 env[1419]: time="2024-12-13T01:54:50.458424416Z" level=error msg="ContainerStatus for \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\": not found"
Dec 13 01:54:50.458587 kubelet[2456]: E1213 01:54:50.458568    2456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\": not found" containerID="16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f"
Dec 13 01:54:50.458663 kubelet[2456]: I1213 01:54:50.458605    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f"} err="failed to get container status \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"16f36464c3cbd951d2dea1bbca11ac5e4ccc08fe6ef45c541fd6d1d89a5a1c3f\": not found"
Dec 13 01:54:50.458663 kubelet[2456]: I1213 01:54:50.458625    2456 scope.go:117] "RemoveContainer" containerID="55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417"
Dec 13 01:54:50.458838 env[1419]: time="2024-12-13T01:54:50.458794119Z" level=error msg="ContainerStatus for \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\": not found"
Dec 13 01:54:50.458947 kubelet[2456]: E1213 01:54:50.458929    2456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\": not found" containerID="55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417"
Dec 13 01:54:50.459016 kubelet[2456]: I1213 01:54:50.458962    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417"} err="failed to get container status \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\": rpc error: code = NotFound desc = an error occurred when try to find container \"55fa4e784fd04535976f7e380ee0ace135b6aec03029dfcd727ce3d589fe7417\": not found"
Dec 13 01:54:50.459016 kubelet[2456]: I1213 01:54:50.458975    2456 scope.go:117] "RemoveContainer" containerID="6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed"
Dec 13 01:54:50.459231 env[1419]: time="2024-12-13T01:54:50.459165221Z" level=error msg="ContainerStatus for \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\": not found"
Dec 13 01:54:50.459339 kubelet[2456]: E1213 01:54:50.459318    2456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\": not found" containerID="6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed"
Dec 13 01:54:50.459409 kubelet[2456]: I1213 01:54:50.459353    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed"} err="failed to get container status \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"6902d66cc29daefc4e0161b2b0de92de7c53d6ee5a8dbd379c51127ec12530ed\": not found"
Dec 13 01:54:50.459409 kubelet[2456]: I1213 01:54:50.459366    2456 scope.go:117] "RemoveContainer" containerID="19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae"
Dec 13 01:54:50.459573 env[1419]: time="2024-12-13T01:54:50.459528723Z" level=error msg="ContainerStatus for \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\": not found"
Dec 13 01:54:50.459679 kubelet[2456]: E1213 01:54:50.459662    2456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\": not found" containerID="19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae"
Dec 13 01:54:50.459751 kubelet[2456]: I1213 01:54:50.459693    2456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae"} err="failed to get container status \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"19da5eb704d892836b6f49dde1e26668cd532fd5f27a5185fccf2372cfe135ae\": not found"
Dec 13 01:54:50.944459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c-rootfs.mount: Deactivated successfully.
Dec 13 01:54:50.944813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-423745d0d419d93e95f9ae675c68d95e068ed6ced5ecaf9ca42127339e730f0c-shm.mount: Deactivated successfully.
Dec 13 01:54:50.944996 systemd[1]: var-lib-kubelet-pods-a306b0d9\x2d452f\x2d4990\x2da53a\x2d471e29586f86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9njhs.mount: Deactivated successfully.
Dec 13 01:54:50.945160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405-rootfs.mount: Deactivated successfully.
Dec 13 01:54:50.945242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e51cd35bcdbf689cfba71f47042ed4b1f5230beead1b45bacd7cca596d762405-shm.mount: Deactivated successfully.
Dec 13 01:54:50.945321 systemd[1]: var-lib-kubelet-pods-bee5c91c\x2d580c\x2d4889\x2da126\x2de8e34b3d1c28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldph5.mount: Deactivated successfully.
Dec 13 01:54:50.945407 systemd[1]: var-lib-kubelet-pods-bee5c91c\x2d580c\x2d4889\x2da126\x2de8e34b3d1c28-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:54:50.945482 systemd[1]: var-lib-kubelet-pods-bee5c91c\x2d580c\x2d4889\x2da126\x2de8e34b3d1c28-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:54:51.878644 kubelet[2456]: I1213 01:54:51.878601    2456 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a306b0d9-452f-4990-a53a-471e29586f86" path="/var/lib/kubelet/pods/a306b0d9-452f-4990-a53a-471e29586f86/volumes"
Dec 13 01:54:51.879349 kubelet[2456]: I1213 01:54:51.879318    2456 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" path="/var/lib/kubelet/pods/bee5c91c-580c-4889-a126-e8e34b3d1c28/volumes"
Dec 13 01:54:51.983755 sshd[4016]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:51.987492 systemd[1]: sshd@20-10.200.8.24:22-10.200.16.10:54190.service: Deactivated successfully.
Dec 13 01:54:51.988548 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:54:51.989448 systemd-logind[1402]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:54:51.990507 systemd-logind[1402]: Removed session 23.
Dec 13 01:54:52.088020 systemd[1]: Started sshd@21-10.200.8.24:22-10.200.16.10:34694.service.
Dec 13 01:54:52.713637 sshd[4181]: Accepted publickey for core from 10.200.16.10 port 34694 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:52.715272 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:52.720731 systemd[1]: Started session-24.scope.
Dec 13 01:54:52.721185 systemd-logind[1402]: New session 24 of user core.
Dec 13 01:54:53.589022 kubelet[2456]: I1213 01:54:53.588976    2456 topology_manager.go:215] "Topology Admit Handler" podUID="e49883ed-5e0b-4772-88fd-12928a2f48c8" podNamespace="kube-system" podName="cilium-sf59r"
Dec 13 01:54:53.589498 kubelet[2456]: E1213 01:54:53.589067    2456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" containerName="cilium-agent"
Dec 13 01:54:53.589498 kubelet[2456]: E1213 01:54:53.589094    2456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" containerName="mount-cgroup"
Dec 13 01:54:53.589498 kubelet[2456]: E1213 01:54:53.589104    2456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" containerName="apply-sysctl-overwrites"
Dec 13 01:54:53.589498 kubelet[2456]: E1213 01:54:53.589112    2456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a306b0d9-452f-4990-a53a-471e29586f86" containerName="cilium-operator"
Dec 13 01:54:53.589498 kubelet[2456]: E1213 01:54:53.589120    2456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" containerName="mount-bpf-fs"
Dec 13 01:54:53.589498 kubelet[2456]: E1213 01:54:53.589130    2456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" containerName="clean-cilium-state"
Dec 13 01:54:53.589498 kubelet[2456]: I1213 01:54:53.589172    2456 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee5c91c-580c-4889-a126-e8e34b3d1c28" containerName="cilium-agent"
Dec 13 01:54:53.589498 kubelet[2456]: I1213 01:54:53.589184    2456 memory_manager.go:354] "RemoveStaleState removing state" podUID="a306b0d9-452f-4990-a53a-471e29586f86" containerName="cilium-operator"
Dec 13 01:54:53.594947 kubelet[2456]: W1213 01:54:53.594917    2456 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.6-a-f5ec44d98c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-f5ec44d98c' and this object
Dec 13 01:54:53.595147 kubelet[2456]: E1213 01:54:53.595128    2456 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.6-a-f5ec44d98c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-f5ec44d98c' and this object
Dec 13 01:54:53.595356 kubelet[2456]: W1213 01:54:53.595340    2456 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.6-a-f5ec44d98c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-f5ec44d98c' and this object
Dec 13 01:54:53.595471 kubelet[2456]: E1213 01:54:53.595459    2456 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.6-a-f5ec44d98c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-f5ec44d98c' and this object
Dec 13 01:54:53.599469 systemd[1]: Created slice kubepods-burstable-pode49883ed_5e0b_4772_88fd_12928a2f48c8.slice.
Dec 13 01:54:53.621886 kubelet[2456]: I1213 01:54:53.621864    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-etc-cni-netd\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622071 kubelet[2456]: I1213 01:54:53.622055    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-clustermesh-secrets\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622234 kubelet[2456]: I1213 01:54:53.622221    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-net\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622370 kubelet[2456]: I1213 01:54:53.622357    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-lib-modules\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622520 kubelet[2456]: I1213 01:54:53.622507    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6j2c\" (UniqueName: \"kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-kube-api-access-q6j2c\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622648 kubelet[2456]: I1213 01:54:53.622637    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-xtables-lock\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622763 kubelet[2456]: I1213 01:54:53.622753    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-ipsec-secrets\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622877 kubelet[2456]: I1213 01:54:53.622866    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-hubble-tls\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.622987 kubelet[2456]: I1213 01:54:53.622975    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-cgroup\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.623128 kubelet[2456]: I1213 01:54:53.623116    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-config-path\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.623316 kubelet[2456]: I1213 01:54:53.623298    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-hostproc\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.623464 kubelet[2456]: I1213 01:54:53.623447    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cni-path\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.623638 kubelet[2456]: I1213 01:54:53.623617    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-bpf-maps\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.623802 kubelet[2456]: I1213 01:54:53.623759    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-kernel\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.623947 kubelet[2456]: I1213 01:54:53.623933    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-run\") pod \"cilium-sf59r\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") " pod="kube-system/cilium-sf59r"
Dec 13 01:54:53.682985 sshd[4181]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:53.685920 systemd[1]: sshd@21-10.200.8.24:22-10.200.16.10:34694.service: Deactivated successfully.
Dec 13 01:54:53.686846 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:54:53.687532 systemd-logind[1402]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:54:53.688355 systemd-logind[1402]: Removed session 24.
Dec 13 01:54:53.789187 systemd[1]: Started sshd@22-10.200.8.24:22-10.200.16.10:34702.service.
Dec 13 01:54:54.412577 sshd[4193]: Accepted publickey for core from 10.200.16.10 port 34702 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:54.413972 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:54.419132 systemd-logind[1402]: New session 25 of user core.
Dec 13 01:54:54.419615 systemd[1]: Started session-25.scope.
Dec 13 01:54:54.726846 kubelet[2456]: E1213 01:54:54.726723    2456 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Dec 13 01:54:54.726846 kubelet[2456]: E1213 01:54:54.726835    2456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-clustermesh-secrets podName:e49883ed-5e0b-4772-88fd-12928a2f48c8 nodeName:}" failed. No retries permitted until 2024-12-13 01:54:55.226807346 +0000 UTC m=+215.476042430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-clustermesh-secrets") pod "cilium-sf59r" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8") : failed to sync secret cache: timed out waiting for the condition
Dec 13 01:54:54.835976 kubelet[2456]: E1213 01:54:54.835937    2456 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[clustermesh-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-sf59r" podUID="e49883ed-5e0b-4772-88fd-12928a2f48c8"
Dec 13 01:54:54.920662 sshd[4193]: pam_unix(sshd:session): session closed for user core
Dec 13 01:54:54.923790 systemd[1]: sshd@22-10.200.8.24:22-10.200.16.10:34702.service: Deactivated successfully.
Dec 13 01:54:54.924689 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:54:54.925454 systemd-logind[1402]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:54:54.926427 systemd-logind[1402]: Removed session 25.
Dec 13 01:54:54.980672 kubelet[2456]: E1213 01:54:54.980542    2456 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:54:55.031302 systemd[1]: Started sshd@23-10.200.8.24:22-10.200.16.10:34718.service.
Dec 13 01:54:55.439674 kubelet[2456]: I1213 01:54:55.439641    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-etc-cni-netd\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.439858 kubelet[2456]: I1213 01:54:55.439694    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6j2c\" (UniqueName: \"kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-kube-api-access-q6j2c\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.439858 kubelet[2456]: I1213 01:54:55.439719    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cni-path\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.439858 kubelet[2456]: I1213 01:54:55.439740    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-cgroup\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.439858 kubelet[2456]: I1213 01:54:55.439765    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-xtables-lock\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.440101 kubelet[2456]: I1213 01:54:55.440061    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.440190 kubelet[2456]: I1213 01:54:55.440128    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-run\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.442821 kubelet[2456]: I1213 01:54:55.440252    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-lib-modules\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.442821 kubelet[2456]: I1213 01:54:55.440277    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-ipsec-secrets\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.442821 kubelet[2456]: I1213 01:54:55.440294    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-net\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.442821 kubelet[2456]: I1213 01:54:55.440309    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-hostproc\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.442821 kubelet[2456]: I1213 01:54:55.440324    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-bpf-maps\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.442821 kubelet[2456]: I1213 01:54:55.440340    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-kernel\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.443266 kubelet[2456]: I1213 01:54:55.440357    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-hubble-tls\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.443266 kubelet[2456]: I1213 01:54:55.440375    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-config-path\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.443266 kubelet[2456]: I1213 01:54:55.440392    2456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-clustermesh-secrets\") pod \"e49883ed-5e0b-4772-88fd-12928a2f48c8\" (UID: \"e49883ed-5e0b-4772-88fd-12928a2f48c8\") "
Dec 13 01:54:55.443266 kubelet[2456]: I1213 01:54:55.440421    2456 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-etc-cni-netd\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.443266 kubelet[2456]: I1213 01:54:55.440151    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443266 kubelet[2456]: I1213 01:54:55.440604    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443527 kubelet[2456]: I1213 01:54:55.440673    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cni-path" (OuterVolumeSpecName: "cni-path") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443527 kubelet[2456]: I1213 01:54:55.440689    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443527 kubelet[2456]: I1213 01:54:55.440702    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443527 kubelet[2456]: I1213 01:54:55.440715    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443527 kubelet[2456]: I1213 01:54:55.440733    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443747 kubelet[2456]: I1213 01:54:55.440750    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-hostproc" (OuterVolumeSpecName: "hostproc") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.443747 kubelet[2456]: I1213 01:54:55.441470    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:55.446601 kubelet[2456]: I1213 01:54:55.446574    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:54:55.449810 systemd[1]: var-lib-kubelet-pods-e49883ed\x2d5e0b\x2d4772\x2d88fd\x2d12928a2f48c8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:54:55.453644 systemd[1]: var-lib-kubelet-pods-e49883ed\x2d5e0b\x2d4772\x2d88fd\x2d12928a2f48c8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:54:55.455680 kubelet[2456]: I1213 01:54:55.455649    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:54:55.455789 kubelet[2456]: I1213 01:54:55.455775    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:54:55.460120 systemd[1]: var-lib-kubelet-pods-e49883ed\x2d5e0b\x2d4772\x2d88fd\x2d12928a2f48c8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:54:55.460845 kubelet[2456]: I1213 01:54:55.460540    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:55.463752 systemd[1]: var-lib-kubelet-pods-e49883ed\x2d5e0b\x2d4772\x2d88fd\x2d12928a2f48c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6j2c.mount: Deactivated successfully.
Dec 13 01:54:55.466994 kubelet[2456]: I1213 01:54:55.466969    2456 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-kube-api-access-q6j2c" (OuterVolumeSpecName: "kube-api-access-q6j2c") pod "e49883ed-5e0b-4772-88fd-12928a2f48c8" (UID: "e49883ed-5e0b-4772-88fd-12928a2f48c8"). InnerVolumeSpecName "kube-api-access-q6j2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:55.541388 kubelet[2456]: I1213 01:54:55.541348    2456 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-xtables-lock\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541388 kubelet[2456]: I1213 01:54:55.541386    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-run\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541388 kubelet[2456]: I1213 01:54:55.541402    2456 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-lib-modules\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541417    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541430    2456 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-hostproc\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541442    2456 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-bpf-maps\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541454    2456 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541465    2456 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-host-proc-sys-net\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541478    2456 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-hubble-tls\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541491    2456 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e49883ed-5e0b-4772-88fd-12928a2f48c8-clustermesh-secrets\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541651 kubelet[2456]: I1213 01:54:55.541504    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-config-path\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541849 kubelet[2456]: I1213 01:54:55.541516    2456 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q6j2c\" (UniqueName: \"kubernetes.io/projected/e49883ed-5e0b-4772-88fd-12928a2f48c8-kube-api-access-q6j2c\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541849 kubelet[2456]: I1213 01:54:55.541530    2456 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cni-path\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.541849 kubelet[2456]: I1213 01:54:55.541544    2456 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e49883ed-5e0b-4772-88fd-12928a2f48c8-cilium-cgroup\") on node \"ci-3510.3.6-a-f5ec44d98c\" DevicePath \"\""
Dec 13 01:54:55.680538 sshd[4205]: Accepted publickey for core from 10.200.16.10 port 34718 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:54:55.681982 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:54:55.686895 systemd[1]: Started session-26.scope.
Dec 13 01:54:55.687365 systemd-logind[1402]: New session 26 of user core.
Dec 13 01:54:55.883309 systemd[1]: Removed slice kubepods-burstable-pode49883ed_5e0b_4772_88fd_12928a2f48c8.slice.
Dec 13 01:54:56.466316 kubelet[2456]: I1213 01:54:56.466285    2456 topology_manager.go:215] "Topology Admit Handler" podUID="d7416a31-4aea-4d5a-896e-ed1dc5140530" podNamespace="kube-system" podName="cilium-2vsps"
Dec 13 01:54:56.473552 systemd[1]: Created slice kubepods-burstable-podd7416a31_4aea_4d5a_896e_ed1dc5140530.slice.
Dec 13 01:54:56.549185 kubelet[2456]: I1213 01:54:56.549075    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7416a31-4aea-4d5a-896e-ed1dc5140530-clustermesh-secrets\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549434 kubelet[2456]: I1213 01:54:56.549370    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb7cc\" (UniqueName: \"kubernetes.io/projected/d7416a31-4aea-4d5a-896e-ed1dc5140530-kube-api-access-fb7cc\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549506 kubelet[2456]: I1213 01:54:56.549435    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-bpf-maps\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549506 kubelet[2456]: I1213 01:54:56.549462    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-etc-cni-netd\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549596 kubelet[2456]: I1213 01:54:56.549531    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-lib-modules\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549596 kubelet[2456]: I1213 01:54:56.549593    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-xtables-lock\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549700 kubelet[2456]: I1213 01:54:56.549626    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7416a31-4aea-4d5a-896e-ed1dc5140530-cilium-config-path\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549700 kubelet[2456]: I1213 01:54:56.549696    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-host-proc-sys-net\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549791 kubelet[2456]: I1213 01:54:56.549781    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-host-proc-sys-kernel\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549861 kubelet[2456]: I1213 01:54:56.549844    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-hostproc\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549934 kubelet[2456]: I1213 01:54:56.549918    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7416a31-4aea-4d5a-896e-ed1dc5140530-cilium-ipsec-secrets\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.549993 kubelet[2456]: I1213 01:54:56.549958    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7416a31-4aea-4d5a-896e-ed1dc5140530-hubble-tls\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.550039 kubelet[2456]: I1213 01:54:56.550026    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-cilium-run\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.550135 kubelet[2456]: I1213 01:54:56.550119    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-cilium-cgroup\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.550189 kubelet[2456]: I1213 01:54:56.550159    2456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7416a31-4aea-4d5a-896e-ed1dc5140530-cni-path\") pod \"cilium-2vsps\" (UID: \"d7416a31-4aea-4d5a-896e-ed1dc5140530\") " pod="kube-system/cilium-2vsps"
Dec 13 01:54:56.777761 env[1419]: time="2024-12-13T01:54:56.777699597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vsps,Uid:d7416a31-4aea-4d5a-896e-ed1dc5140530,Namespace:kube-system,Attempt:0,}"
Dec 13 01:54:56.816916 env[1419]: time="2024-12-13T01:54:56.816847043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:56.817131 env[1419]: time="2024-12-13T01:54:56.816904944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:56.817131 env[1419]: time="2024-12-13T01:54:56.816928344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:56.817342 env[1419]: time="2024-12-13T01:54:56.817298246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5 pid=4231 runtime=io.containerd.runc.v2
Dec 13 01:54:56.833735 systemd[1]: Started cri-containerd-56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5.scope.
Dec 13 01:54:56.862552 env[1419]: time="2024-12-13T01:54:56.862505231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vsps,Uid:d7416a31-4aea-4d5a-896e-ed1dc5140530,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\""
Dec 13 01:54:56.866549 env[1419]: time="2024-12-13T01:54:56.866506756Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:54:56.895252 env[1419]: time="2024-12-13T01:54:56.895208337Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc\""
Dec 13 01:54:56.896196 env[1419]: time="2024-12-13T01:54:56.896140643Z" level=info msg="StartContainer for \"2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc\""
Dec 13 01:54:56.912811 systemd[1]: Started cri-containerd-2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc.scope.
Dec 13 01:54:56.953903 env[1419]: time="2024-12-13T01:54:56.953850106Z" level=info msg="StartContainer for \"2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc\" returns successfully"
Dec 13 01:54:56.959043 systemd[1]: cri-containerd-2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc.scope: Deactivated successfully.
Dec 13 01:54:57.079326 env[1419]: time="2024-12-13T01:54:57.079164194Z" level=info msg="shim disconnected" id=2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc
Dec 13 01:54:57.079326 env[1419]: time="2024-12-13T01:54:57.079232595Z" level=warning msg="cleaning up after shim disconnected" id=2ed133e14164472d5b7d63d1c0f130dc1c32ea04af8f5864cdf4f2633eef38bc namespace=k8s.io
Dec 13 01:54:57.079326 env[1419]: time="2024-12-13T01:54:57.079248195Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:57.087447 env[1419]: time="2024-12-13T01:54:57.087405946Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4312 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:57.419238 env[1419]: time="2024-12-13T01:54:57.419126429Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:54:57.453436 env[1419]: time="2024-12-13T01:54:57.453386945Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182\""
Dec 13 01:54:57.454138 env[1419]: time="2024-12-13T01:54:57.454098549Z" level=info msg="StartContainer for \"19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182\""
Dec 13 01:54:57.472365 systemd[1]: Started cri-containerd-19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182.scope.
Dec 13 01:54:57.508898 env[1419]: time="2024-12-13T01:54:57.508847093Z" level=info msg="StartContainer for \"19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182\" returns successfully"
Dec 13 01:54:57.511798 systemd[1]: cri-containerd-19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182.scope: Deactivated successfully.
Dec 13 01:54:57.542635 env[1419]: time="2024-12-13T01:54:57.542585905Z" level=info msg="shim disconnected" id=19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182
Dec 13 01:54:57.542635 env[1419]: time="2024-12-13T01:54:57.542634705Z" level=warning msg="cleaning up after shim disconnected" id=19f35796b73dac4abfbf8eadc6caafd380923f401c85c49ce9e67321fe53e182 namespace=k8s.io
Dec 13 01:54:57.542912 env[1419]: time="2024-12-13T01:54:57.542645805Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:57.550851 env[1419]: time="2024-12-13T01:54:57.550813256Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4373 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:57.877682 kubelet[2456]: I1213 01:54:57.877641    2456 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e49883ed-5e0b-4772-88fd-12928a2f48c8" path="/var/lib/kubelet/pods/e49883ed-5e0b-4772-88fd-12928a2f48c8/volumes"
Dec 13 01:54:58.423250 env[1419]: time="2024-12-13T01:54:58.423195627Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:54:58.467263 env[1419]: time="2024-12-13T01:54:58.467218403Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9\""
Dec 13 01:54:58.467944 env[1419]: time="2024-12-13T01:54:58.467908207Z" level=info msg="StartContainer for \"eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9\""
Dec 13 01:54:58.496681 systemd[1]: Started cri-containerd-eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9.scope.
Dec 13 01:54:58.541433 systemd[1]: cri-containerd-eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9.scope: Deactivated successfully.
Dec 13 01:54:58.546612 env[1419]: time="2024-12-13T01:54:58.546569500Z" level=info msg="StartContainer for \"eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9\" returns successfully"
Dec 13 01:54:58.580903 env[1419]: time="2024-12-13T01:54:58.580854514Z" level=info msg="shim disconnected" id=eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9
Dec 13 01:54:58.580903 env[1419]: time="2024-12-13T01:54:58.580902815Z" level=warning msg="cleaning up after shim disconnected" id=eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9 namespace=k8s.io
Dec 13 01:54:58.581213 env[1419]: time="2024-12-13T01:54:58.580914215Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:58.589468 env[1419]: time="2024-12-13T01:54:58.589429168Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4432 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:58.661390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb7bca870df4e07b54de4529510285619001934962e7850b6c5f3a4e662996f9-rootfs.mount: Deactivated successfully.
Dec 13 01:54:59.426102 env[1419]: time="2024-12-13T01:54:59.426045898Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:54:59.463477 env[1419]: time="2024-12-13T01:54:59.463401331Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa\""
Dec 13 01:54:59.464054 env[1419]: time="2024-12-13T01:54:59.464008235Z" level=info msg="StartContainer for \"0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa\""
Dec 13 01:54:59.487683 systemd[1]: Started cri-containerd-0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa.scope.
Dec 13 01:54:59.522188 systemd[1]: cri-containerd-0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa.scope: Deactivated successfully.
Dec 13 01:54:59.525822 env[1419]: time="2024-12-13T01:54:59.525486519Z" level=info msg="StartContainer for \"0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa\" returns successfully"
Dec 13 01:54:59.555994 env[1419]: time="2024-12-13T01:54:59.555947509Z" level=info msg="shim disconnected" id=0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa
Dec 13 01:54:59.555994 env[1419]: time="2024-12-13T01:54:59.555992809Z" level=warning msg="cleaning up after shim disconnected" id=0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa namespace=k8s.io
Dec 13 01:54:59.555994 env[1419]: time="2024-12-13T01:54:59.556006909Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:59.563736 env[1419]: time="2024-12-13T01:54:59.563695757Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4487 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:59.660951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b57c6520cb06a571c1674b5d64ea951cd30225a1ed04a533bcace16311243aa-rootfs.mount: Deactivated successfully.
Dec 13 01:54:59.981305 kubelet[2456]: E1213 01:54:59.981260    2456 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:55:00.431438 env[1419]: time="2024-12-13T01:55:00.431383366Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:55:00.471250 env[1419]: time="2024-12-13T01:55:00.471199614Z" level=info msg="CreateContainer within sandbox \"56ea7aab4cc4f2667a36c8aeaa10e925c225c299fa6c4a0a885219a7f6a405b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f\""
Dec 13 01:55:00.472001 env[1419]: time="2024-12-13T01:55:00.471969118Z" level=info msg="StartContainer for \"79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f\""
Dec 13 01:55:00.500422 systemd[1]: Started cri-containerd-79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f.scope.
Dec 13 01:55:00.538285 env[1419]: time="2024-12-13T01:55:00.538234031Z" level=info msg="StartContainer for \"79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f\" returns successfully"
Dec 13 01:55:00.661942 systemd[1]: run-containerd-runc-k8s.io-79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f-runc.0SlsLS.mount: Deactivated successfully.
Dec 13 01:55:00.984121 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:55:01.451468 kubelet[2456]: I1213 01:55:01.450783    2456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2vsps" podStartSLOduration=5.450738502 podStartE2EDuration="5.450738502s" podCreationTimestamp="2024-12-13 01:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:01.4504296 +0000 UTC m=+221.699664584" watchObservedRunningTime="2024-12-13 01:55:01.450738502 +0000 UTC m=+221.699973486"
Dec 13 01:55:03.680595 systemd-networkd[1562]: lxc_health: Link UP
Dec 13 01:55:03.705106 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:55:03.704984 systemd-networkd[1562]: lxc_health: Gained carrier
Dec 13 01:55:04.047438 kubelet[2456]: I1213 01:55:04.047407    2456 setters.go:568] "Node became not ready" node="ci-3510.3.6-a-f5ec44d98c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:55:04Z","lastTransitionTime":"2024-12-13T01:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:55:04.529658 systemd[1]: run-containerd-runc-k8s.io-79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f-runc.gnz7sF.mount: Deactivated successfully.
Dec 13 01:55:04.789222 systemd-networkd[1562]: lxc_health: Gained IPv6LL
Dec 13 01:55:06.757302 systemd[1]: run-containerd-runc-k8s.io-79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f-runc.mXRKnv.mount: Deactivated successfully.
Dec 13 01:55:08.883590 systemd[1]: run-containerd-runc-k8s.io-79679d35b80a8dd31169bdd79bc7ef3dc6bc46a9b5788a01e0e60e70cbdb901f-runc.PiXS7N.mount: Deactivated successfully.
Dec 13 01:55:09.061601 sshd[4205]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:09.064606 systemd[1]: sshd@23-10.200.8.24:22-10.200.16.10:34718.service: Deactivated successfully.
Dec 13 01:55:09.065504 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:55:09.066246 systemd-logind[1402]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:55:09.067061 systemd-logind[1402]: Removed session 26.