Dec 13 14:24:03.139535 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:24:03.139574 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:24:03.139592 kernel: BIOS-provided physical RAM map:
Dec 13 14:24:03.139605 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 14:24:03.139618 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 14:24:03.139631 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 14:24:03.139649 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 14:24:03.139663 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 14:24:03.139677 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable
Dec 13 14:24:03.139690 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data
Dec 13 14:24:03.139704 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable
Dec 13 14:24:03.139718 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 13 14:24:03.139731 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 14:24:03.139745 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 14:24:03.140045 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 14:24:03.140061 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 14:24:03.140076 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 14:24:03.140091 kernel: NX (Execute Disable) protection: active
Dec 13 14:24:03.140106 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:24:03.140256 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 
Dec 13 14:24:03.140271 kernel: random: crng init done
Dec 13 14:24:03.140286 kernel: SMBIOS 2.4 present.
Dec 13 14:24:03.140303 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 14:24:03.140325 kernel: Hypervisor detected: KVM
Dec 13 14:24:03.140469 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:24:03.140484 kernel: kvm-clock: cpu 0, msr 12d19a001, primary cpu clock
Dec 13 14:24:03.140498 kernel: kvm-clock: using sched offset of 13515365746 cycles
Dec 13 14:24:03.140514 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:24:03.140529 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 14:24:03.140681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:24:03.140701 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:24:03.140716 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 14:24:03.140735 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 14:24:03.140842 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 14:24:03.140859 kernel: Using GB pages for direct mapping
Dec 13 14:24:03.140874 kernel: Secure boot disabled
Dec 13 14:24:03.140890 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:24:03.140905 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 14:24:03.140920 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001      01000013)
Dec 13 14:24:03.140936 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 14:24:03.140962 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 14:24:03.140978 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 14:24:03.140995 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 14:24:03.141011 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE          00000001 GOOG 00000001)
Dec 13 14:24:03.141028 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 14:24:03.141045 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 14:24:03.141064 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 14:24:03.141081 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 14:24:03.141097 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 14:24:03.141114 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 14:24:03.141130 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 14:24:03.141147 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 14:24:03.141163 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 14:24:03.141179 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 14:24:03.141196 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 14:24:03.141215 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 14:24:03.141231 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 14:24:03.141248 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:24:03.141264 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:24:03.141280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 14:24:03.141297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 14:24:03.141314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 14:24:03.141337 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 14:24:03.141354 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 14:24:03.141373 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Dec 13 14:24:03.141390 kernel: Zone ranges:
Dec 13 14:24:03.141405 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:24:03.141419 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:24:03.141434 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 14:24:03.141449 kernel: Movable zone start for each node
Dec 13 14:24:03.141465 kernel: Early memory node ranges
Dec 13 14:24:03.141479 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 14:24:03.141495 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 14:24:03.141515 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd276fff]
Dec 13 14:24:03.141531 kernel:   node   0: [mem 0x00000000bd281000-0x00000000bf8ecfff]
Dec 13 14:24:03.141548 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 14:24:03.141564 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 14:24:03.141580 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 14:24:03.141596 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:24:03.141613 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 14:24:03.141630 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 14:24:03.141646 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 13 14:24:03.141666 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 14:24:03.141683 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 14:24:03.141699 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:24:03.141715 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:24:03.141731 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:24:03.141799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:24:03.141817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:24:03.141834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:24:03.141850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:24:03.141871 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:24:03.141888 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:24:03.141904 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 14:24:03.141921 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:24:03.141938 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:24:03.141955 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:24:03.141972 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:24:03.141989 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:24:03.142004 kernel: pcpu-alloc: [0] 0 1 
Dec 13 14:24:03.142024 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:24:03.142041 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:24:03.142057 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1932270
Dec 13 14:24:03.142074 kernel: Policy zone: Normal
Dec 13 14:24:03.142092 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:24:03.142109 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:24:03.142125 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 14:24:03.142142 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:24:03.142159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:24:03.142180 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 344876K reserved, 0K cma-reserved)
Dec 13 14:24:03.142197 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:24:03.142213 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:24:03.142230 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:24:03.142247 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:24:03.142263 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:24:03.142280 kernel: rcu:         RCU event tracing is enabled.
Dec 13 14:24:03.142296 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:24:03.142324 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 14:24:03.142355 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 14:24:03.142372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:24:03.142393 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:24:03.142411 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:24:03.142428 kernel: Console: colour dummy device 80x25
Dec 13 14:24:03.142446 kernel: printk: console [ttyS0] enabled
Dec 13 14:24:03.142464 kernel: ACPI: Core revision 20210730
Dec 13 14:24:03.142481 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:24:03.142499 kernel: x2apic enabled
Dec 13 14:24:03.142520 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:24:03.142538 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 14:24:03.142556 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 14:24:03.142573 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 14:24:03.142591 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 14:24:03.142608 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 14:24:03.142626 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:24:03.142647 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 14:24:03.142664 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 14:24:03.142682 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 14:24:03.142700 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:24:03.142718 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:24:03.142735 kernel: RETBleed: Mitigation: IBRS
Dec 13 14:24:03.142766 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:24:03.142784 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Dec 13 14:24:03.142802 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:24:03.142824 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 14:24:03.142841 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:24:03.142858 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:24:03.142876 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:24:03.142894 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:24:03.142911 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 14:24:03.142929 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:24:03.142947 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:24:03.142965 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:24:03.142986 kernel: LSM: Security Framework initializing
Dec 13 14:24:03.143003 kernel: SELinux:  Initializing.
Dec 13 14:24:03.143022 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:24:03.143040 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:24:03.143057 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 14:24:03.143075 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 14:24:03.143093 kernel: signal: max sigframe size: 1776
Dec 13 14:24:03.143111 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:24:03.143128 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:24:03.143148 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:24:03.143166 kernel: x86: Booting SMP configuration:
Dec 13 14:24:03.143184 kernel: .... node  #0, CPUs:      #1
Dec 13 14:24:03.143201 kernel: kvm-clock: cpu 1, msr 12d19a041, secondary cpu clock
Dec 13 14:24:03.143220 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:24:03.143239 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:24:03.143257 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:24:03.143274 kernel: smpboot: Max logical packages: 1
Dec 13 14:24:03.143295 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 14:24:03.143312 kernel: devtmpfs: initialized
Dec 13 14:24:03.143336 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:24:03.143354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 14:24:03.143372 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:24:03.143389 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:24:03.143406 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:24:03.143424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:24:03.143441 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:24:03.143463 kernel: audit: type=2000 audit(1734099841.455:1): state=initialized audit_enabled=0 res=1
Dec 13 14:24:03.143481 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:24:03.143498 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:24:03.143516 kernel: cpuidle: using governor menu
Dec 13 14:24:03.143533 kernel: ACPI: bus type PCI registered
Dec 13 14:24:03.143551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:24:03.143568 kernel: dca service started, version 1.12.1
Dec 13 14:24:03.143585 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:24:03.143603 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:24:03.143625 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:24:03.143643 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:24:03.143660 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:24:03.143678 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:24:03.143695 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:24:03.143713 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:24:03.143731 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:24:03.146394 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:24:03.146425 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:24:03.146450 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:24:03.146609 kernel: ACPI: Interpreter enabled
Dec 13 14:24:03.146628 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:24:03.146646 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:24:03.146665 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:24:03.146828 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:24:03.146848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:24:03.147342 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:24:03.147516 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:24:03.147539 kernel: PCI host bridge to bus 0000:00
Dec 13 14:24:03.147690 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 14:24:03.147890 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 14:24:03.148878 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:24:03.149055 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 14:24:03.149214 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:24:03.149434 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:24:03.149632 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 14:24:03.149833 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 14:24:03.150014 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:24:03.150202 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 14:24:03.150381 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Dec 13 14:24:03.150578 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 14:24:03.150786 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:24:03.150967 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Dec 13 14:24:03.151146 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 14:24:03.151340 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:24:03.151518 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Dec 13 14:24:03.151707 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 14:24:03.151735 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:24:03.151768 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:24:03.151785 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:24:03.151802 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:24:03.151819 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:24:03.151837 kernel: iommu: Default domain type: Translated 
Dec 13 14:24:03.151856 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 13 14:24:03.151874 kernel: vgaarb: loaded
Dec 13 14:24:03.151892 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:24:03.151914 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:24:03.151933 kernel: PTP clock support registered
Dec 13 14:24:03.151951 kernel: Registered efivars operations
Dec 13 14:24:03.151968 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:24:03.151986 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:24:03.152003 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 14:24:03.152022 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 14:24:03.152039 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff]
Dec 13 14:24:03.152057 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 14:24:03.152077 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 14:24:03.152094 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:24:03.152112 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:24:03.152130 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:24:03.152148 kernel: pnp: PnP ACPI init
Dec 13 14:24:03.152166 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 14:24:03.152184 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:24:03.152202 kernel: NET: Registered PF_INET protocol family
Dec 13 14:24:03.152220 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:24:03.152241 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 14:24:03.152259 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:24:03.152277 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:24:03.152295 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 14:24:03.152313 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 14:24:03.152330 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:24:03.152347 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:24:03.152364 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:24:03.152386 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:24:03.152554 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 14:24:03.152726 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 14:24:03.152902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:24:03.157032 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 14:24:03.157215 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:24:03.157239 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:24:03.157263 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 14:24:03.157280 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 14:24:03.157297 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:24:03.157313 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 14:24:03.157330 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:24:03.157346 kernel: Initialise system trusted keyrings
Dec 13 14:24:03.157363 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 14:24:03.157380 kernel: Key type asymmetric registered
Dec 13 14:24:03.157396 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:24:03.157415 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:24:03.157431 kernel: io scheduler mq-deadline registered
Dec 13 14:24:03.157448 kernel: io scheduler kyber registered
Dec 13 14:24:03.157464 kernel: io scheduler bfq registered
Dec 13 14:24:03.157481 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:24:03.157499 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 14:24:03.157681 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 14:24:03.157702 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 14:24:03.157891 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 14:24:03.157916 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 14:24:03.158082 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 14:24:03.158103 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:24:03.158119 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:24:03.158135 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 14:24:03.158152 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 14:24:03.158167 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 14:24:03.158348 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 14:24:03.158375 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:24:03.158393 kernel: i8042: Warning: Keylock active
Dec 13 14:24:03.158409 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:24:03.158427 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:24:03.158614 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:24:03.158874 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:24:03.159034 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:24:02 UTC (1734099842)
Dec 13 14:24:03.159187 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:24:03.159213 kernel: intel_pstate: CPU model not supported
Dec 13 14:24:03.159228 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:24:03.159244 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:24:03.159262 kernel: Segment Routing with IPv6
Dec 13 14:24:03.159278 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:24:03.159296 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:24:03.159313 kernel: Key type dns_resolver registered
Dec 13 14:24:03.159329 kernel: IPI shorthand broadcast: enabled
Dec 13 14:24:03.159346 kernel: sched_clock: Marking stable (748914366, 152898107)->(969813862, -68001389)
Dec 13 14:24:03.159367 kernel: registered taskstats version 1
Dec 13 14:24:03.159383 kernel: Loading compiled-in X.509 certificates
Dec 13 14:24:03.159398 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:24:03.159414 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:24:03.159429 kernel: Key type .fscrypt registered
Dec 13 14:24:03.159444 kernel: Key type fscrypt-provisioning registered
Dec 13 14:24:03.159462 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:24:03.159478 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:24:03.159495 kernel: ima: No architecture policies found
Dec 13 14:24:03.159515 kernel: clk: Disabling unused clocks
Dec 13 14:24:03.159531 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:24:03.159549 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:24:03.159574 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:24:03.159591 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:24:03.159605 kernel: Run /init as init process
Dec 13 14:24:03.159627 kernel:   with arguments:
Dec 13 14:24:03.159648 kernel:     /init
Dec 13 14:24:03.159663 kernel:   with environment:
Dec 13 14:24:03.159681 kernel:     HOME=/
Dec 13 14:24:03.159698 kernel:     TERM=linux
Dec 13 14:24:03.159714 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:24:03.159736 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:24:03.159773 systemd[1]: Detected virtualization kvm.
Dec 13 14:24:03.159791 systemd[1]: Detected architecture x86-64.
Dec 13 14:24:03.159808 systemd[1]: Running in initrd.
Dec 13 14:24:03.159827 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:24:03.159843 systemd[1]: Hostname set to <localhost>.
Dec 13 14:24:03.159861 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:24:03.159876 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:24:03.159891 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:24:03.159908 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:24:03.159925 systemd[1]: Reached target paths.target.
Dec 13 14:24:03.159942 systemd[1]: Reached target slices.target.
Dec 13 14:24:03.159963 systemd[1]: Reached target swap.target.
Dec 13 14:24:03.159979 systemd[1]: Reached target timers.target.
Dec 13 14:24:03.159998 systemd[1]: Listening on iscsid.socket.
Dec 13 14:24:03.160015 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:24:03.160033 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:24:03.160051 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:24:03.160069 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:24:03.160086 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:24:03.160107 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:24:03.160125 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:24:03.160163 systemd[1]: Reached target sockets.target.
Dec 13 14:24:03.160185 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:24:03.160204 systemd[1]: Finished network-cleanup.service.
Dec 13 14:24:03.160223 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:24:03.160242 systemd[1]: Starting systemd-journald.service...
Dec 13 14:24:03.160263 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:24:03.160281 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:24:03.160300 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:24:03.160318 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:24:03.160337 kernel: audit: type=1130 audit(1734099843.151:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.160356 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:24:03.160380 systemd-journald[188]: Journal started
Dec 13 14:24:03.160474 systemd-journald[188]: Runtime Journal (/run/log/journal/45a6448b8db2a1dcd037dbd5c167a35a) is 8.0M, max 148.8M, 140.8M free.
Dec 13 14:24:03.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.159660 systemd-modules-load[189]: Inserted module 'overlay'
Dec 13 14:24:03.187611 systemd[1]: Started systemd-journald.service.
Dec 13 14:24:03.187655 kernel: audit: type=1130 audit(1734099843.160:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.187682 kernel: audit: type=1130 audit(1734099843.167:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.187713 kernel: audit: type=1130 audit(1734099843.169:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.169182 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:24:03.172133 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:24:03.180979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:24:03.204309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:24:03.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.213777 kernel: audit: type=1130 audit(1734099843.202:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.225543 systemd-resolved[190]: Positive Trust Anchors:
Dec 13 14:24:03.225566 systemd-resolved[190]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:24:03.225618 systemd-resolved[190]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:24:03.237773 systemd-resolved[190]: Defaulting to hostname 'linux'.
Dec 13 14:24:03.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.239398 systemd[1]: Started systemd-resolved.service.
Dec 13 14:24:03.239874 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:24:03.246866 kernel: audit: type=1130 audit(1734099843.237:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.250159 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:24:03.262581 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:24:03.262636 kernel: audit: type=1130 audit(1734099843.254:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.257364 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:24:03.266557 systemd-modules-load[189]: Inserted module 'br_netfilter'
Dec 13 14:24:03.271880 kernel: Bridge firewalling registered
Dec 13 14:24:03.276154 dracut-cmdline[205]: dracut-dracut-053
Dec 13 14:24:03.280463 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:24:03.301776 kernel: SCSI subsystem initialized
Dec 13 14:24:03.320360 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:24:03.320445 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:24:03.321692 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:24:03.326848 systemd-modules-load[189]: Inserted module 'dm_multipath'
Dec 13 14:24:03.327995 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:24:03.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.341223 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:24:03.349204 kernel: audit: type=1130 audit(1734099843.338:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.359142 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:24:03.369913 kernel: audit: type=1130 audit(1734099843.361:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.382778 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:24:03.403789 kernel: iscsi: registered transport (tcp)
Dec 13 14:24:03.430796 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:24:03.430888 kernel: QLogic iSCSI HBA Driver
Dec 13 14:24:03.475141 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:24:03.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.480580 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:24:03.541847 kernel: raid6: avx2x4   gen() 18448 MB/s
Dec 13 14:24:03.562802 kernel: raid6: avx2x4   xor()  6892 MB/s
Dec 13 14:24:03.583829 kernel: raid6: avx2x2   gen() 17811 MB/s
Dec 13 14:24:03.604797 kernel: raid6: avx2x2   xor() 18419 MB/s
Dec 13 14:24:03.625808 kernel: raid6: avx2x1   gen() 14283 MB/s
Dec 13 14:24:03.646798 kernel: raid6: avx2x1   xor() 16111 MB/s
Dec 13 14:24:03.667799 kernel: raid6: sse2x4   gen() 11062 MB/s
Dec 13 14:24:03.688799 kernel: raid6: sse2x4   xor()  6644 MB/s
Dec 13 14:24:03.709791 kernel: raid6: sse2x2   gen() 12068 MB/s
Dec 13 14:24:03.730805 kernel: raid6: sse2x2   xor()  7429 MB/s
Dec 13 14:24:03.751796 kernel: raid6: sse2x1   gen() 10541 MB/s
Dec 13 14:24:03.777787 kernel: raid6: sse2x1   xor()  5181 MB/s
Dec 13 14:24:03.777832 kernel: raid6: using algorithm avx2x4 gen() 18448 MB/s
Dec 13 14:24:03.777855 kernel: raid6: .... xor() 6892 MB/s, rmw enabled
Dec 13 14:24:03.782863 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:24:03.807793 kernel: xor: automatically using best checksumming function   avx       
Dec 13 14:24:03.920788 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:24:03.932597 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:24:03.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.931000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:24:03.931000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:24:03.934005 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:24:03.951191 systemd-udevd[387]: Using default interface naming scheme 'v252'.
Dec 13 14:24:03.972078 systemd[1]: Started systemd-udevd.service.
Dec 13 14:24:03.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:03.982216 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:24:03.996868 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
Dec 13 14:24:04.035552 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:24:04.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:04.036744 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:24:04.104785 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:24:04.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:04.195016 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:24:04.210773 kernel: scsi host0: Virtio SCSI HBA
Dec 13 14:24:04.288728 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:24:04.288863 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Dec 13 14:24:04.288928 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:24:04.363682 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 14:24:04.424813 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 14:24:04.425056 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 14:24:04.425298 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 14:24:04.425499 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 14:24:04.425653 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:24:04.425669 kernel: GPT:17805311 != 25165823
Dec 13 14:24:04.425683 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:24:04.425696 kernel: GPT:17805311 != 25165823
Dec 13 14:24:04.425709 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:24:04.425722 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:24:04.425736 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 14:24:04.478776 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443)
Dec 13 14:24:04.492279 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:24:04.523577 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:24:04.538374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:24:04.553917 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:24:04.577069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:24:04.595034 systemd[1]: Starting disk-uuid.service...
Dec 13 14:24:04.620098 disk-uuid[508]: Primary Header is updated.
Dec 13 14:24:04.620098 disk-uuid[508]: Secondary Entries is updated.
Dec 13 14:24:04.620098 disk-uuid[508]: Secondary Header is updated.
Dec 13 14:24:04.646871 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:24:04.665781 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:24:04.695789 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:24:05.683680 disk-uuid[509]: The operation has completed successfully.
Dec 13 14:24:05.693016 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:24:05.751970 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:24:05.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:05.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:05.752129 systemd[1]: Finished disk-uuid.service.
Dec 13 14:24:05.762832 systemd[1]: Starting verity-setup.service...
Dec 13 14:24:05.791785 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:24:05.881640 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:24:05.896237 systemd[1]: Finished verity-setup.service.
Dec 13 14:24:05.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:05.897589 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:24:05.999003 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:24:05.999561 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:24:06.007143 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:24:06.054080 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:24:06.054123 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:24:06.054155 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:24:06.008127 systemd[1]: Starting ignition-setup.service...
Dec 13 14:24:06.074907 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:24:06.023019 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:24:06.080737 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:24:06.096139 systemd[1]: Finished ignition-setup.service.
Dec 13 14:24:06.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.097461 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:24:06.166374 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:24:06.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.174000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:24:06.177223 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:24:06.212937 systemd-networkd[684]: lo: Link UP
Dec 13 14:24:06.212950 systemd-networkd[684]: lo: Gained carrier
Dec 13 14:24:06.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.213804 systemd-networkd[684]: Enumeration completed
Dec 13 14:24:06.213956 systemd[1]: Started systemd-networkd.service.
Dec 13 14:24:06.214368 systemd-networkd[684]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:24:06.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.216730 systemd-networkd[684]: eth0: Link UP
Dec 13 14:24:06.297929 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:24:06.297929 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:24:06.297929 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:24:06.297929 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:24:06.297929 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:24:06.297929 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:24:06.297929 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:24:06.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.216739 systemd-networkd[684]: eth0: Gained carrier
Dec 13 14:24:06.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.393155 ignition[614]: Ignition 2.14.0
Dec 13 14:24:06.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.225893 systemd-networkd[684]: eth0: DHCPv4 address 10.128.0.48/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 14:24:06.393170 ignition[614]: Stage: fetch-offline
Dec 13 14:24:06.229039 systemd[1]: Reached target network.target.
Dec 13 14:24:06.393245 ignition[614]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:06.245218 systemd[1]: Starting iscsiuio.service...
Dec 13 14:24:06.393286 ignition[614]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:06.258059 systemd[1]: Started iscsiuio.service.
Dec 13 14:24:06.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.410729 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:06.273688 systemd[1]: Starting iscsid.service...
Dec 13 14:24:06.410995 ignition[614]: parsed url from cmdline: ""
Dec 13 14:24:06.290056 systemd[1]: Started iscsid.service.
Dec 13 14:24:06.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.411003 ignition[614]: no config URL provided
Dec 13 14:24:06.306331 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:24:06.411011 ignition[614]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:24:06.324635 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:24:06.411023 ignition[614]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:24:06.359240 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:24:06.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.411032 ignition[614]: failed to fetch config: resource requires networking
Dec 13 14:24:06.394957 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:24:06.411237 ignition[614]: Ignition finished successfully
Dec 13 14:24:06.411905 systemd[1]: Reached target remote-fs.target.
Dec 13 14:24:06.482734 ignition[708]: Ignition 2.14.0
Dec 13 14:24:06.413206 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:24:06.482743 ignition[708]: Stage: fetch
Dec 13 14:24:06.430428 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:24:06.482909 ignition[708]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:06.455358 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:24:06.482948 ignition[708]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:06.471176 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:24:06.491395 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:06.503614 unknown[708]: fetched base config from "system"
Dec 13 14:24:06.491598 ignition[708]: parsed url from cmdline: ""
Dec 13 14:24:06.503628 unknown[708]: fetched base config from "system"
Dec 13 14:24:06.491604 ignition[708]: no config URL provided
Dec 13 14:24:06.503639 unknown[708]: fetched user config from "gcp"
Dec 13 14:24:06.491611 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:24:06.520380 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:24:06.491622 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:24:06.536290 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:24:06.491658 ignition[708]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 14:24:06.559824 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:24:06.499127 ignition[708]: GET result: OK
Dec 13 14:24:06.575251 systemd[1]: Starting ignition-disks.service...
Dec 13 14:24:06.499241 ignition[708]: parsing config with SHA512: 580c8313aa0229d20df9c19cd0050ba49bd62c1c8cd0318f0edc4bb4219f1d93f09eaa87f9bec5029fa863fcf88a823c5695b5a828083b20c52a0218ad312bca
Dec 13 14:24:06.614383 systemd[1]: Finished ignition-disks.service.
Dec 13 14:24:06.507484 ignition[708]: fetch: fetch complete
Dec 13 14:24:06.635314 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:24:06.507494 ignition[708]: fetch: fetch passed
Dec 13 14:24:06.651069 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:24:06.507566 ignition[708]: Ignition finished successfully
Dec 13 14:24:06.661164 systemd[1]: Reached target local-fs.target.
Dec 13 14:24:06.549586 ignition[714]: Ignition 2.14.0
Dec 13 14:24:06.683066 systemd[1]: Reached target sysinit.target.
Dec 13 14:24:06.549597 ignition[714]: Stage: kargs
Dec 13 14:24:06.690127 systemd[1]: Reached target basic.target.
Dec 13 14:24:06.549738 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:06.711161 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:24:06.549803 ignition[714]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:06.557302 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:06.558589 ignition[714]: kargs: kargs passed
Dec 13 14:24:06.558639 ignition[714]: Ignition finished successfully
Dec 13 14:24:06.587654 ignition[720]: Ignition 2.14.0
Dec 13 14:24:06.587666 ignition[720]: Stage: disks
Dec 13 14:24:06.587859 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:06.587901 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:06.595565 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:06.596952 ignition[720]: disks: disks passed
Dec 13 14:24:06.597120 ignition[720]: Ignition finished successfully
Dec 13 14:24:06.759447 systemd-fsck[728]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 14:24:06.969735 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:24:06.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:06.979137 systemd[1]: Mounting sysroot.mount...
Dec 13 14:24:07.009964 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:24:07.008082 systemd[1]: Mounted sysroot.mount.
Dec 13 14:24:07.021220 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:24:07.041723 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:24:07.058428 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:24:07.058521 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:24:07.058569 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:24:07.079324 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:24:07.103943 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:24:07.153624 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (734)
Dec 13 14:24:07.153665 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:24:07.153688 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:24:07.153709 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:24:07.109150 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:24:07.174923 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:24:07.172351 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:24:07.183921 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:24:07.194883 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:24:07.212903 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:24:07.223871 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:24:07.240232 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:24:07.279958 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 13 14:24:07.280006 kernel: audit: type=1130 audit(1734099847.238:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:07.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:07.241827 systemd[1]: Starting ignition-mount.service...
Dec 13 14:24:07.288063 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:24:07.302059 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:24:07.302220 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:24:07.327906 ignition[800]: INFO     : Ignition 2.14.0
Dec 13 14:24:07.327906 ignition[800]: INFO     : Stage: mount
Dec 13 14:24:07.327906 ignition[800]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:07.327906 ignition[800]: DEBUG    : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:07.424927 kernel: audit: type=1130 audit(1734099847.334:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:07.424984 kernel: audit: type=1130 audit(1734099847.384:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:07.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:07.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:07.334144 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:24:07.438944 ignition[800]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:07.438944 ignition[800]: INFO     : mount: mount passed
Dec 13 14:24:07.438944 ignition[800]: INFO     : Ignition finished successfully
Dec 13 14:24:07.483291 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (810)
Dec 13 14:24:07.483332 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:24:07.483357 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:24:07.483381 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:24:07.338900 systemd[1]: Finished ignition-mount.service.
Dec 13 14:24:07.509950 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:24:07.387370 systemd[1]: Starting ignition-files.service...
Dec 13 14:24:07.436372 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:24:07.506171 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:24:07.540140 ignition[829]: INFO     : Ignition 2.14.0
Dec 13 14:24:07.540140 ignition[829]: INFO     : Stage: files
Dec 13 14:24:07.540140 ignition[829]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:07.540140 ignition[829]: DEBUG    : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:07.595925 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (829)
Dec 13 14:24:07.559921 unknown[829]: wrote ssh authorized keys file for user: core
Dec 13 14:24:07.605015 ignition[829]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:07.605015 ignition[829]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 14:24:07.605015 ignition[829]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/hosts"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(4): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1723058309"
Dec 13 14:24:07.605015 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1723058309": device or resource busy
Dec 13 14:24:07.605015 ignition[829]: ERROR    : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1723058309", trying btrfs: device or resource busy
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(5): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1723058309"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1723058309"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(6): [started]  unmounting "/mnt/oem1723058309"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1723058309"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Dec 13 14:24:07.605015 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:24:07.868979 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:24:07.734959 systemd-networkd[684]: eth0: Gained IPv6LL
Dec 13 14:24:08.720428 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Dec 13 14:24:08.877971 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:24:08.895920 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:24:08.895920 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 14:24:09.237598 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Dec 13 14:24:09.391906 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem681610475"
Dec 13 14:24:09.406913 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem681610475": device or resource busy
Dec 13 14:24:09.406913 ignition[829]: ERROR    : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem681610475", trying btrfs: device or resource busy
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(b): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem681610475"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem681610475"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(c): [started]  unmounting "/mnt/oem681610475"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem681610475"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:24:09.406913 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(11): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(12): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): [started]  writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): op(14): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1854214117"
Dec 13 14:24:09.652953 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1854214117": device or resource busy
Dec 13 14:24:09.652953 ignition[829]: ERROR    : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1854214117", trying btrfs: device or resource busy
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): op(15): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1854214117"
Dec 13 14:24:09.652953 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1854214117"
Dec 13 14:24:09.423198 systemd[1]: mnt-oem1854214117.mount: Deactivated successfully.
Dec 13 14:24:09.910952 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): op(16): [started]  unmounting "/mnt/oem1854214117"
Dec 13 14:24:09.910952 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem1854214117"
Dec 13 14:24:09.910952 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 14:24:09.910952 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(17): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:24:09.910952 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:24:09.910952 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Dec 13 14:24:10.041217 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:24:10.041217 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): [started]  writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): op(19): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2555444694"
Dec 13 14:24:10.077048 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem2555444694": device or resource busy
Dec 13 14:24:10.077048 ignition[829]: ERROR    : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2555444694", trying btrfs: device or resource busy
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2555444694"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2555444694"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started]  unmounting "/mnt/oem2555444694"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem2555444694"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1c): [started]  processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1d): [started]  processing unit "oem-gce.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1d): [finished] processing unit "oem-gce.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1e): [started]  processing unit "oem-gce-enable-oslogin.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1f): [started]  processing unit "prepare-helm.service"
Dec 13 14:24:10.077048 ignition[829]: INFO     : files: op(1f): op(20): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:24:10.562977 kernel: audit: type=1130 audit(1734099850.083:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.563033 kernel: audit: type=1130 audit(1734099850.181:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.563060 kernel: audit: type=1130 audit(1734099850.233:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.563084 kernel: audit: type=1131 audit(1734099850.233:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.563106 kernel: audit: type=1130 audit(1734099850.334:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.563137 kernel: audit: type=1131 audit(1734099850.334:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.563152 kernel: audit: type=1130 audit(1734099850.489:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.070478 systemd[1]: Finished ignition-files.service.
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(1f): [finished] processing unit "prepare-helm.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(21): [started]  setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(21): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(22): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(22): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(23): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(24): [started]  setting preset to enabled for "oem-gce.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: op(24): [finished] setting preset to enabled for "oem-gce.service"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: createResultFile: createFiles: op(25): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:24:10.577930 ignition[829]: INFO     : files: files passed
Dec 13 14:24:10.577930 ignition[829]: INFO     : Ignition finished successfully
Dec 13 14:24:10.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.095647 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:24:10.129948 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:24:10.874958 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:24:10.131111 systemd[1]: Starting ignition-quench.service...
Dec 13 14:24:10.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.154282 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:24:10.206177 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:24:10.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.206292 systemd[1]: Finished ignition-quench.service.
Dec 13 14:24:10.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.235180 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:24:10.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.293217 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:24:10.330557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:24:10.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.008187 ignition[867]: INFO     : Ignition 2.14.0
Dec 13 14:24:11.008187 ignition[867]: INFO     : Stage: umount
Dec 13 14:24:11.008187 ignition[867]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:11.008187 ignition[867]: DEBUG    : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:24:11.008187 ignition[867]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:24:11.008187 ignition[867]: INFO     : umount: umount passed
Dec 13 14:24:11.008187 ignition[867]: INFO     : Ignition finished successfully
Dec 13 14:24:11.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.330695 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:24:11.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.336405 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:24:11.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.397183 systemd[1]: Reached target initrd.target.
Dec 13 14:24:11.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.437155 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:24:10.438514 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:24:10.479950 systemd[1]: mnt-oem2555444694.mount: Deactivated successfully.
Dec 13 14:24:11.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.480518 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:24:10.492596 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:24:10.555074 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:24:10.571275 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:24:10.578295 systemd[1]: Stopped target timers.target.
Dec 13 14:24:10.619252 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:24:10.619440 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:24:10.633490 systemd[1]: Stopped target initrd.target.
Dec 13 14:24:10.671276 systemd[1]: Stopped target basic.target.
Dec 13 14:24:10.684316 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:24:11.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.725251 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:24:11.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.746245 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:24:10.759450 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:24:10.782399 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:24:10.804326 systemd[1]: Stopped target sysinit.target.
Dec 13 14:24:11.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.818308 systemd[1]: Stopped target local-fs.target.
Dec 13 14:24:11.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.840245 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:24:11.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.407000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:24:10.858215 systemd[1]: Stopped target swap.target.
Dec 13 14:24:10.882203 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:24:10.882419 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:24:11.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.904326 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:24:11.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.920136 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:24:11.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.920326 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:24:10.936332 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:24:11.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.936517 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:24:10.954270 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:24:10.954444 systemd[1]: Stopped ignition-files.service.
Dec 13 14:24:11.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.970705 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:24:11.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.983934 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:24:11.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.984204 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:24:11.001370 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:24:11.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.015951 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:24:11.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.016232 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:24:11.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.023318 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:24:11.023489 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:24:11.057074 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:24:11.058258 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:24:11.058370 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:24:11.734235 systemd-journald[188]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:24:11.081556 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:24:11.742923 iscsid[693]: iscsid shutting down.
Dec 13 14:24:11.081671 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:24:11.092823 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:24:11.092973 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:24:11.119133 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:24:11.119206 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:24:11.135288 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:24:11.135359 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:24:11.150108 systemd[1]: Stopped target network.target.
Dec 13 14:24:11.165023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:24:11.165124 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:24:11.190106 systemd[1]: Stopped target paths.target.
Dec 13 14:24:11.205029 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:24:11.209854 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:24:11.212048 systemd[1]: Stopped target slices.target.
Dec 13 14:24:11.239052 systemd[1]: Stopped target sockets.target.
Dec 13 14:24:11.260141 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:24:11.260187 systemd[1]: Closed iscsid.socket.
Dec 13 14:24:11.268185 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:24:11.268232 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:24:11.294083 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:24:11.294164 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:24:11.309107 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:24:11.309178 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:24:11.326326 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:24:11.329872 systemd-networkd[684]: eth0: DHCPv6 lease lost
Dec 13 14:24:11.341192 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:24:11.363635 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:24:11.363782 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:24:11.379764 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:24:11.379940 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:24:11.394819 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:24:11.394943 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:24:11.410321 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:24:11.410364 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:24:11.424949 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:24:11.431086 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:24:11.431165 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:24:11.453161 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:24:11.453237 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:24:11.466464 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:24:11.466555 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:24:11.481195 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:24:11.497591 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:24:11.498283 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:24:11.498435 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:24:11.512457 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:24:11.512544 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:24:11.530115 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:24:11.530174 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:24:11.547100 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:24:11.547182 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:24:11.563141 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:24:11.563217 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:24:11.578153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:24:11.578226 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:24:11.594276 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:24:11.610916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:24:11.611058 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:24:11.626742 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:24:11.626919 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:24:11.641291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:24:11.641404 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:24:11.660257 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:24:11.678033 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:24:11.702479 systemd[1]: Switching root.
Dec 13 14:24:11.745798 systemd-journald[188]: Journal stopped
Dec 13 14:24:16.501635 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 14:24:16.501785 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 14:24:16.501944 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:24:16.501978 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 14:24:16.502007 kernel: SELinux:  policy capability open_perms=1
Dec 13 14:24:16.502030 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 14:24:16.502061 kernel: SELinux:  policy capability always_check_network=0
Dec 13 14:24:16.502084 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 14:24:16.502106 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 14:24:16.502128 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 14:24:16.502151 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 14:24:16.502176 systemd[1]: Successfully loaded SELinux policy in 110.557ms.
Dec 13 14:24:16.502229 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.531ms.
Dec 13 14:24:16.502256 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:24:16.502280 systemd[1]: Detected virtualization kvm.
Dec 13 14:24:16.502304 systemd[1]: Detected architecture x86-64.
Dec 13 14:24:16.502327 systemd[1]: Detected first boot.
Dec 13 14:24:16.502353 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:24:16.502379 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:24:16.502400 kernel: kauditd_printk_skb: 40 callbacks suppressed
Dec 13 14:24:16.502434 kernel: audit: type=1400 audit(1734099852.379:84): avc:  denied  { associate } for  pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:24:16.502473 kernel: audit: type=1300 audit(1734099852.379:84): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:16.502502 kernel: audit: type=1327 audit(1734099852.379:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:24:16.502531 kernel: audit: type=1400 audit(1734099852.393:85): avc:  denied  { associate } for  pid=900 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:24:16.502583 kernel: audit: type=1300 audit(1734099852.393:85): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000024105 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:16.502611 kernel: audit: type=1307 audit(1734099852.393:85): cwd="/"
Dec 13 14:24:16.502644 kernel: audit: type=1302 audit(1734099852.393:85): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:16.502671 kernel: audit: type=1302 audit(1734099852.393:85): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:16.502699 kernel: audit: type=1327 audit(1734099852.393:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:24:16.502730 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:24:16.502818 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:24:16.502850 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:24:16.502881 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:24:16.502909 kernel: audit: type=1334 audit(1734099855.632:86): prog-id=12 op=LOAD
Dec 13 14:24:16.502932 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:24:16.502957 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:24:16.502980 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:24:16.503004 systemd[1]: Stopped iscsid.service.
Dec 13 14:24:16.503027 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:24:16.503050 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:24:16.503074 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:24:16.503102 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:24:16.503126 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:24:16.503156 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:24:16.503181 systemd[1]: Created slice system-getty.slice.
Dec 13 14:24:16.503205 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:24:16.503228 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:24:16.503252 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:24:16.503276 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:24:16.503303 systemd[1]: Created slice user.slice.
Dec 13 14:24:16.503328 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:24:16.503354 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:24:16.503378 systemd[1]: Set up automount boot.automount.
Dec 13 14:24:16.503402 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:24:16.503427 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:24:16.503451 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:24:16.503475 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:24:16.503500 systemd[1]: Reached target integritysetup.target.
Dec 13 14:24:16.503527 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:24:16.503562 systemd[1]: Reached target remote-fs.target.
Dec 13 14:24:16.503587 systemd[1]: Reached target slices.target.
Dec 13 14:24:16.503611 systemd[1]: Reached target swap.target.
Dec 13 14:24:16.503634 systemd[1]: Reached target torcx.target.
Dec 13 14:24:16.503658 systemd[1]: Reached target veritysetup.target.
Dec 13 14:24:16.503683 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:24:16.503707 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:24:16.503731 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:24:16.505217 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:24:16.505264 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:24:16.505302 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:24:16.505326 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:24:16.505349 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:24:16.505372 systemd[1]: Mounting media.mount...
Dec 13 14:24:16.506439 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:16.506484 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:24:16.506510 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:24:16.506539 systemd[1]: Mounting tmp.mount...
Dec 13 14:24:16.506563 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:24:16.506588 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:24:16.506613 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:24:16.506638 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:24:16.506661 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:24:16.506685 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:24:16.506709 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:24:16.506738 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:24:16.506807 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:24:16.506836 kernel: fuse: init (API version 7.34)
Dec 13 14:24:16.506860 kernel: loop: module loaded
Dec 13 14:24:16.506883 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:24:16.506908 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:24:16.506931 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:24:16.506955 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:24:16.506979 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:24:16.507003 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:24:16.507029 systemd[1]: Starting systemd-journald.service...
Dec 13 14:24:16.507054 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:24:16.507077 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:24:16.507100 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:24:16.507130 systemd-journald[991]: Journal started
Dec 13 14:24:16.507223 systemd-journald[991]: Runtime Journal (/run/log/journal/45a6448b8db2a1dcd037dbd5c167a35a) is 8.0M, max 148.8M, 140.8M free.
Dec 13 14:24:11.744000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:24:12.064000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:24:12.217000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:24:12.217000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:24:12.217000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:24:12.217000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:24:12.217000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:24:12.218000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:24:12.379000 audit[900]: AVC avc:  denied  { associate } for  pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:24:12.379000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:12.379000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:24:12.393000 audit[900]: AVC avc:  denied  { associate } for  pid=900 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:24:12.393000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000024105 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:12.393000 audit: CWD cwd="/"
Dec 13 14:24:12.393000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:12.393000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:12.393000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:24:15.632000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:24:15.632000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:24:15.639000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:24:15.639000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:24:15.639000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:24:15.639000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:24:15.640000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:24:15.641000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:24:15.641000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:24:15.641000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:24:15.641000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:24:15.641000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:24:15.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:15.658000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:24:15.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:15.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:15.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:15.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.452000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:24:16.452000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:24:16.452000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:24:16.452000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:24:16.452000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:24:16.495000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:24:16.495000 audit[991]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff8f4d3170 a2=4000 a3=7fff8f4d320c items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:16.495000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:24:15.631933 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:24:12.374650 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:24:15.644639 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:24:12.376537 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:24:12.376573 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:24:12.376626 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:24:12.376646 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:24:12.376705 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:24:12.376728 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:24:12.377009 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:24:12.377067 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:24:12.377084 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:24:12.379908 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:24:12.379975 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:24:12.380012 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:24:12.380041 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:24:12.380072 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:24:12.380099 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:24:14.991462 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:14Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:24:14.991770 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:14Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:24:14.991955 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:14Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:24:14.992193 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:14Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:24:14.992255 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:14Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:24:14.992324 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T14:24:14Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:24:16.526851 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:24:16.540782 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:24:16.547100 systemd[1]: Stopped verity-setup.service.
Dec 13 14:24:16.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.566779 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:16.576806 systemd[1]: Started systemd-journald.service.
Dec 13 14:24:16.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.586258 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:24:16.594133 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:24:16.601115 systemd[1]: Mounted media.mount.
Dec 13 14:24:16.608120 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:24:16.617085 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:24:16.626075 systemd[1]: Mounted tmp.mount.
Dec 13 14:24:16.633241 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:24:16.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.642331 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:24:16.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.651348 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:24:16.651581 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:24:16.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.660393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:24:16.660618 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:24:16.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.669359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:24:16.669587 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:24:16.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.678364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:24:16.678614 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:24:16.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.687361 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:24:16.687599 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:24:16.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.696361 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:24:16.696587 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:24:16.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.705378 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:24:16.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.714364 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:24:16.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.723345 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:24:16.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.732368 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:24:16.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.741826 systemd[1]: Reached target network-pre.target.
Dec 13 14:24:16.751573 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:24:16.761466 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:24:16.768936 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:24:16.772661 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:24:16.782028 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:24:16.788787 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:24:16.791498 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:24:16.798953 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:24:16.800822 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:24:16.809967 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:24:16.814586 systemd-journald[991]: Time spent on flushing to /var/log/journal/45a6448b8db2a1dcd037dbd5c167a35a is 53.222ms for 1161 entries.
Dec 13 14:24:16.814586 systemd-journald[991]: System Journal (/var/log/journal/45a6448b8db2a1dcd037dbd5c167a35a) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:24:16.920293 systemd-journald[991]: Received client request to flush runtime journal.
Dec 13 14:24:16.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:16.825843 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:24:16.837554 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:24:16.923456 udevadm[1005]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:24:16.847301 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:24:16.857269 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:24:16.867399 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:24:16.879540 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:24:16.897031 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:24:16.922479 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:24:16.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.501588 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:24:17.538110 kernel: kauditd_printk_skb: 53 callbacks suppressed
Dec 13 14:24:17.538280 kernel: audit: type=1130 audit(1734099857.508:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.513000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:24:17.539310 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:24:17.545597 kernel: audit: type=1334 audit(1734099857.513:139): prog-id=21 op=LOAD
Dec 13 14:24:17.545688 kernel: audit: type=1334 audit(1734099857.536:140): prog-id=22 op=LOAD
Dec 13 14:24:17.545732 kernel: audit: type=1334 audit(1734099857.536:141): prog-id=7 op=UNLOAD
Dec 13 14:24:17.545842 kernel: audit: type=1334 audit(1734099857.536:142): prog-id=8 op=UNLOAD
Dec 13 14:24:17.536000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:24:17.536000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:24:17.536000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:24:17.586215 systemd-udevd[1009]: Using default interface naming scheme 'v252'.
Dec 13 14:24:17.641053 systemd[1]: Started systemd-udevd.service.
Dec 13 14:24:17.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.672784 kernel: audit: type=1130 audit(1734099857.648:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.679009 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:24:17.676000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:24:17.692777 kernel: audit: type=1334 audit(1734099857.676:144): prog-id=23 op=LOAD
Dec 13 14:24:17.703000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:24:17.712794 kernel: audit: type=1334 audit(1734099857.703:145): prog-id=24 op=LOAD
Dec 13 14:24:17.711000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:24:17.714848 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:24:17.720794 kernel: audit: type=1334 audit(1734099857.711:146): prog-id=25 op=LOAD
Dec 13 14:24:17.730279 kernel: audit: type=1334 audit(1734099857.711:147): prog-id=26 op=LOAD
Dec 13 14:24:17.711000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:24:17.800210 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:24:17.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.809163 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:24:17.853775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:24:17.880000 audit[1016]: AVC avc:  denied  { confidentiality } for  pid=1016 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:24:17.923782 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:24:17.880000 audit[1016]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56166f34ec80 a1=337fc a2=7f304a970bc5 a3=5 items=110 ppid=1009 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:17.880000 audit: CWD cwd="/"
Dec 13 14:24:17.880000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=1 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=2 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=3 name=(null) inode=12985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=4 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=5 name=(null) inode=12986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=6 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=7 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=8 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=9 name=(null) inode=12988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=10 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=11 name=(null) inode=12989 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=12 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=13 name=(null) inode=12990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=14 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=15 name=(null) inode=12991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=16 name=(null) inode=12987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=17 name=(null) inode=12992 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=18 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=19 name=(null) inode=12993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=20 name=(null) inode=12993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=21 name=(null) inode=12994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=22 name=(null) inode=12993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=23 name=(null) inode=12995 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=24 name=(null) inode=12993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=25 name=(null) inode=12996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=26 name=(null) inode=12993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=27 name=(null) inode=12997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=28 name=(null) inode=12993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=29 name=(null) inode=12998 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=30 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=31 name=(null) inode=12999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=32 name=(null) inode=12999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=33 name=(null) inode=13000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=34 name=(null) inode=12999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=35 name=(null) inode=13001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=36 name=(null) inode=12999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=37 name=(null) inode=13002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=38 name=(null) inode=12999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=39 name=(null) inode=13003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=40 name=(null) inode=12999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=41 name=(null) inode=13004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=42 name=(null) inode=12984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=43 name=(null) inode=13005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=44 name=(null) inode=13005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=45 name=(null) inode=13006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=46 name=(null) inode=13005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=47 name=(null) inode=13007 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=48 name=(null) inode=13005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=49 name=(null) inode=13008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=50 name=(null) inode=13005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=51 name=(null) inode=13009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=52 name=(null) inode=13005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=53 name=(null) inode=13010 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=55 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=56 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=57 name=(null) inode=13012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=58 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=59 name=(null) inode=13013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=60 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=61 name=(null) inode=13014 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=62 name=(null) inode=13014 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=63 name=(null) inode=13015 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=64 name=(null) inode=13014 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=65 name=(null) inode=13016 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=66 name=(null) inode=13014 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=67 name=(null) inode=13017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=68 name=(null) inode=13014 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=69 name=(null) inode=13018 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=70 name=(null) inode=13014 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=71 name=(null) inode=13019 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=72 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=73 name=(null) inode=13020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=74 name=(null) inode=13020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=75 name=(null) inode=13021 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=76 name=(null) inode=13020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=77 name=(null) inode=13022 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=78 name=(null) inode=13020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=79 name=(null) inode=13023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=80 name=(null) inode=13020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=81 name=(null) inode=13024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=82 name=(null) inode=13020 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=83 name=(null) inode=13025 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=84 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=85 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=86 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=87 name=(null) inode=13027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=88 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=89 name=(null) inode=13028 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=90 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=91 name=(null) inode=13029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=92 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=93 name=(null) inode=13030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=94 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=95 name=(null) inode=13031 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=96 name=(null) inode=13011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=97 name=(null) inode=13032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=98 name=(null) inode=13032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=99 name=(null) inode=13033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=100 name=(null) inode=13032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=101 name=(null) inode=13034 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=102 name=(null) inode=13032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=103 name=(null) inode=13035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=104 name=(null) inode=13032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=105 name=(null) inode=13036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=106 name=(null) inode=13032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=107 name=(null) inode=13037 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PATH item=109 name=(null) inode=13038 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:24:17.880000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:24:17.945137 systemd-networkd[1023]: lo: Link UP
Dec 13 14:24:17.945154 systemd-networkd[1023]: lo: Gained carrier
Dec 13 14:24:17.945984 systemd-networkd[1023]: Enumeration completed
Dec 13 14:24:17.946141 systemd[1]: Started systemd-networkd.service.
Dec 13 14:24:17.946784 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:24:17.948814 systemd-networkd[1023]: eth0: Link UP
Dec 13 14:24:17.948830 systemd-networkd[1023]: eth0: Gained carrier
Dec 13 14:24:17.959772 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 13 14:24:17.986935 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 14:24:17.986983 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 14:24:17.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:17.992774 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:24:17.992980 systemd-networkd[1023]: eth0: DHCPv4 address 10.128.0.48/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 14:24:18.019814 kernel: EDAC MC: Ver: 3.0.0
Dec 13 14:24:18.036808 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:24:18.096870 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1010)
Dec 13 14:24:18.117473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:24:18.126327 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:24:18.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.136680 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:24:18.165484 lvm[1046]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:24:18.201125 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:24:18.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.210159 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:24:18.220444 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:24:18.225469 lvm[1047]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:24:18.256172 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:24:18.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.265131 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:24:18.273929 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:24:18.273984 systemd[1]: Reached target local-fs.target.
Dec 13 14:24:18.281916 systemd[1]: Reached target machines.target.
Dec 13 14:24:18.292592 systemd[1]: Starting ldconfig.service...
Dec 13 14:24:18.300768 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:24:18.300875 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:24:18.302599 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:24:18.311442 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:24:18.323909 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:24:18.335905 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:24:18.337113 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1049 (bootctl)
Dec 13 14:24:18.339796 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:24:18.354516 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:24:18.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.371505 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:24:18.381507 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:24:18.381745 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:24:18.409810 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 14:24:18.521311 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:24:18.521311 systemd-fsck[1060]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 14:24:18.525867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:24:18.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.538235 systemd[1]: Mounting boot.mount...
Dec 13 14:24:18.574485 systemd[1]: Mounted boot.mount.
Dec 13 14:24:18.607105 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:24:18.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.862198 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:24:18.863047 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:24:18.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:18.895785 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:24:18.928793 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 14:24:18.960145 (sd-sysext)[1065]: Using extensions 'kubernetes'.
Dec 13 14:24:18.960848 (sd-sysext)[1065]: Merged extensions into '/usr'.
Dec 13 14:24:18.985810 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:18.988600 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:24:19.000298 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.002842 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:24:19.004122 ldconfig[1048]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:24:19.011526 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:24:19.020689 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:24:19.027998 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.028241 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:24:19.028497 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:19.033064 systemd[1]: Finished ldconfig.service.
Dec 13 14:24:19.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.040445 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:24:19.048489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:24:19.048737 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:24:19.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.057608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:24:19.057842 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:24:19.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.066605 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:24:19.066840 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:24:19.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.076822 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:24:19.077022 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.078650 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:24:19.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.088787 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:24:19.097449 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:24:19.109647 systemd[1]: Reloading.
Dec 13 14:24:19.129064 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:24:19.138084 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:24:19.153502 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:24:19.223591 /usr/lib/systemd/system-generators/torcx-generator[1092]: time="2024-12-13T14:24:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:24:19.224861 /usr/lib/systemd/system-generators/torcx-generator[1092]: time="2024-12-13T14:24:19Z" level=info msg="torcx already run"
Dec 13 14:24:19.377323 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:24:19.377361 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:24:19.418822 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:24:19.497000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:24:19.497000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:24:19.498000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:24:19.498000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:24:19.498000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:24:19.498000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:24:19.499000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:24:19.499000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:24:19.500000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:24:19.500000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:24:19.500000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:24:19.500000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:24:19.501000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:24:19.501000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:24:19.501000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:24:19.501000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:24:19.501000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:24:19.502000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:24:19.513420 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:24:19.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.528115 systemd[1]: Starting audit-rules.service...
Dec 13 14:24:19.537814 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:24:19.548082 systemd[1]: Starting oem-gce-enable-oslogin.service...
Dec 13 14:24:19.559271 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:24:19.567000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:24:19.570603 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:24:19.576065 systemd-networkd[1023]: eth0: Gained IPv6LL
Dec 13 14:24:19.577000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:24:19.580258 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:24:19.589155 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:24:19.597337 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:24:19.597000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.607040 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Dec 13 14:24:19.607316 systemd[1]: Finished oem-gce-enable-oslogin.service.
Dec 13 14:24:19.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:19.622308 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:19.622862 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.625375 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:24:19.635083 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:24:19.637000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:24:19.637000 audit[1166]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec5a6e040 a2=420 a3=0 items=0 ppid=1136 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:24:19.637000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:24:19.639621 augenrules[1166]: No rules
Dec 13 14:24:19.644046 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:24:19.653218 systemd[1]: Starting oem-gce-enable-oslogin.service...
Dec 13 14:24:19.660952 enable-oslogin[1174]: /etc/pam.d/sshd already exists. Not enabling OS Login
Dec 13 14:24:19.661987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.662361 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:24:19.662669 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:24:19.662939 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:19.666410 systemd[1]: Finished audit-rules.service.
Dec 13 14:24:19.674877 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:24:19.685934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:24:19.686155 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:24:19.695846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:24:19.696069 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:24:19.704838 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:24:19.705049 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:24:19.714801 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Dec 13 14:24:19.715035 systemd[1]: Finished oem-gce-enable-oslogin.service.
Dec 13 14:24:19.728388 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:19.728856 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.731503 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:24:19.741115 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:24:19.747962 systemd-timesyncd[1157]: Contacted time server 169.254.169.254:123 (169.254.169.254).
Dec 13 14:24:19.748042 systemd-timesyncd[1157]: Initial clock synchronization to Fri 2024-12-13 14:24:19.356827 UTC.
Dec 13 14:24:19.751530 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:24:19.756702 systemd-resolved[1154]: Positive Trust Anchors:
Dec 13 14:24:19.756728 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:24:19.756814 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:24:19.760772 systemd[1]: Starting oem-gce-enable-oslogin.service...
Dec 13 14:24:19.765581 enable-oslogin[1179]: /etc/pam.d/sshd already exists. Not enabling OS Login
Dec 13 14:24:19.768957 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.769226 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:24:19.771492 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:24:19.778926 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:24:19.779146 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:24:19.781217 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:24:19.791385 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:24:19.800608 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:24:19.800850 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:24:19.801805 systemd-resolved[1154]: Defaulting to hostname 'linux'.
Dec 13 14:24:19.810413 systemd[1]: Started systemd-resolved.service.
Dec 13 14:24:19.819552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:24:19.819791 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:24:19.828616 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:24:19.828847 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:24:19.838580 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Dec 13 14:24:19.838876 systemd[1]: Finished oem-gce-enable-oslogin.service.
Dec 13 14:24:19.847560 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:24:19.857693 systemd[1]: Reached target network.target.
Dec 13 14:24:19.867099 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:24:19.876108 systemd[1]: Reached target time-set.target.
Dec 13 14:24:19.885042 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:24:19.885281 systemd[1]: Reached target sysinit.target.
Dec 13 14:24:19.894280 systemd[1]: Started motdgen.path.
Dec 13 14:24:19.902155 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:24:19.912382 systemd[1]: Started logrotate.timer.
Dec 13 14:24:19.920435 systemd[1]: Started mdadm.timer.
Dec 13 14:24:19.928139 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:24:19.937058 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:24:19.937255 systemd[1]: Reached target paths.target.
Dec 13 14:24:19.944089 systemd[1]: Reached target timers.target.
Dec 13 14:24:19.951832 systemd[1]: Listening on dbus.socket.
Dec 13 14:24:19.960820 systemd[1]: Starting docker.socket...
Dec 13 14:24:19.972291 systemd[1]: Listening on sshd.socket.
Dec 13 14:24:19.979173 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:24:19.979418 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:24:19.982579 systemd[1]: Listening on docker.socket.
Dec 13 14:24:19.992540 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:24:19.992738 systemd[1]: Reached target sockets.target.
Dec 13 14:24:20.001076 systemd[1]: Reached target basic.target.
Dec 13 14:24:20.008049 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:24:20.008243 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:24:20.010070 systemd[1]: Starting containerd.service...
Dec 13 14:24:20.018568 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:24:20.028963 systemd[1]: Starting dbus.service...
Dec 13 14:24:20.036529 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:24:20.049156 jq[1186]: false
Dec 13 14:24:20.045853 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:24:20.052885 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:24:20.055096 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:24:20.064597 systemd[1]: Starting motdgen.service...
Dec 13 14:24:20.076708 systemd[1]: Starting prepare-helm.service...
Dec 13 14:24:20.087106 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:24:20.095861 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:24:20.105191 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:24:20.112895 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:24:20.113164 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Dec 13 14:24:20.114027 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:24:20.115539 systemd[1]: Starting update-engine.service...
Dec 13 14:24:20.125318 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:24:20.131220 jq[1208]: true
Dec 13 14:24:20.140423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:24:20.140826 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:24:20.141563 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:24:20.142829 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:24:20.143462 extend-filesystems[1187]: Found loop1
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda1
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda2
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda3
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found usr
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda4
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda6
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda7
Dec 13 14:24:20.155117 extend-filesystems[1187]: Found sda9
Dec 13 14:24:20.155117 extend-filesystems[1187]: Checking size of /dev/sda9
Dec 13 14:24:20.376949 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Dec 13 14:24:20.377018 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Dec 13 14:24:20.151653 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:24:20.236018 dbus-daemon[1185]: [system] SELinux support is enabled
Dec 13 14:24:20.377668 extend-filesystems[1187]: Resized partition /dev/sda9
Dec 13 14:24:20.385844 update_engine[1207]: I1213 14:24:20.296802  1207 main.cc:92] Flatcar Update Engine starting
Dec 13 14:24:20.385844 update_engine[1207]: I1213 14:24:20.304586  1207 update_check_scheduler.cc:74] Next update check in 8m17s
Dec 13 14:24:20.153313 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:24:20.386391 tar[1212]: linux-amd64/helm
Dec 13 14:24:20.245997 dbus-daemon[1185]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1023 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:24:20.387083 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:24:20.387083 extend-filesystems[1226]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 14:24:20.387083 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 2
Dec 13 14:24:20.387083 extend-filesystems[1226]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Dec 13 14:24:20.163869 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:24:20.286192 dbus-daemon[1185]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:24:20.433482 extend-filesystems[1187]: Resized filesystem in /dev/sda9
Dec 13 14:24:20.458996 kernel: loop2: detected capacity change from 0 to 2097152
Dec 13 14:24:20.164133 systemd[1]: Finished motdgen.service.
Dec 13 14:24:20.459228 jq[1218]: true
Dec 13 14:24:20.459666 env[1219]: time="2024-12-13T14:24:20.434235373Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:24:20.171792 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:24:20.199212 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:24:20.203299 systemd[1]: Reached target network-online.target.
Dec 13 14:24:20.218264 systemd[1]: Starting kubelet.service...
Dec 13 14:24:20.242672 systemd[1]: Starting oem-gce.service...
Dec 13 14:24:20.461587 bash[1249]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:24:20.254095 systemd[1]: Starting systemd-logind.service...
Dec 13 14:24:20.261234 systemd[1]: Started dbus.service.
Dec 13 14:24:20.462450 mkfs.ext4[1233]: mke2fs 1.46.5 (30-Dec-2021)
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Discarding device blocks: done
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Creating filesystem with 262144 4k blocks and 65536 inodes
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Filesystem UUID: 757046cd-9a93-4dfd-b0cb-1f495f4ffbcf
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Superblock backups stored on blocks:
Dec 13 14:24:20.462450 mkfs.ext4[1233]:         32768, 98304, 163840, 229376
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Allocating group tables: done
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Writing inode tables: done
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Creating journal (8192 blocks): done
Dec 13 14:24:20.462450 mkfs.ext4[1233]: Writing superblocks and filesystem accounting information: done
Dec 13 14:24:20.284022 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:24:20.284100 systemd[1]: Reached target system-config.target.
Dec 13 14:24:20.300982 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:24:20.301019 systemd[1]: Reached target user-config.target.
Dec 13 14:24:20.464482 umount[1252]: umount: /var/lib/flatcar-oem-gce.img: not mounted.
Dec 13 14:24:20.315657 systemd[1]: Started update-engine.service.
Dec 13 14:24:20.341905 systemd[1]: Started locksmithd.service.
Dec 13 14:24:20.357733 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:24:20.403549 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:24:20.403823 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:24:20.424692 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:24:20.510790 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:24:20.572018 systemd-logind[1228]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:24:20.573347 systemd-logind[1228]: Watching system buttons on /dev/input/event3 (Sleep Button)
Dec 13 14:24:20.573522 systemd-logind[1228]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:24:20.577569 systemd-logind[1228]: New seat seat0.
Dec 13 14:24:20.595979 systemd[1]: Started systemd-logind.service.
Dec 13 14:24:20.615932 env[1219]: time="2024-12-13T14:24:20.615875102Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:24:20.618473 env[1219]: time="2024-12-13T14:24:20.618432540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:24:20.627041 env[1219]: time="2024-12-13T14:24:20.626988046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:24:20.634408 env[1219]: time="2024-12-13T14:24:20.634314055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:24:20.634935 env[1219]: time="2024-12-13T14:24:20.634901274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:24:20.635064 env[1219]: time="2024-12-13T14:24:20.635043267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:24:20.635363 env[1219]: time="2024-12-13T14:24:20.635335104Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:24:20.635462 env[1219]: time="2024-12-13T14:24:20.635443163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:24:20.635675 env[1219]: time="2024-12-13T14:24:20.635627304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:24:20.636136 env[1219]: time="2024-12-13T14:24:20.636108057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:24:20.642349 env[1219]: time="2024-12-13T14:24:20.642285414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:24:20.645838 env[1219]: time="2024-12-13T14:24:20.645803627Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:24:20.647340 env[1219]: time="2024-12-13T14:24:20.647309170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:24:20.647816 env[1219]: time="2024-12-13T14:24:20.647790792Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:24:20.657000 coreos-metadata[1184]: Dec 13 14:24:20.656 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Dec 13 14:24:20.657854 env[1219]: time="2024-12-13T14:24:20.657807927Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:24:20.658005 env[1219]: time="2024-12-13T14:24:20.657983953Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:24:20.658124 env[1219]: time="2024-12-13T14:24:20.658104192Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:24:20.658363 env[1219]: time="2024-12-13T14:24:20.658253667Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.658828 env[1219]: time="2024-12-13T14:24:20.658789464Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.659836 env[1219]: time="2024-12-13T14:24:20.659791541Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.660200 env[1219]: time="2024-12-13T14:24:20.660158916Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.663549 env[1219]: time="2024-12-13T14:24:20.663523064Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.663805 coreos-metadata[1184]: Dec 13 14:24:20.663 INFO Fetch failed with 404: resource not found
Dec 13 14:24:20.665117 env[1219]: time="2024-12-13T14:24:20.665091361Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.665277 coreos-metadata[1184]: Dec 13 14:24:20.663 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Dec 13 14:24:20.665428 env[1219]: time="2024-12-13T14:24:20.665403837Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.665799 env[1219]: time="2024-12-13T14:24:20.665772997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.665937 env[1219]: time="2024-12-13T14:24:20.665916650Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:24:20.666276 env[1219]: time="2024-12-13T14:24:20.666236004Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:24:20.666559 env[1219]: time="2024-12-13T14:24:20.666537198Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:24:20.667224 env[1219]: time="2024-12-13T14:24:20.667196091Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:24:20.667857 env[1219]: time="2024-12-13T14:24:20.667816057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.668045 env[1219]: time="2024-12-13T14:24:20.667998158Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:24:20.668270 env[1219]: time="2024-12-13T14:24:20.668244283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.670956 env[1219]: time="2024-12-13T14:24:20.670912101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671101 env[1219]: time="2024-12-13T14:24:20.671077307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671293 env[1219]: time="2024-12-13T14:24:20.671252221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671417 env[1219]: time="2024-12-13T14:24:20.671397299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671546 env[1219]: time="2024-12-13T14:24:20.671527520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671663 env[1219]: time="2024-12-13T14:24:20.671643771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671816 env[1219]: time="2024-12-13T14:24:20.671785358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.671976 env[1219]: time="2024-12-13T14:24:20.671952602Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:24:20.672521 env[1219]: time="2024-12-13T14:24:20.672492166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.672939 coreos-metadata[1184]: Dec 13 14:24:20.672 INFO Fetch successful
Dec 13 14:24:20.673139 coreos-metadata[1184]: Dec 13 14:24:20.673 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Dec 13 14:24:20.673830 env[1219]: time="2024-12-13T14:24:20.673790074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.673976 env[1219]: time="2024-12-13T14:24:20.673951354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.674244 env[1219]: time="2024-12-13T14:24:20.674210553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:24:20.674430 env[1219]: time="2024-12-13T14:24:20.674403251Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:24:20.675605 coreos-metadata[1184]: Dec 13 14:24:20.675 INFO Fetch failed with 404: resource not found
Dec 13 14:24:20.675865 coreos-metadata[1184]: Dec 13 14:24:20.675 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Dec 13 14:24:20.675993 env[1219]: time="2024-12-13T14:24:20.675968976Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:24:20.676143 env[1219]: time="2024-12-13T14:24:20.676120453Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:24:20.676339 env[1219]: time="2024-12-13T14:24:20.676299864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:24:20.677113 coreos-metadata[1184]: Dec 13 14:24:20.676 INFO Fetch failed with 404: resource not found
Dec 13 14:24:20.677309 coreos-metadata[1184]: Dec 13 14:24:20.677 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Dec 13 14:24:20.678047 env[1219]: time="2024-12-13T14:24:20.677903091Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:24:20.681641 coreos-metadata[1184]: Dec 13 14:24:20.678 INFO Fetch successful
Dec 13 14:24:20.682300 unknown[1184]: wrote ssh authorized keys file for user: core
Dec 13 14:24:20.685001 env[1219]: time="2024-12-13T14:24:20.684053280Z" level=info msg="Connect containerd service"
Dec 13 14:24:20.685001 env[1219]: time="2024-12-13T14:24:20.684147434Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:24:20.692158 env[1219]: time="2024-12-13T14:24:20.692108666Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:24:20.701783 env[1219]: time="2024-12-13T14:24:20.701721085Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:24:20.706596 env[1219]: time="2024-12-13T14:24:20.706557630Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:24:20.706911 systemd[1]: Started containerd.service.
Dec 13 14:24:20.707313 env[1219]: time="2024-12-13T14:24:20.707284941Z" level=info msg="containerd successfully booted in 0.277288s"
Dec 13 14:24:20.713150 env[1219]: time="2024-12-13T14:24:20.713088743Z" level=info msg="Start subscribing containerd event"
Dec 13 14:24:20.726844 update-ssh-keys[1263]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:24:20.728104 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:24:20.766630 env[1219]: time="2024-12-13T14:24:20.733828972Z" level=info msg="Start recovering state"
Dec 13 14:24:20.772008 dbus-daemon[1185]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:24:20.772202 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:24:20.773131 dbus-daemon[1185]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1245 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:24:20.784825 systemd[1]: Starting polkit.service...
Dec 13 14:24:20.820431 env[1219]: time="2024-12-13T14:24:20.820392549Z" level=info msg="Start event monitor"
Dec 13 14:24:20.833731 env[1219]: time="2024-12-13T14:24:20.833653146Z" level=info msg="Start snapshots syncer"
Dec 13 14:24:20.836963 env[1219]: time="2024-12-13T14:24:20.836910403Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:24:20.838470 env[1219]: time="2024-12-13T14:24:20.838435825Z" level=info msg="Start streaming server"
Dec 13 14:24:20.929740 polkitd[1265]: Started polkitd version 121
Dec 13 14:24:20.957969 polkitd[1265]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:24:20.958248 polkitd[1265]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:24:20.965309 polkitd[1265]: Finished loading, compiling and executing 2 rules
Dec 13 14:24:20.967227 dbus-daemon[1185]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:24:20.967446 systemd[1]: Started polkit.service.
Dec 13 14:24:20.968645 polkitd[1265]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:24:21.003648 systemd-hostnamed[1245]: Hostname set to <ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal> (transient)
Dec 13 14:24:21.007032 systemd-resolved[1154]: System hostname changed to 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal'.
Dec 13 14:24:22.156906 tar[1212]: linux-amd64/LICENSE
Dec 13 14:24:22.159552 tar[1212]: linux-amd64/README.md
Dec 13 14:24:22.171043 systemd[1]: Finished prepare-helm.service.
Dec 13 14:24:22.406552 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:24:22.455101 systemd[1]: Started kubelet.service.
Dec 13 14:24:22.501589 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:24:22.511238 systemd[1]: Starting issuegen.service...
Dec 13 14:24:22.524311 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:24:22.524587 systemd[1]: Finished issuegen.service.
Dec 13 14:24:22.534571 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:24:22.559642 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:24:22.570534 systemd[1]: Started getty@tty1.service.
Dec 13 14:24:22.581820 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:24:22.590281 systemd[1]: Reached target getty.target.
Dec 13 14:24:22.732172 locksmithd[1240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:24:23.440319 kubelet[1290]: E1213 14:24:23.440261    1290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:24:23.442857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:24:23.443113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:24:23.443543 systemd[1]: kubelet.service: Consumed 1.350s CPU time.
Dec 13 14:24:25.511002 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully.
Dec 13 14:24:27.777782 kernel: loop2: detected capacity change from 0 to 2097152
Dec 13 14:24:27.798064 systemd-nspawn[1308]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img.
Dec 13 14:24:27.798064 systemd-nspawn[1308]: Press ^] three times within 1s to kill container.
Dec 13 14:24:27.812786 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:24:27.897262 systemd[1]: Started oem-gce.service.
Dec 13 14:24:27.897697 systemd[1]: Reached target multi-user.target.
Dec 13 14:24:27.899872 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:24:27.909938 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:24:27.910180 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:24:27.910426 systemd[1]: Startup finished in 1.064s (kernel) + 9.101s (initrd) + 15.972s (userspace) = 26.138s.
Dec 13 14:24:27.961373 systemd-nspawn[1308]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Dec 13 14:24:27.961373 systemd-nspawn[1308]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Dec 13 14:24:27.961644 systemd-nspawn[1308]: + /usr/bin/google_instance_setup
Dec 13 14:24:28.374979 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:24:28.378367 systemd[1]: Started sshd@0-10.128.0.48:22-139.178.68.195:35750.service.
Dec 13 14:24:28.657218 instance-setup[1314]: INFO Running google_set_multiqueue.
Dec 13 14:24:28.677207 instance-setup[1314]: INFO Set channels for eth0 to 2.
Dec 13 14:24:28.681042 instance-setup[1314]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Dec 13 14:24:28.682458 instance-setup[1314]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Dec 13 14:24:28.683024 instance-setup[1314]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Dec 13 14:24:28.684261 instance-setup[1314]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Dec 13 14:24:28.684627 instance-setup[1314]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Dec 13 14:24:28.686037 instance-setup[1314]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Dec 13 14:24:28.686452 instance-setup[1314]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Dec 13 14:24:28.687864 instance-setup[1314]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Dec 13 14:24:28.701793 sshd[1316]: Accepted publickey for core from 139.178.68.195 port 35750 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:24:28.704597 instance-setup[1314]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Dec 13 14:24:28.704995 instance-setup[1314]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Dec 13 14:24:28.706850 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:28.726285 systemd[1]: Created slice user-500.slice.
Dec 13 14:24:28.731498 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:24:28.746975 systemd-logind[1228]: New session 1 of user core.
Dec 13 14:24:28.756406 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:24:28.758837 systemd[1]: Starting user@500.service...
Dec 13 14:24:28.772286 systemd-nspawn[1308]: + /usr/bin/google_metadata_script_runner --script-type startup
Dec 13 14:24:28.776849 (systemd)[1349]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:28.937809 systemd[1349]: Queued start job for default target default.target.
Dec 13 14:24:28.939985 systemd[1349]: Reached target paths.target.
Dec 13 14:24:28.940235 systemd[1349]: Reached target sockets.target.
Dec 13 14:24:28.940368 systemd[1349]: Reached target timers.target.
Dec 13 14:24:28.940512 systemd[1349]: Reached target basic.target.
Dec 13 14:24:28.940804 systemd[1]: Started user@500.service.
Dec 13 14:24:28.942345 systemd[1]: Started session-1.scope.
Dec 13 14:24:28.946015 systemd[1349]: Reached target default.target.
Dec 13 14:24:28.946305 systemd[1349]: Startup finished in 156ms.
Dec 13 14:24:29.174333 systemd[1]: Started sshd@1-10.128.0.48:22-139.178.68.195:35766.service.
Dec 13 14:24:29.203141 startup-script[1350]: INFO Starting startup scripts.
Dec 13 14:24:29.217259 startup-script[1350]: INFO No startup scripts found in metadata.
Dec 13 14:24:29.217416 startup-script[1350]: INFO Finished running startup scripts.
Dec 13 14:24:29.253317 systemd-nspawn[1308]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Dec 13 14:24:29.253317 systemd-nspawn[1308]: + daemon_pids=()
Dec 13 14:24:29.254032 systemd-nspawn[1308]: + for d in accounts clock_skew network
Dec 13 14:24:29.254032 systemd-nspawn[1308]: + daemon_pids+=($!)
Dec 13 14:24:29.254032 systemd-nspawn[1308]: + for d in accounts clock_skew network
Dec 13 14:24:29.254225 systemd-nspawn[1308]: + daemon_pids+=($!)
Dec 13 14:24:29.254347 systemd-nspawn[1308]: + for d in accounts clock_skew network
Dec 13 14:24:29.254668 systemd-nspawn[1308]: + daemon_pids+=($!)
Dec 13 14:24:29.254851 systemd-nspawn[1308]: + NOTIFY_SOCKET=/run/systemd/notify
Dec 13 14:24:29.254937 systemd-nspawn[1308]: + /usr/bin/systemd-notify --ready
Dec 13 14:24:29.255226 systemd-nspawn[1308]: + /usr/bin/google_accounts_daemon
Dec 13 14:24:29.255337 systemd-nspawn[1308]: + /usr/bin/google_clock_skew_daemon
Dec 13 14:24:29.255912 systemd-nspawn[1308]: + /usr/bin/google_network_daemon
Dec 13 14:24:29.336114 systemd-nspawn[1308]: + wait -n 36 37 38
Dec 13 14:24:29.484848 sshd[1361]: Accepted publickey for core from 139.178.68.195 port 35766 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:24:29.486464 sshd[1361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:29.495107 systemd[1]: Started session-2.scope.
Dec 13 14:24:29.497679 systemd-logind[1228]: New session 2 of user core.
Dec 13 14:24:29.702717 sshd[1361]: pam_unix(sshd:session): session closed for user core
Dec 13 14:24:29.707044 systemd[1]: sshd@1-10.128.0.48:22-139.178.68.195:35766.service: Deactivated successfully.
Dec 13 14:24:29.708171 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:24:29.710064 systemd-logind[1228]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:24:29.711615 systemd-logind[1228]: Removed session 2.
Dec 13 14:24:29.746872 systemd[1]: Started sshd@2-10.128.0.48:22-139.178.68.195:35768.service.
Dec 13 14:24:29.868164 google-clock-skew[1364]: INFO Starting Google Clock Skew daemon.
Dec 13 14:24:29.886224 google-clock-skew[1364]: INFO Clock drift token has changed: 0.
Dec 13 14:24:29.895393 systemd-nspawn[1308]: hwclock: Cannot access the Hardware Clock via any known method.
Dec 13 14:24:29.895713 systemd-nspawn[1308]: hwclock: Use the --verbose option to see the details of our search for an access method.
Dec 13 14:24:29.896772 google-clock-skew[1364]: WARNING Failed to sync system time with hardware clock.
Dec 13 14:24:30.046025 sshd[1373]: Accepted publickey for core from 139.178.68.195 port 35768 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:24:30.047559 sshd[1373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:30.049432 google-networking[1365]: INFO Starting Google Networking daemon.
Dec 13 14:24:30.056053 systemd[1]: Started session-3.scope.
Dec 13 14:24:30.057333 systemd-logind[1228]: New session 3 of user core.
Dec 13 14:24:30.101322 groupadd[1382]: group added to /etc/group: name=google-sudoers, GID=1000
Dec 13 14:24:30.105710 groupadd[1382]: group added to /etc/gshadow: name=google-sudoers
Dec 13 14:24:30.110381 groupadd[1382]: new group: name=google-sudoers, GID=1000
Dec 13 14:24:30.124201 google-accounts[1363]: INFO Starting Google Accounts daemon.
Dec 13 14:24:30.149507 google-accounts[1363]: WARNING OS Login not installed.
Dec 13 14:24:30.150984 google-accounts[1363]: INFO Creating a new user account for 0.
Dec 13 14:24:30.156963 systemd-nspawn[1308]: useradd: invalid user name '0': use --badname to ignore
Dec 13 14:24:30.157644 google-accounts[1363]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Dec 13 14:24:30.250111 sshd[1373]: pam_unix(sshd:session): session closed for user core
Dec 13 14:24:30.254410 systemd[1]: sshd@2-10.128.0.48:22-139.178.68.195:35768.service: Deactivated successfully.
Dec 13 14:24:30.255457 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:24:30.256330 systemd-logind[1228]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:24:30.257549 systemd-logind[1228]: Removed session 3.
Dec 13 14:24:30.294972 systemd[1]: Started sshd@3-10.128.0.48:22-139.178.68.195:35780.service.
Dec 13 14:24:30.573066 sshd[1395]: Accepted publickey for core from 139.178.68.195 port 35780 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:24:30.575039 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:30.581654 systemd[1]: Started session-4.scope.
Dec 13 14:24:30.582499 systemd-logind[1228]: New session 4 of user core.
Dec 13 14:24:30.783201 sshd[1395]: pam_unix(sshd:session): session closed for user core
Dec 13 14:24:30.787500 systemd[1]: sshd@3-10.128.0.48:22-139.178.68.195:35780.service: Deactivated successfully.
Dec 13 14:24:30.788561 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:24:30.789544 systemd-logind[1228]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:24:30.790908 systemd-logind[1228]: Removed session 4.
Dec 13 14:24:30.830779 systemd[1]: Started sshd@4-10.128.0.48:22-139.178.68.195:35790.service.
Dec 13 14:24:31.116415 sshd[1401]: Accepted publickey for core from 139.178.68.195 port 35790 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:24:31.118511 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:24:31.126119 systemd[1]: Started session-5.scope.
Dec 13 14:24:31.126775 systemd-logind[1228]: New session 5 of user core.
Dec 13 14:24:31.314651 sudo[1404]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:24:31.315106 sudo[1404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:24:31.348224 systemd[1]: Starting docker.service...
Dec 13 14:24:31.398796 env[1414]: time="2024-12-13T14:24:31.398337145Z" level=info msg="Starting up"
Dec 13 14:24:31.401384 env[1414]: time="2024-12-13T14:24:31.401350618Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:24:31.401535 env[1414]: time="2024-12-13T14:24:31.401513735Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:24:31.401634 env[1414]: time="2024-12-13T14:24:31.401615793Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 14:24:31.401709 env[1414]: time="2024-12-13T14:24:31.401695854Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:24:31.404263 env[1414]: time="2024-12-13T14:24:31.404240409Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:24:31.404413 env[1414]: time="2024-12-13T14:24:31.404391310Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:24:31.404503 env[1414]: time="2024-12-13T14:24:31.404486074Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 14:24:31.404570 env[1414]: time="2024-12-13T14:24:31.404556863Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:24:31.413717 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2767072700-merged.mount: Deactivated successfully.
Dec 13 14:24:31.447936 env[1414]: time="2024-12-13T14:24:31.447884999Z" level=info msg="Loading containers: start."
Dec 13 14:24:31.626790 kernel: Initializing XFRM netlink socket
Dec 13 14:24:31.671232 env[1414]: time="2024-12-13T14:24:31.671096145Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:24:31.754883 systemd-networkd[1023]: docker0: Link UP
Dec 13 14:24:31.775313 env[1414]: time="2024-12-13T14:24:31.775265739Z" level=info msg="Loading containers: done."
Dec 13 14:24:31.794009 env[1414]: time="2024-12-13T14:24:31.793945359Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:24:31.794262 env[1414]: time="2024-12-13T14:24:31.794217479Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 14:24:31.794389 env[1414]: time="2024-12-13T14:24:31.794362092Z" level=info msg="Daemon has completed initialization"
Dec 13 14:24:31.817917 systemd[1]: Started docker.service.
Dec 13 14:24:31.830581 env[1414]: time="2024-12-13T14:24:31.830467053Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:24:32.918061 env[1219]: time="2024-12-13T14:24:32.917991881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 14:24:33.423015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896197621.mount: Deactivated successfully.
Dec 13 14:24:33.694452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:24:33.694800 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:33.694872 systemd[1]: kubelet.service: Consumed 1.350s CPU time.
Dec 13 14:24:33.697130 systemd[1]: Starting kubelet.service...
Dec 13 14:24:33.922572 systemd[1]: Started kubelet.service.
Dec 13 14:24:33.986689 kubelet[1539]: E1213 14:24:33.986517    1539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:24:33.992551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:24:33.992821 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:24:35.496969 env[1219]: time="2024-12-13T14:24:35.496892379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:35.500041 env[1219]: time="2024-12-13T14:24:35.499984894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:35.502823 env[1219]: time="2024-12-13T14:24:35.502770510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:35.505449 env[1219]: time="2024-12-13T14:24:35.505400767Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:35.506482 env[1219]: time="2024-12-13T14:24:35.506425711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 14:24:35.509471 env[1219]: time="2024-12-13T14:24:35.509418065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 14:24:37.416574 env[1219]: time="2024-12-13T14:24:37.416492294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:37.423827 env[1219]: time="2024-12-13T14:24:37.423772017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:37.426218 env[1219]: time="2024-12-13T14:24:37.426171278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:37.429328 env[1219]: time="2024-12-13T14:24:37.429282862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:37.430407 env[1219]: time="2024-12-13T14:24:37.430346574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 14:24:37.431111 env[1219]: time="2024-12-13T14:24:37.431061338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 14:24:38.868730 env[1219]: time="2024-12-13T14:24:38.868652980Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.871556 env[1219]: time="2024-12-13T14:24:38.871507040Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.874286 env[1219]: time="2024-12-13T14:24:38.874238816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.876833 env[1219]: time="2024-12-13T14:24:38.876788891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:38.878927 env[1219]: time="2024-12-13T14:24:38.878876572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 14:24:38.881306 env[1219]: time="2024-12-13T14:24:38.881271025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 14:24:40.007502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258241964.mount: Deactivated successfully.
Dec 13 14:24:40.764243 env[1219]: time="2024-12-13T14:24:40.764165927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:40.766959 env[1219]: time="2024-12-13T14:24:40.766909631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:40.769325 env[1219]: time="2024-12-13T14:24:40.769278022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:40.771165 env[1219]: time="2024-12-13T14:24:40.771126855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:40.771787 env[1219]: time="2024-12-13T14:24:40.771715505Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 14:24:40.772480 env[1219]: time="2024-12-13T14:24:40.772434542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:24:41.192304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660731290.mount: Deactivated successfully.
Dec 13 14:24:42.365053 env[1219]: time="2024-12-13T14:24:42.364969344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.368142 env[1219]: time="2024-12-13T14:24:42.368071960Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.371050 env[1219]: time="2024-12-13T14:24:42.371005709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.374721 env[1219]: time="2024-12-13T14:24:42.374658868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.377050 env[1219]: time="2024-12-13T14:24:42.376991102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:24:42.378723 env[1219]: time="2024-12-13T14:24:42.378671195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 14:24:42.781722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505897742.mount: Deactivated successfully.
Dec 13 14:24:42.791982 env[1219]: time="2024-12-13T14:24:42.791909597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.794814 env[1219]: time="2024-12-13T14:24:42.794741835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.797345 env[1219]: time="2024-12-13T14:24:42.797279465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.800035 env[1219]: time="2024-12-13T14:24:42.799987782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:42.800918 env[1219]: time="2024-12-13T14:24:42.800869124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 14:24:42.801858 env[1219]: time="2024-12-13T14:24:42.801823988Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 14:24:43.224804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911006164.mount: Deactivated successfully.
Dec 13 14:24:44.244103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:24:44.244426 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:44.246702 systemd[1]: Starting kubelet.service...
Dec 13 14:24:44.487491 systemd[1]: Started kubelet.service.
Dec 13 14:24:44.564405 kubelet[1548]: E1213 14:24:44.563842    1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:24:44.567410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:24:44.567625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:24:45.886511 env[1219]: time="2024-12-13T14:24:45.886422066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:45.889650 env[1219]: time="2024-12-13T14:24:45.889573956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:45.892934 env[1219]: time="2024-12-13T14:24:45.892892924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:45.897394 env[1219]: time="2024-12-13T14:24:45.897326517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:45.899041 env[1219]: time="2024-12-13T14:24:45.898991016Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 14:24:49.268123 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:49.272155 systemd[1]: Starting kubelet.service...
Dec 13 14:24:49.323885 systemd[1]: Reloading.
Dec 13 14:24:49.473892 /usr/lib/systemd/system-generators/torcx-generator[1597]: time="2024-12-13T14:24:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:24:49.478393 /usr/lib/systemd/system-generators/torcx-generator[1597]: time="2024-12-13T14:24:49Z" level=info msg="torcx already run"
Dec 13 14:24:49.585577 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:24:49.585605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:24:49.609803 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:24:49.756546 systemd[1]: Started kubelet.service.
Dec 13 14:24:49.763004 systemd[1]: Stopping kubelet.service...
Dec 13 14:24:49.763511 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:24:49.763798 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:49.766386 systemd[1]: Starting kubelet.service...
Dec 13 14:24:49.994663 systemd[1]: Started kubelet.service.
Dec 13 14:24:50.062648 kubelet[1647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:50.063161 kubelet[1647]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:24:50.063236 kubelet[1647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:50.065464 kubelet[1647]: I1213 14:24:50.065399    1647 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:24:50.531455 kubelet[1647]: I1213 14:24:50.531389    1647 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:24:50.531455 kubelet[1647]: I1213 14:24:50.531430    1647 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:24:50.531935 kubelet[1647]: I1213 14:24:50.531897    1647 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:24:50.587786 kubelet[1647]: E1213 14:24:50.587705    1647 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:50.592026 kubelet[1647]: I1213 14:24:50.591981    1647 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:24:50.608342 kubelet[1647]: E1213 14:24:50.608297    1647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:24:50.608604 kubelet[1647]: I1213 14:24:50.608565    1647 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:24:50.615947 kubelet[1647]: I1213 14:24:50.615905    1647 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 14:24:50.616133 kubelet[1647]: I1213 14:24:50.616111    1647 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:24:50.616379 kubelet[1647]: I1213 14:24:50.616338    1647 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:24:50.616844 kubelet[1647]: I1213 14:24:50.616390    1647 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:24:50.617060 kubelet[1647]: I1213 14:24:50.616859    1647 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:24:50.617060 kubelet[1647]: I1213 14:24:50.616876    1647 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:24:50.617060 kubelet[1647]: I1213 14:24:50.617018    1647 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:50.627973 kubelet[1647]: I1213 14:24:50.627892    1647 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:24:50.627973 kubelet[1647]: I1213 14:24:50.627973    1647 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:24:50.628238 kubelet[1647]: I1213 14:24:50.628031    1647 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:24:50.628238 kubelet[1647]: I1213 14:24:50.628055    1647 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:24:50.629332 kubelet[1647]: W1213 14:24:50.629230    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:50.629332 kubelet[1647]: E1213 14:24:50.629323    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:50.647419 kubelet[1647]: W1213 14:24:50.647344    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:50.647694 kubelet[1647]: E1213 14:24:50.647658    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:50.647999 kubelet[1647]: I1213 14:24:50.647975    1647 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:24:50.656278 kubelet[1647]: I1213 14:24:50.656248    1647 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:24:50.659911 kubelet[1647]: W1213 14:24:50.659881    1647 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:24:50.660785 kubelet[1647]: I1213 14:24:50.660726    1647 server.go:1269] "Started kubelet"
Dec 13 14:24:50.671361 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:24:50.671802 kubelet[1647]: I1213 14:24:50.671732    1647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:24:50.681639 kubelet[1647]: I1213 14:24:50.681578    1647 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:24:50.682720 kubelet[1647]: I1213 14:24:50.682668    1647 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:24:50.683741 kubelet[1647]: I1213 14:24:50.683664    1647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:24:50.684007 kubelet[1647]: I1213 14:24:50.683970    1647 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:24:50.684300 kubelet[1647]: I1213 14:24:50.684263    1647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:24:50.687112 kubelet[1647]: I1213 14:24:50.686383    1647 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:24:50.687112 kubelet[1647]: E1213 14:24:50.686655    1647 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" not found"
Dec 13 14:24:50.692970 kubelet[1647]: E1213 14:24:50.685333    1647 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal.1810c2a9b3dfe05a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,UID:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:24:50.660696154 +0000 UTC m=+0.660643604,LastTimestamp:2024-12-13 14:24:50.660696154 +0000 UTC m=+0.660643604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,}"
Dec 13 14:24:50.694431 kubelet[1647]: E1213 14:24:50.694373    1647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="200ms"
Dec 13 14:24:50.695759 kubelet[1647]: I1213 14:24:50.695702    1647 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:24:50.695903 kubelet[1647]: I1213 14:24:50.695876    1647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:24:50.697982 kubelet[1647]: I1213 14:24:50.697942    1647 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:24:50.698094 kubelet[1647]: I1213 14:24:50.698023    1647 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:24:50.699362 kubelet[1647]: E1213 14:24:50.699336    1647 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:24:50.699572 kubelet[1647]: I1213 14:24:50.699548    1647 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:24:50.714817 kubelet[1647]: I1213 14:24:50.714744    1647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:24:50.717071 kubelet[1647]: I1213 14:24:50.717021    1647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:24:50.717071 kubelet[1647]: I1213 14:24:50.717066    1647 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:24:50.717259 kubelet[1647]: I1213 14:24:50.717102    1647 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:24:50.717259 kubelet[1647]: E1213 14:24:50.717169    1647 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:24:50.726770 kubelet[1647]: W1213 14:24:50.726663    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:50.726938 kubelet[1647]: E1213 14:24:50.726794    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:50.729112 kubelet[1647]: W1213 14:24:50.729048    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:50.729247 kubelet[1647]: E1213 14:24:50.729128    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:50.740185 kubelet[1647]: I1213 14:24:50.740150    1647 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:24:50.740410 kubelet[1647]: I1213 14:24:50.740393    1647 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:24:50.740525 kubelet[1647]: I1213 14:24:50.740510    1647 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:50.745904 kubelet[1647]: I1213 14:24:50.745856    1647 policy_none.go:49] "None policy: Start"
Dec 13 14:24:50.747102 kubelet[1647]: I1213 14:24:50.747069    1647 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:24:50.747262 kubelet[1647]: I1213 14:24:50.747250    1647 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:24:50.759532 systemd[1]: Created slice kubepods.slice.
Dec 13 14:24:50.766594 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:24:50.770918 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:24:50.777965 kubelet[1647]: I1213 14:24:50.777927    1647 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:24:50.778187 kubelet[1647]: I1213 14:24:50.778166    1647 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:24:50.778284 kubelet[1647]: I1213 14:24:50.778191    1647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:24:50.778895 kubelet[1647]: I1213 14:24:50.778871    1647 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:24:50.782810 kubelet[1647]: E1213 14:24:50.781174    1647 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" not found"
Dec 13 14:24:50.837187 systemd[1]: Created slice kubepods-burstable-pod0de58ea8fff06bead915537b9a371333.slice.
Dec 13 14:24:50.850004 systemd[1]: Created slice kubepods-burstable-pod55eb6ebc55bef5dbda0e27da19be9b72.slice.
Dec 13 14:24:50.856409 systemd[1]: Created slice kubepods-burstable-podc06f3349955eb544ffdf87f960171028.slice.
Dec 13 14:24:50.884112 kubelet[1647]: I1213 14:24:50.884058    1647 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.884509 kubelet[1647]: E1213 14:24:50.884473    1647 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.895101 kubelet[1647]: E1213 14:24:50.895032    1647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="400ms"
Dec 13 14:24:50.899680 kubelet[1647]: I1213 14:24:50.899614    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.899680 kubelet[1647]: I1213 14:24:50.899672    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0de58ea8fff06bead915537b9a371333-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"0de58ea8fff06bead915537b9a371333\") " pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.899956 kubelet[1647]: I1213 14:24:50.899705    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0de58ea8fff06bead915537b9a371333-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"0de58ea8fff06bead915537b9a371333\") " pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.899956 kubelet[1647]: I1213 14:24:50.899733    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.899956 kubelet[1647]: I1213 14:24:50.899791    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.899956 kubelet[1647]: I1213 14:24:50.899826    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.900174 kubelet[1647]: I1213 14:24:50.899857    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06f3349955eb544ffdf87f960171028-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"c06f3349955eb544ffdf87f960171028\") " pod="kube-system/kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.900174 kubelet[1647]: I1213 14:24:50.899881    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0de58ea8fff06bead915537b9a371333-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"0de58ea8fff06bead915537b9a371333\") " pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:50.900174 kubelet[1647]: I1213 14:24:50.899908    1647 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:51.037860 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:24:51.091381 kubelet[1647]: I1213 14:24:51.091333    1647 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:51.091931 kubelet[1647]: E1213 14:24:51.091736    1647 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:51.145664 env[1219]: time="2024-12-13T14:24:51.145591388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,Uid:0de58ea8fff06bead915537b9a371333,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:51.155550 env[1219]: time="2024-12-13T14:24:51.155495813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,Uid:55eb6ebc55bef5dbda0e27da19be9b72,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:51.160832 env[1219]: time="2024-12-13T14:24:51.160784356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,Uid:c06f3349955eb544ffdf87f960171028,Namespace:kube-system,Attempt:0,}"
Dec 13 14:24:51.296051 kubelet[1647]: E1213 14:24:51.295903    1647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="800ms"
Dec 13 14:24:51.498312 kubelet[1647]: I1213 14:24:51.498266    1647 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:51.498785 kubelet[1647]: E1213 14:24:51.498713    1647 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:51.577500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522487854.mount: Deactivated successfully.
Dec 13 14:24:51.588167 env[1219]: time="2024-12-13T14:24:51.588109769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.589430 env[1219]: time="2024-12-13T14:24:51.589374979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.593367 env[1219]: time="2024-12-13T14:24:51.593319654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.595419 env[1219]: time="2024-12-13T14:24:51.595357941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.596613 env[1219]: time="2024-12-13T14:24:51.596564071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.599076 env[1219]: time="2024-12-13T14:24:51.599037368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.600823 env[1219]: time="2024-12-13T14:24:51.600776698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.602201 env[1219]: time="2024-12-13T14:24:51.602151628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.604335 env[1219]: time="2024-12-13T14:24:51.604279255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.606289 env[1219]: time="2024-12-13T14:24:51.606249743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.614322 env[1219]: time="2024-12-13T14:24:51.614259092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.615585 env[1219]: time="2024-12-13T14:24:51.615530834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:51.657199 env[1219]: time="2024-12-13T14:24:51.648862857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:51.657199 env[1219]: time="2024-12-13T14:24:51.648906681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:51.657199 env[1219]: time="2024-12-13T14:24:51.648926723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:51.657199 env[1219]: time="2024-12-13T14:24:51.649097906Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf008ebe95ad0405551be9d6382afbe988a3958498c53e17f53f03dfc6fe685e pid=1687 runtime=io.containerd.runc.v2
Dec 13 14:24:51.692377 env[1219]: time="2024-12-13T14:24:51.692208328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:51.692615 env[1219]: time="2024-12-13T14:24:51.692332750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:51.692615 env[1219]: time="2024-12-13T14:24:51.692356368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:51.692815 env[1219]: time="2024-12-13T14:24:51.692661208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f65b58425684c744029f461f9cde018f9b59f5300791d686bda20ff1d1221d1b pid=1710 runtime=io.containerd.runc.v2
Dec 13 14:24:51.708295 env[1219]: time="2024-12-13T14:24:51.708167299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:24:51.708295 env[1219]: time="2024-12-13T14:24:51.708249666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:24:51.708295 env[1219]: time="2024-12-13T14:24:51.708268084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:24:51.708962 env[1219]: time="2024-12-13T14:24:51.708894582Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39340ef2361cf6db08cb9a4d56dc728ca1aa769eb409ffaf5b76dde6ea4a97a7 pid=1726 runtime=io.containerd.runc.v2
Dec 13 14:24:51.722129 systemd[1]: Started cri-containerd-bf008ebe95ad0405551be9d6382afbe988a3958498c53e17f53f03dfc6fe685e.scope.
Dec 13 14:24:51.751168 systemd[1]: Started cri-containerd-f65b58425684c744029f461f9cde018f9b59f5300791d686bda20ff1d1221d1b.scope.
Dec 13 14:24:51.769568 systemd[1]: Started cri-containerd-39340ef2361cf6db08cb9a4d56dc728ca1aa769eb409ffaf5b76dde6ea4a97a7.scope.
Dec 13 14:24:51.773714 kubelet[1647]: W1213 14:24:51.772988    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:51.773714 kubelet[1647]: E1213 14:24:51.773145    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:51.834676 kubelet[1647]: W1213 14:24:51.834480    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:51.835000 kubelet[1647]: E1213 14:24:51.834949    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:51.839697 env[1219]: time="2024-12-13T14:24:51.839638615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,Uid:0de58ea8fff06bead915537b9a371333,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf008ebe95ad0405551be9d6382afbe988a3958498c53e17f53f03dfc6fe685e\""
Dec 13 14:24:51.842400 kubelet[1647]: E1213 14:24:51.842356    1647 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-21291"
Dec 13 14:24:51.844157 env[1219]: time="2024-12-13T14:24:51.844101256Z" level=info msg="CreateContainer within sandbox \"bf008ebe95ad0405551be9d6382afbe988a3958498c53e17f53f03dfc6fe685e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:24:51.872785 env[1219]: time="2024-12-13T14:24:51.868770046Z" level=info msg="CreateContainer within sandbox \"bf008ebe95ad0405551be9d6382afbe988a3958498c53e17f53f03dfc6fe685e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0bccaf83ab46697b45c45cfbefec676c944bbfb48a212498214b00480cc0d2a4\""
Dec 13 14:24:51.872785 env[1219]: time="2024-12-13T14:24:51.869607986Z" level=info msg="StartContainer for \"0bccaf83ab46697b45c45cfbefec676c944bbfb48a212498214b00480cc0d2a4\""
Dec 13 14:24:51.880214 env[1219]: time="2024-12-13T14:24:51.880151796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,Uid:c06f3349955eb544ffdf87f960171028,Namespace:kube-system,Attempt:0,} returns sandbox id \"39340ef2361cf6db08cb9a4d56dc728ca1aa769eb409ffaf5b76dde6ea4a97a7\""
Dec 13 14:24:51.885128 kubelet[1647]: E1213 14:24:51.885013    1647 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-21291"
Dec 13 14:24:51.887384 env[1219]: time="2024-12-13T14:24:51.886802798Z" level=info msg="CreateContainer within sandbox \"39340ef2361cf6db08cb9a4d56dc728ca1aa769eb409ffaf5b76dde6ea4a97a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:24:51.911731 env[1219]: time="2024-12-13T14:24:51.911670707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,Uid:55eb6ebc55bef5dbda0e27da19be9b72,Namespace:kube-system,Attempt:0,} returns sandbox id \"f65b58425684c744029f461f9cde018f9b59f5300791d686bda20ff1d1221d1b\""
Dec 13 14:24:51.913922 kubelet[1647]: E1213 14:24:51.913699    1647 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flat"
Dec 13 14:24:51.915208 env[1219]: time="2024-12-13T14:24:51.915170556Z" level=info msg="CreateContainer within sandbox \"39340ef2361cf6db08cb9a4d56dc728ca1aa769eb409ffaf5b76dde6ea4a97a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"65cd033aafedd0f765cf2baab40ab549196b8101ccac0cd43959c609168070fb\""
Dec 13 14:24:51.916199 env[1219]: time="2024-12-13T14:24:51.916160329Z" level=info msg="CreateContainer within sandbox \"f65b58425684c744029f461f9cde018f9b59f5300791d686bda20ff1d1221d1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:24:51.916665 env[1219]: time="2024-12-13T14:24:51.916625443Z" level=info msg="StartContainer for \"65cd033aafedd0f765cf2baab40ab549196b8101ccac0cd43959c609168070fb\""
Dec 13 14:24:51.935603 systemd[1]: Started cri-containerd-0bccaf83ab46697b45c45cfbefec676c944bbfb48a212498214b00480cc0d2a4.scope.
Dec 13 14:24:51.949483 env[1219]: time="2024-12-13T14:24:51.949284569Z" level=info msg="CreateContainer within sandbox \"f65b58425684c744029f461f9cde018f9b59f5300791d686bda20ff1d1221d1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d8b99e1b2de11054bbc5ef25fd4a91935b04b994aa8883967ea80aa31aa460f\""
Dec 13 14:24:51.950245 env[1219]: time="2024-12-13T14:24:51.950158612Z" level=info msg="StartContainer for \"5d8b99e1b2de11054bbc5ef25fd4a91935b04b994aa8883967ea80aa31aa460f\""
Dec 13 14:24:51.999117 systemd[1]: Started cri-containerd-65cd033aafedd0f765cf2baab40ab549196b8101ccac0cd43959c609168070fb.scope.
Dec 13 14:24:52.005235 kubelet[1647]: W1213 14:24:52.005092    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:52.005235 kubelet[1647]: E1213 14:24:52.005192    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:52.027686 systemd[1]: Started cri-containerd-5d8b99e1b2de11054bbc5ef25fd4a91935b04b994aa8883967ea80aa31aa460f.scope.
Dec 13 14:24:52.047210 env[1219]: time="2024-12-13T14:24:52.047150759Z" level=info msg="StartContainer for \"0bccaf83ab46697b45c45cfbefec676c944bbfb48a212498214b00480cc0d2a4\" returns successfully"
Dec 13 14:24:52.096600 kubelet[1647]: E1213 14:24:52.096443    1647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="1.6s"
Dec 13 14:24:52.102391 kubelet[1647]: W1213 14:24:52.102307    1647 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused
Dec 13 14:24:52.102600 kubelet[1647]: E1213 14:24:52.102407    1647 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:24:52.151190 env[1219]: time="2024-12-13T14:24:52.151117790Z" level=info msg="StartContainer for \"65cd033aafedd0f765cf2baab40ab549196b8101ccac0cd43959c609168070fb\" returns successfully"
Dec 13 14:24:52.177688 env[1219]: time="2024-12-13T14:24:52.177625834Z" level=info msg="StartContainer for \"5d8b99e1b2de11054bbc5ef25fd4a91935b04b994aa8883967ea80aa31aa460f\" returns successfully"
Dec 13 14:24:52.311944 kubelet[1647]: I1213 14:24:52.311892    1647 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:55.742374 kubelet[1647]: E1213 14:24:55.742324    1647 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:55.763011 kubelet[1647]: E1213 14:24:55.762873    1647 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal.1810c2a9b3dfe05a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,UID:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:24:50.660696154 +0000 UTC m=+0.660643604,LastTimestamp:2024-12-13 14:24:50.660696154 +0000 UTC m=+0.660643604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,}"
Dec 13 14:24:55.775286 kubelet[1647]: I1213 14:24:55.775231    1647 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:55.775286 kubelet[1647]: E1213 14:24:55.775280    1647 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\": node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" not found"
Dec 13 14:24:55.818981 kubelet[1647]: E1213 14:24:55.818843    1647 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal.1810c2a9b62d3a82  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,UID:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:24:50.699319938 +0000 UTC m=+0.699267390,LastTimestamp:2024-12-13 14:24:50.699319938 +0000 UTC m=+0.699267390,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,}"
Dec 13 14:24:55.879042 kubelet[1647]: E1213 14:24:55.878837    1647 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal.1810c2a9b889cd16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,UID:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:24:50.738941206 +0000 UTC m=+0.738888655,LastTimestamp:2024-12-13 14:24:50.738941206 +0000 UTC m=+0.738888655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,}"
Dec 13 14:24:55.935115 kubelet[1647]: E1213 14:24:55.934798    1647 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal.1810c2a9b88a16d8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,UID:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:24:50.738960088 +0000 UTC m=+0.738907536,LastTimestamp:2024-12-13 14:24:50.738960088 +0000 UTC m=+0.738907536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal,}"
Dec 13 14:24:56.509362 kubelet[1647]: W1213 14:24:56.509308    1647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:24:56.635215 kubelet[1647]: I1213 14:24:56.635175    1647 apiserver.go:52] "Watching apiserver"
Dec 13 14:24:56.699090 kubelet[1647]: I1213 14:24:56.699046    1647 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:24:57.482701 systemd[1]: Reloading.
Dec 13 14:24:57.623993 /usr/lib/systemd/system-generators/torcx-generator[1941]: time="2024-12-13T14:24:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:24:57.632862 /usr/lib/systemd/system-generators/torcx-generator[1941]: time="2024-12-13T14:24:57Z" level=info msg="torcx already run"
Dec 13 14:24:57.721083 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:24:57.721115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:24:57.748260 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:24:57.912942 systemd[1]: Stopping kubelet.service...
Dec 13 14:24:57.937709 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:24:57.938030 systemd[1]: Stopped kubelet.service.
Dec 13 14:24:57.938114 systemd[1]: kubelet.service: Consumed 1.119s CPU time.
Dec 13 14:24:57.941316 systemd[1]: Starting kubelet.service...
Dec 13 14:24:58.197045 systemd[1]: Started kubelet.service.
Dec 13 14:24:58.300663 kubelet[1988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:58.300663 kubelet[1988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:24:58.300663 kubelet[1988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:24:58.301323 kubelet[1988]: I1213 14:24:58.300784    1988 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:24:58.312721 kubelet[1988]: I1213 14:24:58.312664    1988 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:24:58.312721 kubelet[1988]: I1213 14:24:58.312697    1988 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:24:58.313104 kubelet[1988]: I1213 14:24:58.313066    1988 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:24:58.317622 kubelet[1988]: I1213 14:24:58.316061    1988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:24:58.319627 kubelet[1988]: I1213 14:24:58.319088    1988 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:24:58.342980 kubelet[1988]: E1213 14:24:58.342922    1988 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:24:58.342980 kubelet[1988]: I1213 14:24:58.342978    1988 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:24:58.350576 kubelet[1988]: I1213 14:24:58.350487    1988 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 14:24:58.350799 kubelet[1988]: I1213 14:24:58.350775    1988 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:24:58.351089 kubelet[1988]: I1213 14:24:58.351034    1988 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:24:58.351360 kubelet[1988]: I1213 14:24:58.351091    1988 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:24:58.351540 kubelet[1988]: I1213 14:24:58.351373    1988 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:24:58.351540 kubelet[1988]: I1213 14:24:58.351392    1988 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:24:58.351540 kubelet[1988]: I1213 14:24:58.351440    1988 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:58.351701 kubelet[1988]: I1213 14:24:58.351612    1988 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:24:58.352513 kubelet[1988]: I1213 14:24:58.351634    1988 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:24:58.352660 kubelet[1988]: I1213 14:24:58.352571    1988 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:24:58.359856 kubelet[1988]: I1213 14:24:58.359816    1988 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:24:58.361445 kubelet[1988]: I1213 14:24:58.361406    1988 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:24:58.365189 kubelet[1988]: I1213 14:24:58.365150    1988 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:24:58.367550 kubelet[1988]: I1213 14:24:58.367520    1988 server.go:1269] "Started kubelet"
Dec 13 14:24:58.370635 kubelet[1988]: I1213 14:24:58.370600    1988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:24:58.391101 kubelet[1988]: I1213 14:24:58.391042    1988 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:24:58.392828 kubelet[1988]: I1213 14:24:58.392791    1988 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:24:58.394390 kubelet[1988]: I1213 14:24:58.394318    1988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:24:58.394630 kubelet[1988]: I1213 14:24:58.394606    1988 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:24:58.394987 kubelet[1988]: I1213 14:24:58.394961    1988 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:24:58.400096 kubelet[1988]: I1213 14:24:58.400063    1988 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:24:58.400434 kubelet[1988]: E1213 14:24:58.400402    1988 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" not found"
Dec 13 14:24:58.402390 kubelet[1988]: I1213 14:24:58.402359    1988 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:24:58.402607 kubelet[1988]: I1213 14:24:58.402568    1988 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:24:58.414642 kubelet[1988]: I1213 14:24:58.414593    1988 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:24:58.414863 kubelet[1988]: I1213 14:24:58.414732    1988 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:24:58.430784 kubelet[1988]: I1213 14:24:58.428347    1988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:24:58.430784 kubelet[1988]: E1213 14:24:58.428234    1988 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:24:58.430784 kubelet[1988]: I1213 14:24:58.428779    1988 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:24:58.437061 kubelet[1988]: I1213 14:24:58.435604    1988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:24:58.437061 kubelet[1988]: I1213 14:24:58.435651    1988 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:24:58.437061 kubelet[1988]: I1213 14:24:58.435675    1988 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:24:58.437820 kubelet[1988]: E1213 14:24:58.435744    1988 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:24:58.541242 sudo[2018]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:24:58.541714 sudo[2018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:24:58.546277 kubelet[1988]: E1213 14:24:58.544581    1988 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:24:58.570690 kubelet[1988]: I1213 14:24:58.570657    1988 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:24:58.570961 kubelet[1988]: I1213 14:24:58.570941    1988 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:24:58.571113 kubelet[1988]: I1213 14:24:58.571099    1988 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:24:58.571519 kubelet[1988]: I1213 14:24:58.571499    1988 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:24:58.571706 kubelet[1988]: I1213 14:24:58.571666    1988 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:24:58.571877 kubelet[1988]: I1213 14:24:58.571862    1988 policy_none.go:49] "None policy: Start"
Dec 13 14:24:58.574944 kubelet[1988]: I1213 14:24:58.574908    1988 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:24:58.575141 kubelet[1988]: I1213 14:24:58.575127    1988 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:24:58.575564 kubelet[1988]: I1213 14:24:58.575548    1988 state_mem.go:75] "Updated machine memory state"
Dec 13 14:24:58.591227 kubelet[1988]: I1213 14:24:58.591183    1988 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:24:58.597357 kubelet[1988]: I1213 14:24:58.597329    1988 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:24:58.597613 kubelet[1988]: I1213 14:24:58.597553    1988 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:24:58.598199 kubelet[1988]: I1213 14:24:58.598174    1988 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:24:58.717209 kubelet[1988]: I1213 14:24:58.717161    1988 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.733848 kubelet[1988]: I1213 14:24:58.733801    1988 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.734192 kubelet[1988]: I1213 14:24:58.734174    1988 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.762233 kubelet[1988]: W1213 14:24:58.762190    1988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:24:58.773302 kubelet[1988]: W1213 14:24:58.773135    1988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:24:58.773493 kubelet[1988]: W1213 14:24:58.773380    1988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:24:58.773493 kubelet[1988]: E1213 14:24:58.773447    1988 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.804397 kubelet[1988]: I1213 14:24:58.804264    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.804667 kubelet[1988]: I1213 14:24:58.804633    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.804855 kubelet[1988]: I1213 14:24:58.804830    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.805002 kubelet[1988]: I1213 14:24:58.804980    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.805126 kubelet[1988]: I1213 14:24:58.805107    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06f3349955eb544ffdf87f960171028-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"c06f3349955eb544ffdf87f960171028\") " pod="kube-system/kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.805251 kubelet[1988]: I1213 14:24:58.805230    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0de58ea8fff06bead915537b9a371333-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"0de58ea8fff06bead915537b9a371333\") " pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.805390 kubelet[1988]: I1213 14:24:58.805349    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0de58ea8fff06bead915537b9a371333-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"0de58ea8fff06bead915537b9a371333\") " pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.805517 kubelet[1988]: I1213 14:24:58.805497    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0de58ea8fff06bead915537b9a371333-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"0de58ea8fff06bead915537b9a371333\") " pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:58.805635 kubelet[1988]: I1213 14:24:58.805617    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55eb6ebc55bef5dbda0e27da19be9b72-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" (UID: \"55eb6ebc55bef5dbda0e27da19be9b72\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:59.341093 sudo[2018]: pam_unix(sudo:session): session closed for user root
Dec 13 14:24:59.360663 kubelet[1988]: I1213 14:24:59.360609    1988 apiserver.go:52] "Watching apiserver"
Dec 13 14:24:59.402947 kubelet[1988]: I1213 14:24:59.402905    1988 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:24:59.501743 kubelet[1988]: W1213 14:24:59.501699    1988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:24:59.501948 kubelet[1988]: E1213 14:24:59.501811    1988 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:59.502422 kubelet[1988]: W1213 14:24:59.502394    1988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:24:59.502562 kubelet[1988]: E1213 14:24:59.502482    1988 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal"
Dec 13 14:24:59.545517 kubelet[1988]: I1213 14:24:59.545435    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" podStartSLOduration=1.545408138 podStartE2EDuration="1.545408138s" podCreationTimestamp="2024-12-13 14:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:59.52780793 +0000 UTC m=+1.323689501" watchObservedRunningTime="2024-12-13 14:24:59.545408138 +0000 UTC m=+1.341289705"
Dec 13 14:24:59.559068 kubelet[1988]: I1213 14:24:59.558970    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" podStartSLOduration=3.558942867 podStartE2EDuration="3.558942867s" podCreationTimestamp="2024-12-13 14:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:59.546796295 +0000 UTC m=+1.342677862" watchObservedRunningTime="2024-12-13 14:24:59.558942867 +0000 UTC m=+1.354824427"
Dec 13 14:24:59.575769 kubelet[1988]: I1213 14:24:59.575681    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" podStartSLOduration=1.575660449 podStartE2EDuration="1.575660449s" podCreationTimestamp="2024-12-13 14:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:24:59.560029509 +0000 UTC m=+1.355911079" watchObservedRunningTime="2024-12-13 14:24:59.575660449 +0000 UTC m=+1.371542008"
Dec 13 14:25:02.355701 sudo[1404]: pam_unix(sudo:session): session closed for user root
Dec 13 14:25:02.399858 sshd[1401]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:02.405154 systemd-logind[1228]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:25:02.405429 systemd[1]: sshd@4-10.128.0.48:22-139.178.68.195:35790.service: Deactivated successfully.
Dec 13 14:25:02.406638 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:25:02.406938 systemd[1]: session-5.scope: Consumed 7.067s CPU time.
Dec 13 14:25:02.408088 systemd-logind[1228]: Removed session 5.
Dec 13 14:25:02.865150 kubelet[1988]: I1213 14:25:02.865105    1988 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:25:02.866330 env[1219]: time="2024-12-13T14:25:02.866276419Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:25:02.866920 kubelet[1988]: I1213 14:25:02.866556    1988 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:25:03.478808 systemd[1]: Created slice kubepods-besteffort-pod300be866_935b_46a8_a0a5_e260fa6c0fe4.slice.
Dec 13 14:25:03.512955 systemd[1]: Created slice kubepods-burstable-pod19081c00_6c70_4f02_9884_7e07fd459593.slice.
Dec 13 14:25:03.536899 kubelet[1988]: I1213 14:25:03.536852    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/300be866-935b-46a8-a0a5-e260fa6c0fe4-kube-proxy\") pod \"kube-proxy-dq8g2\" (UID: \"300be866-935b-46a8-a0a5-e260fa6c0fe4\") " pod="kube-system/kube-proxy-dq8g2"
Dec 13 14:25:03.537241 kubelet[1988]: I1213 14:25:03.537190    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19081c00-6c70-4f02-9884-7e07fd459593-clustermesh-secrets\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.537430 kubelet[1988]: I1213 14:25:03.537408    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/300be866-935b-46a8-a0a5-e260fa6c0fe4-xtables-lock\") pod \"kube-proxy-dq8g2\" (UID: \"300be866-935b-46a8-a0a5-e260fa6c0fe4\") " pod="kube-system/kube-proxy-dq8g2"
Dec 13 14:25:03.537592 kubelet[1988]: I1213 14:25:03.537569    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-run\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.537781 kubelet[1988]: I1213 14:25:03.537737    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-etc-cni-netd\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.537949 kubelet[1988]: I1213 14:25:03.537926    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hjc\" (UniqueName: \"kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-kube-api-access-r2hjc\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.538122 kubelet[1988]: I1213 14:25:03.538101    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cni-path\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.538414 kubelet[1988]: I1213 14:25:03.538356    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-hubble-tls\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.538640 kubelet[1988]: I1213 14:25:03.538617    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-bpf-maps\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.538823 kubelet[1988]: I1213 14:25:03.538799    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-hostproc\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.539023 kubelet[1988]: I1213 14:25:03.538985    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/300be866-935b-46a8-a0a5-e260fa6c0fe4-lib-modules\") pod \"kube-proxy-dq8g2\" (UID: \"300be866-935b-46a8-a0a5-e260fa6c0fe4\") " pod="kube-system/kube-proxy-dq8g2"
Dec 13 14:25:03.539301 kubelet[1988]: I1213 14:25:03.539261    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-xtables-lock\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.539537 kubelet[1988]: I1213 14:25:03.539500    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57qb\" (UniqueName: \"kubernetes.io/projected/300be866-935b-46a8-a0a5-e260fa6c0fe4-kube-api-access-p57qb\") pod \"kube-proxy-dq8g2\" (UID: \"300be866-935b-46a8-a0a5-e260fa6c0fe4\") " pod="kube-system/kube-proxy-dq8g2"
Dec 13 14:25:03.539707 kubelet[1988]: I1213 14:25:03.539684    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-lib-modules\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.539889 kubelet[1988]: I1213 14:25:03.539862    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19081c00-6c70-4f02-9884-7e07fd459593-cilium-config-path\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.540200 kubelet[1988]: I1213 14:25:03.540148    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-cgroup\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.540403 kubelet[1988]: I1213 14:25:03.540380    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-kernel\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.540589 kubelet[1988]: I1213 14:25:03.540557    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-net\") pod \"cilium-ljjjt\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") " pod="kube-system/cilium-ljjjt"
Dec 13 14:25:03.642980 kubelet[1988]: I1213 14:25:03.642935    1988 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 14:25:03.793226 env[1219]: time="2024-12-13T14:25:03.792470714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dq8g2,Uid:300be866-935b-46a8-a0a5-e260fa6c0fe4,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:03.821368 env[1219]: time="2024-12-13T14:25:03.821312043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljjjt,Uid:19081c00-6c70-4f02-9884-7e07fd459593,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:03.825999 env[1219]: time="2024-12-13T14:25:03.825425768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:03.826240 env[1219]: time="2024-12-13T14:25:03.825970273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:03.826396 env[1219]: time="2024-12-13T14:25:03.826221573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:03.827115 env[1219]: time="2024-12-13T14:25:03.827022804Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfeaaf2badcfbf96c24500edecf13ab4af7b404d3460c273e3bf3b222844e97d pid=2066 runtime=io.containerd.runc.v2
Dec 13 14:25:03.853557 systemd[1]: Started cri-containerd-cfeaaf2badcfbf96c24500edecf13ab4af7b404d3460c273e3bf3b222844e97d.scope.
Dec 13 14:25:03.890863 env[1219]: time="2024-12-13T14:25:03.890734505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:03.891471 env[1219]: time="2024-12-13T14:25:03.890817558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:03.891471 env[1219]: time="2024-12-13T14:25:03.890859397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:03.891471 env[1219]: time="2024-12-13T14:25:03.891193071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7 pid=2092 runtime=io.containerd.runc.v2
Dec 13 14:25:03.936078 systemd[1]: Started cri-containerd-a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7.scope.
Dec 13 14:25:03.940788 env[1219]: time="2024-12-13T14:25:03.940209783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dq8g2,Uid:300be866-935b-46a8-a0a5-e260fa6c0fe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfeaaf2badcfbf96c24500edecf13ab4af7b404d3460c273e3bf3b222844e97d\""
Dec 13 14:25:03.951998 env[1219]: time="2024-12-13T14:25:03.951937977Z" level=info msg="CreateContainer within sandbox \"cfeaaf2badcfbf96c24500edecf13ab4af7b404d3460c273e3bf3b222844e97d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:25:03.982335 env[1219]: time="2024-12-13T14:25:03.982245514Z" level=info msg="CreateContainer within sandbox \"cfeaaf2badcfbf96c24500edecf13ab4af7b404d3460c273e3bf3b222844e97d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"06d32b970fb953072bf845ef01d4c15ffc2a6e2965204a71574c981832ebd1c7\""
Dec 13 14:25:03.983726 env[1219]: time="2024-12-13T14:25:03.983672808Z" level=info msg="StartContainer for \"06d32b970fb953072bf845ef01d4c15ffc2a6e2965204a71574c981832ebd1c7\""
Dec 13 14:25:04.021620 systemd[1]: Started cri-containerd-06d32b970fb953072bf845ef01d4c15ffc2a6e2965204a71574c981832ebd1c7.scope.
Dec 13 14:25:04.054950 env[1219]: time="2024-12-13T14:25:04.054810468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ljjjt,Uid:19081c00-6c70-4f02-9884-7e07fd459593,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\""
Dec 13 14:25:04.060151 env[1219]: time="2024-12-13T14:25:04.060100834Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:25:04.083732 systemd[1]: Created slice kubepods-besteffort-poddb364bba_f5a4_4bdb_a7d6_d151ebbe6f04.slice.
Dec 13 14:25:04.130543 env[1219]: time="2024-12-13T14:25:04.130471672Z" level=info msg="StartContainer for \"06d32b970fb953072bf845ef01d4c15ffc2a6e2965204a71574c981832ebd1c7\" returns successfully"
Dec 13 14:25:04.147685 kubelet[1988]: I1213 14:25:04.147617    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-cilium-config-path\") pod \"cilium-operator-5d85765b45-gb2pf\" (UID: \"db364bba-f5a4-4bdb-a7d6-d151ebbe6f04\") " pod="kube-system/cilium-operator-5d85765b45-gb2pf"
Dec 13 14:25:04.148504 kubelet[1988]: I1213 14:25:04.148471    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-kube-api-access-zddmv\") pod \"cilium-operator-5d85765b45-gb2pf\" (UID: \"db364bba-f5a4-4bdb-a7d6-d151ebbe6f04\") " pod="kube-system/cilium-operator-5d85765b45-gb2pf"
Dec 13 14:25:04.387488 env[1219]: time="2024-12-13T14:25:04.387349867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gb2pf,Uid:db364bba-f5a4-4bdb-a7d6-d151ebbe6f04,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:04.410794 env[1219]: time="2024-12-13T14:25:04.410676145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:04.411047 env[1219]: time="2024-12-13T14:25:04.410734762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:04.411047 env[1219]: time="2024-12-13T14:25:04.410765738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:04.411841 env[1219]: time="2024-12-13T14:25:04.411396085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896 pid=2219 runtime=io.containerd.runc.v2
Dec 13 14:25:04.434693 systemd[1]: Started cri-containerd-0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896.scope.
Dec 13 14:25:04.536113 env[1219]: time="2024-12-13T14:25:04.536056791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gb2pf,Uid:db364bba-f5a4-4bdb-a7d6-d151ebbe6f04,Namespace:kube-system,Attempt:0,} returns sandbox id \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\""
Dec 13 14:25:05.772897 update_engine[1207]: I1213 14:25:05.772835  1207 update_attempter.cc:509] Updating boot flags...
Dec 13 14:25:06.878065 kubelet[1988]: I1213 14:25:06.877962    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dq8g2" podStartSLOduration=3.877936535 podStartE2EDuration="3.877936535s" podCreationTimestamp="2024-12-13 14:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:04.550047697 +0000 UTC m=+6.345929267" watchObservedRunningTime="2024-12-13 14:25:06.877936535 +0000 UTC m=+8.673818104"
Dec 13 14:25:13.049280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063431643.mount: Deactivated successfully.
Dec 13 14:25:16.495116 env[1219]: time="2024-12-13T14:25:16.495037659Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:16.498057 env[1219]: time="2024-12-13T14:25:16.498011772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:16.501083 env[1219]: time="2024-12-13T14:25:16.501019158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:16.502049 env[1219]: time="2024-12-13T14:25:16.502001271Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:25:16.505167 env[1219]: time="2024-12-13T14:25:16.505121666Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:25:16.507640 env[1219]: time="2024-12-13T14:25:16.507575619Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:25:16.525791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1456857090.mount: Deactivated successfully.
Dec 13 14:25:16.537716 env[1219]: time="2024-12-13T14:25:16.537666880Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\""
Dec 13 14:25:16.540971 env[1219]: time="2024-12-13T14:25:16.540923300Z" level=info msg="StartContainer for \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\""
Dec 13 14:25:16.577169 systemd[1]: Started cri-containerd-41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898.scope.
Dec 13 14:25:16.621793 env[1219]: time="2024-12-13T14:25:16.621615177Z" level=info msg="StartContainer for \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\" returns successfully"
Dec 13 14:25:16.634029 systemd[1]: cri-containerd-41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898.scope: Deactivated successfully.
Dec 13 14:25:17.521607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898-rootfs.mount: Deactivated successfully.
Dec 13 14:25:18.480170 env[1219]: time="2024-12-13T14:25:18.480103070Z" level=info msg="shim disconnected" id=41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898
Dec 13 14:25:18.480170 env[1219]: time="2024-12-13T14:25:18.480168098Z" level=warning msg="cleaning up after shim disconnected" id=41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898 namespace=k8s.io
Dec 13 14:25:18.480877 env[1219]: time="2024-12-13T14:25:18.480181824Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:18.492354 env[1219]: time="2024-12-13T14:25:18.492295788Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2403 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:18.559634 env[1219]: time="2024-12-13T14:25:18.558346598Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:25:18.584107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065964235.mount: Deactivated successfully.
Dec 13 14:25:18.597264 env[1219]: time="2024-12-13T14:25:18.597181690Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\""
Dec 13 14:25:18.598372 env[1219]: time="2024-12-13T14:25:18.598322268Z" level=info msg="StartContainer for \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\""
Dec 13 14:25:18.636025 systemd[1]: Started cri-containerd-ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb.scope.
Dec 13 14:25:18.704266 env[1219]: time="2024-12-13T14:25:18.704066722Z" level=info msg="StartContainer for \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\" returns successfully"
Dec 13 14:25:18.720868 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:25:18.722200 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:25:18.722710 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:25:18.727908 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:18.729625 systemd[1]: cri-containerd-ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb.scope: Deactivated successfully.
Dec 13 14:25:18.744986 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:18.772454 env[1219]: time="2024-12-13T14:25:18.772390696Z" level=info msg="shim disconnected" id=ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb
Dec 13 14:25:18.772454 env[1219]: time="2024-12-13T14:25:18.772442133Z" level=warning msg="cleaning up after shim disconnected" id=ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb namespace=k8s.io
Dec 13 14:25:18.772454 env[1219]: time="2024-12-13T14:25:18.772456734Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:18.784389 env[1219]: time="2024-12-13T14:25:18.784321634Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2469 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:19.578950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb-rootfs.mount: Deactivated successfully.
Dec 13 14:25:19.585906 env[1219]: time="2024-12-13T14:25:19.585852893Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:25:19.622784 env[1219]: time="2024-12-13T14:25:19.618121412Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\""
Dec 13 14:25:19.622784 env[1219]: time="2024-12-13T14:25:19.619314284Z" level=info msg="StartContainer for \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\""
Dec 13 14:25:19.659956 systemd[1]: Started cri-containerd-4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc.scope.
Dec 13 14:25:19.736112 systemd[1]: cri-containerd-4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc.scope: Deactivated successfully.
Dec 13 14:25:19.739398 env[1219]: time="2024-12-13T14:25:19.738714175Z" level=info msg="StartContainer for \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\" returns successfully"
Dec 13 14:25:19.865667 env[1219]: time="2024-12-13T14:25:19.864964360Z" level=info msg="shim disconnected" id=4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc
Dec 13 14:25:19.865667 env[1219]: time="2024-12-13T14:25:19.865035450Z" level=warning msg="cleaning up after shim disconnected" id=4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc namespace=k8s.io
Dec 13 14:25:19.865667 env[1219]: time="2024-12-13T14:25:19.865050934Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:19.896098 env[1219]: time="2024-12-13T14:25:19.896036129Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2526 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:20.129739 env[1219]: time="2024-12-13T14:25:20.129190633Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:20.133047 env[1219]: time="2024-12-13T14:25:20.132989699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:20.137077 env[1219]: time="2024-12-13T14:25:20.137034567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:20.137482 env[1219]: time="2024-12-13T14:25:20.137431076Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:25:20.141632 env[1219]: time="2024-12-13T14:25:20.141589051Z" level=info msg="CreateContainer within sandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:25:20.158632 env[1219]: time="2024-12-13T14:25:20.158562664Z" level=info msg="CreateContainer within sandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\""
Dec 13 14:25:20.159926 env[1219]: time="2024-12-13T14:25:20.159884107Z" level=info msg="StartContainer for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\""
Dec 13 14:25:20.185045 systemd[1]: Started cri-containerd-9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4.scope.
Dec 13 14:25:20.242397 env[1219]: time="2024-12-13T14:25:20.242312097Z" level=info msg="StartContainer for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" returns successfully"
Dec 13 14:25:20.569004 env[1219]: time="2024-12-13T14:25:20.568935230Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:25:20.581198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc-rootfs.mount: Deactivated successfully.
Dec 13 14:25:20.611017 env[1219]: time="2024-12-13T14:25:20.610950766Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\""
Dec 13 14:25:20.612593 env[1219]: time="2024-12-13T14:25:20.612552325Z" level=info msg="StartContainer for \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\""
Dec 13 14:25:20.660867 systemd[1]: Started cri-containerd-ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050.scope.
Dec 13 14:25:20.741193 systemd[1]: cri-containerd-ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050.scope: Deactivated successfully.
Dec 13 14:25:20.743281 env[1219]: time="2024-12-13T14:25:20.743229818Z" level=info msg="StartContainer for \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\" returns successfully"
Dec 13 14:25:20.868796 env[1219]: time="2024-12-13T14:25:20.868607553Z" level=info msg="shim disconnected" id=ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050
Dec 13 14:25:20.869223 env[1219]: time="2024-12-13T14:25:20.869187647Z" level=warning msg="cleaning up after shim disconnected" id=ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050 namespace=k8s.io
Dec 13 14:25:20.869388 env[1219]: time="2024-12-13T14:25:20.869360210Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:20.892986 env[1219]: time="2024-12-13T14:25:20.892922103Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2617 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:21.578910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050-rootfs.mount: Deactivated successfully.
Dec 13 14:25:21.592033 env[1219]: time="2024-12-13T14:25:21.591807893Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:25:21.619564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337580357.mount: Deactivated successfully.
Dec 13 14:25:21.624621 kubelet[1988]: I1213 14:25:21.624546    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gb2pf" podStartSLOduration=3.024203318 podStartE2EDuration="18.624520055s" podCreationTimestamp="2024-12-13 14:25:03 +0000 UTC" firstStartedPulling="2024-12-13 14:25:04.53878605 +0000 UTC m=+6.334667598" lastFinishedPulling="2024-12-13 14:25:20.139102774 +0000 UTC m=+21.934984335" observedRunningTime="2024-12-13 14:25:20.690777017 +0000 UTC m=+22.486658583" watchObservedRunningTime="2024-12-13 14:25:21.624520055 +0000 UTC m=+23.420401624"
Dec 13 14:25:21.628774 env[1219]: time="2024-12-13T14:25:21.628703583Z" level=info msg="CreateContainer within sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\""
Dec 13 14:25:21.629910 env[1219]: time="2024-12-13T14:25:21.629869172Z" level=info msg="StartContainer for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\""
Dec 13 14:25:21.666479 systemd[1]: Started cri-containerd-65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be.scope.
Dec 13 14:25:21.727546 env[1219]: time="2024-12-13T14:25:21.727480364Z" level=info msg="StartContainer for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" returns successfully"
Dec 13 14:25:22.007349 kubelet[1988]: I1213 14:25:22.006362    1988 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 14:25:22.062852 systemd[1]: Created slice kubepods-burstable-podfd3247b7_5c27_4e18_a0e8_3cb23894fc04.slice.
Dec 13 14:25:22.078584 systemd[1]: Created slice kubepods-burstable-podf9dd7887_a449_4781_a7ba_cb96b3bd0b92.slice.
Dec 13 14:25:22.111245 kubelet[1988]: I1213 14:25:22.111129    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhcf\" (UniqueName: \"kubernetes.io/projected/fd3247b7-5c27-4e18-a0e8-3cb23894fc04-kube-api-access-hbhcf\") pod \"coredns-6f6b679f8f-dhjgc\" (UID: \"fd3247b7-5c27-4e18-a0e8-3cb23894fc04\") " pod="kube-system/coredns-6f6b679f8f-dhjgc"
Dec 13 14:25:22.111597 kubelet[1988]: I1213 14:25:22.111531    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9dd7887-a449-4781-a7ba-cb96b3bd0b92-config-volume\") pod \"coredns-6f6b679f8f-2dgvv\" (UID: \"f9dd7887-a449-4781-a7ba-cb96b3bd0b92\") " pod="kube-system/coredns-6f6b679f8f-2dgvv"
Dec 13 14:25:22.111820 kubelet[1988]: I1213 14:25:22.111795    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4lnd\" (UniqueName: \"kubernetes.io/projected/f9dd7887-a449-4781-a7ba-cb96b3bd0b92-kube-api-access-d4lnd\") pod \"coredns-6f6b679f8f-2dgvv\" (UID: \"f9dd7887-a449-4781-a7ba-cb96b3bd0b92\") " pod="kube-system/coredns-6f6b679f8f-2dgvv"
Dec 13 14:25:22.112102 kubelet[1988]: I1213 14:25:22.112080    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd3247b7-5c27-4e18-a0e8-3cb23894fc04-config-volume\") pod \"coredns-6f6b679f8f-dhjgc\" (UID: \"fd3247b7-5c27-4e18-a0e8-3cb23894fc04\") " pod="kube-system/coredns-6f6b679f8f-dhjgc"
Dec 13 14:25:22.369473 env[1219]: time="2024-12-13T14:25:22.368908860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dhjgc,Uid:fd3247b7-5c27-4e18-a0e8-3cb23894fc04,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:22.404397 env[1219]: time="2024-12-13T14:25:22.404335842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2dgvv,Uid:f9dd7887-a449-4781-a7ba-cb96b3bd0b92,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:24.260680 systemd-networkd[1023]: cilium_host: Link UP
Dec 13 14:25:24.263072 systemd-networkd[1023]: cilium_net: Link UP
Dec 13 14:25:24.263082 systemd-networkd[1023]: cilium_net: Gained carrier
Dec 13 14:25:24.271789 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:25:24.274324 systemd-networkd[1023]: cilium_host: Gained carrier
Dec 13 14:25:24.424685 systemd-networkd[1023]: cilium_vxlan: Link UP
Dec 13 14:25:24.424697 systemd-networkd[1023]: cilium_vxlan: Gained carrier
Dec 13 14:25:24.567327 systemd-networkd[1023]: cilium_host: Gained IPv6LL
Dec 13 14:25:24.705800 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:25:24.983005 systemd-networkd[1023]: cilium_net: Gained IPv6LL
Dec 13 14:25:25.548235 systemd-networkd[1023]: lxc_health: Link UP
Dec 13 14:25:25.562160 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:25:25.563428 systemd-networkd[1023]: lxc_health: Gained carrier
Dec 13 14:25:25.855472 kubelet[1988]: I1213 14:25:25.855276    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ljjjt" podStartSLOduration=10.410392809 podStartE2EDuration="22.855233291s" podCreationTimestamp="2024-12-13 14:25:03 +0000 UTC" firstStartedPulling="2024-12-13 14:25:04.059028098 +0000 UTC m=+5.854909658" lastFinishedPulling="2024-12-13 14:25:16.503868582 +0000 UTC m=+18.299750140" observedRunningTime="2024-12-13 14:25:22.628867485 +0000 UTC m=+24.424749058" watchObservedRunningTime="2024-12-13 14:25:25.855233291 +0000 UTC m=+27.651114863"
Dec 13 14:25:25.937357 systemd-networkd[1023]: lxc5226c905a178: Link UP
Dec 13 14:25:25.955834 kernel: eth0: renamed from tmp225ca
Dec 13 14:25:25.970986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5226c905a178: link becomes ready
Dec 13 14:25:25.971318 systemd-networkd[1023]: lxc5226c905a178: Gained carrier
Dec 13 14:25:25.990317 systemd-networkd[1023]: lxcaa18b9acd35f: Link UP
Dec 13 14:25:25.997023 kernel: eth0: renamed from tmped747
Dec 13 14:25:26.017841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaa18b9acd35f: link becomes ready
Dec 13 14:25:26.020990 systemd-networkd[1023]: lxcaa18b9acd35f: Gained carrier
Dec 13 14:25:26.391011 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL
Dec 13 14:25:27.094971 systemd-networkd[1023]: lxcaa18b9acd35f: Gained IPv6LL
Dec 13 14:25:27.158973 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Dec 13 14:25:27.799389 systemd-networkd[1023]: lxc5226c905a178: Gained IPv6LL
Dec 13 14:25:31.108284 env[1219]: time="2024-12-13T14:25:31.108168873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:31.108907 env[1219]: time="2024-12-13T14:25:31.108224158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:31.108907 env[1219]: time="2024-12-13T14:25:31.108261569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:31.108907 env[1219]: time="2024-12-13T14:25:31.108511033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed7478653465283553d75e637595c887e6e59113831815268fa80bfc981d1733 pid=3157 runtime=io.containerd.runc.v2
Dec 13 14:25:31.131801 env[1219]: time="2024-12-13T14:25:31.129938692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:31.131801 env[1219]: time="2024-12-13T14:25:31.130040053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:31.131801 env[1219]: time="2024-12-13T14:25:31.130092011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:31.131801 env[1219]: time="2024-12-13T14:25:31.130302495Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/225caf293928cc4acab941b21bedbba2cfd0ded148c75ba3e5e2cf7182578790 pid=3174 runtime=io.containerd.runc.v2
Dec 13 14:25:31.194339 systemd[1]: Started cri-containerd-ed7478653465283553d75e637595c887e6e59113831815268fa80bfc981d1733.scope.
Dec 13 14:25:31.208607 systemd[1]: Started cri-containerd-225caf293928cc4acab941b21bedbba2cfd0ded148c75ba3e5e2cf7182578790.scope.
Dec 13 14:25:31.300253 env[1219]: time="2024-12-13T14:25:31.300190689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2dgvv,Uid:f9dd7887-a449-4781-a7ba-cb96b3bd0b92,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed7478653465283553d75e637595c887e6e59113831815268fa80bfc981d1733\""
Dec 13 14:25:31.306527 env[1219]: time="2024-12-13T14:25:31.306476909Z" level=info msg="CreateContainer within sandbox \"ed7478653465283553d75e637595c887e6e59113831815268fa80bfc981d1733\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:25:31.341051 env[1219]: time="2024-12-13T14:25:31.340989127Z" level=info msg="CreateContainer within sandbox \"ed7478653465283553d75e637595c887e6e59113831815268fa80bfc981d1733\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"074bb5e75c8a3553052a39e3c9192d32facdf8e84146a9446a08c610e0490b84\""
Dec 13 14:25:31.342295 env[1219]: time="2024-12-13T14:25:31.342253026Z" level=info msg="StartContainer for \"074bb5e75c8a3553052a39e3c9192d32facdf8e84146a9446a08c610e0490b84\""
Dec 13 14:25:31.347087 env[1219]: time="2024-12-13T14:25:31.347038733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dhjgc,Uid:fd3247b7-5c27-4e18-a0e8-3cb23894fc04,Namespace:kube-system,Attempt:0,} returns sandbox id \"225caf293928cc4acab941b21bedbba2cfd0ded148c75ba3e5e2cf7182578790\""
Dec 13 14:25:31.350611 env[1219]: time="2024-12-13T14:25:31.350561825Z" level=info msg="CreateContainer within sandbox \"225caf293928cc4acab941b21bedbba2cfd0ded148c75ba3e5e2cf7182578790\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:25:31.374361 env[1219]: time="2024-12-13T14:25:31.373080280Z" level=info msg="CreateContainer within sandbox \"225caf293928cc4acab941b21bedbba2cfd0ded148c75ba3e5e2cf7182578790\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f662ef7e7bb95b9368ae80350ad37441523981f47b842a01156685e6f112b5ff\""
Dec 13 14:25:31.375533 env[1219]: time="2024-12-13T14:25:31.375490720Z" level=info msg="StartContainer for \"f662ef7e7bb95b9368ae80350ad37441523981f47b842a01156685e6f112b5ff\""
Dec 13 14:25:31.399108 systemd[1]: Started cri-containerd-074bb5e75c8a3553052a39e3c9192d32facdf8e84146a9446a08c610e0490b84.scope.
Dec 13 14:25:31.441924 systemd[1]: Started cri-containerd-f662ef7e7bb95b9368ae80350ad37441523981f47b842a01156685e6f112b5ff.scope.
Dec 13 14:25:31.486214 env[1219]: time="2024-12-13T14:25:31.486158918Z" level=info msg="StartContainer for \"074bb5e75c8a3553052a39e3c9192d32facdf8e84146a9446a08c610e0490b84\" returns successfully"
Dec 13 14:25:31.517490 env[1219]: time="2024-12-13T14:25:31.517418790Z" level=info msg="StartContainer for \"f662ef7e7bb95b9368ae80350ad37441523981f47b842a01156685e6f112b5ff\" returns successfully"
Dec 13 14:25:31.647206 kubelet[1988]: I1213 14:25:31.646503    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2dgvv" podStartSLOduration=28.646477347 podStartE2EDuration="28.646477347s" podCreationTimestamp="2024-12-13 14:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:31.64535125 +0000 UTC m=+33.441232821" watchObservedRunningTime="2024-12-13 14:25:31.646477347 +0000 UTC m=+33.442358909"
Dec 13 14:25:31.662011 kubelet[1988]: I1213 14:25:31.661940    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dhjgc" podStartSLOduration=28.661906409 podStartE2EDuration="28.661906409s" podCreationTimestamp="2024-12-13 14:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:31.660637546 +0000 UTC m=+33.456519115" watchObservedRunningTime="2024-12-13 14:25:31.661906409 +0000 UTC m=+33.457787979"
Dec 13 14:25:32.118696 systemd[1]: run-containerd-runc-k8s.io-ed7478653465283553d75e637595c887e6e59113831815268fa80bfc981d1733-runc.ys8665.mount: Deactivated successfully.
Dec 13 14:25:52.020452 systemd[1]: Started sshd@5-10.128.0.48:22-139.178.68.195:60138.service.
Dec 13 14:25:52.304955 sshd[3317]: Accepted publickey for core from 139.178.68.195 port 60138 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:52.307490 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:52.316295 systemd-logind[1228]: New session 6 of user core.
Dec 13 14:25:52.317334 systemd[1]: Started session-6.scope.
Dec 13 14:25:52.611615 sshd[3317]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:52.616260 systemd[1]: sshd@5-10.128.0.48:22-139.178.68.195:60138.service: Deactivated successfully.
Dec 13 14:25:52.617509 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:25:52.618445 systemd-logind[1228]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:25:52.619727 systemd-logind[1228]: Removed session 6.
Dec 13 14:25:57.659907 systemd[1]: Started sshd@6-10.128.0.48:22-139.178.68.195:38606.service.
Dec 13 14:25:57.944687 sshd[3331]: Accepted publickey for core from 139.178.68.195 port 38606 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:57.946925 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:57.954484 systemd[1]: Started session-7.scope.
Dec 13 14:25:57.955857 systemd-logind[1228]: New session 7 of user core.
Dec 13 14:25:58.236174 sshd[3331]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:58.242088 systemd[1]: sshd@6-10.128.0.48:22-139.178.68.195:38606.service: Deactivated successfully.
Dec 13 14:25:58.243241 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:25:58.244830 systemd-logind[1228]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:25:58.246167 systemd-logind[1228]: Removed session 7.
Dec 13 14:26:03.283504 systemd[1]: Started sshd@7-10.128.0.48:22-139.178.68.195:38610.service.
Dec 13 14:26:03.567475 sshd[3348]: Accepted publickey for core from 139.178.68.195 port 38610 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:03.569472 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:03.576582 systemd-logind[1228]: New session 8 of user core.
Dec 13 14:26:03.577507 systemd[1]: Started session-8.scope.
Dec 13 14:26:03.855407 sshd[3348]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:03.860285 systemd[1]: sshd@7-10.128.0.48:22-139.178.68.195:38610.service: Deactivated successfully.
Dec 13 14:26:03.861542 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:26:03.862527 systemd-logind[1228]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:26:03.864264 systemd-logind[1228]: Removed session 8.
Dec 13 14:26:08.903147 systemd[1]: Started sshd@8-10.128.0.48:22-139.178.68.195:34496.service.
Dec 13 14:26:09.188888 sshd[3363]: Accepted publickey for core from 139.178.68.195 port 34496 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:09.191158 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:09.198833 systemd[1]: Started session-9.scope.
Dec 13 14:26:09.199644 systemd-logind[1228]: New session 9 of user core.
Dec 13 14:26:09.481362 sshd[3363]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:09.486350 systemd[1]: sshd@8-10.128.0.48:22-139.178.68.195:34496.service: Deactivated successfully.
Dec 13 14:26:09.487572 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:26:09.488630 systemd-logind[1228]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:26:09.490056 systemd-logind[1228]: Removed session 9.
Dec 13 14:26:14.527489 systemd[1]: Started sshd@9-10.128.0.48:22-139.178.68.195:34508.service.
Dec 13 14:26:14.814416 sshd[3376]: Accepted publickey for core from 139.178.68.195 port 34508 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:14.816450 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:14.823893 systemd[1]: Started session-10.scope.
Dec 13 14:26:14.824529 systemd-logind[1228]: New session 10 of user core.
Dec 13 14:26:15.104201 sshd[3376]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:15.108820 systemd[1]: sshd@9-10.128.0.48:22-139.178.68.195:34508.service: Deactivated successfully.
Dec 13 14:26:15.110015 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:26:15.110983 systemd-logind[1228]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:26:15.112268 systemd-logind[1228]: Removed session 10.
Dec 13 14:26:15.151555 systemd[1]: Started sshd@10-10.128.0.48:22-139.178.68.195:34518.service.
Dec 13 14:26:15.437720 sshd[3388]: Accepted publickey for core from 139.178.68.195 port 34518 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:15.439612 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:15.447877 systemd-logind[1228]: New session 11 of user core.
Dec 13 14:26:15.447913 systemd[1]: Started session-11.scope.
Dec 13 14:26:15.781604 sshd[3388]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:15.787733 systemd-logind[1228]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:26:15.788205 systemd[1]: sshd@10-10.128.0.48:22-139.178.68.195:34518.service: Deactivated successfully.
Dec 13 14:26:15.789362 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:26:15.791182 systemd-logind[1228]: Removed session 11.
Dec 13 14:26:15.828087 systemd[1]: Started sshd@11-10.128.0.48:22-139.178.68.195:34530.service.
Dec 13 14:26:16.112489 sshd[3398]: Accepted publickey for core from 139.178.68.195 port 34530 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:16.114550 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:16.121696 systemd[1]: Started session-12.scope.
Dec 13 14:26:16.122355 systemd-logind[1228]: New session 12 of user core.
Dec 13 14:26:16.411049 sshd[3398]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:16.415538 systemd[1]: sshd@11-10.128.0.48:22-139.178.68.195:34530.service: Deactivated successfully.
Dec 13 14:26:16.416652 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:26:16.417689 systemd-logind[1228]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:26:16.419040 systemd-logind[1228]: Removed session 12.
Dec 13 14:26:21.458098 systemd[1]: Started sshd@12-10.128.0.48:22-139.178.68.195:50774.service.
Dec 13 14:26:21.742623 sshd[3410]: Accepted publickey for core from 139.178.68.195 port 50774 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:21.745092 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:21.753926 systemd[1]: Started session-13.scope.
Dec 13 14:26:21.754786 systemd-logind[1228]: New session 13 of user core.
Dec 13 14:26:22.044069 sshd[3410]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:22.048836 systemd[1]: sshd@12-10.128.0.48:22-139.178.68.195:50774.service: Deactivated successfully.
Dec 13 14:26:22.050180 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:26:22.051055 systemd-logind[1228]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:26:22.052429 systemd-logind[1228]: Removed session 13.
Dec 13 14:26:27.091381 systemd[1]: Started sshd@13-10.128.0.48:22-139.178.68.195:54636.service.
Dec 13 14:26:27.379123 sshd[3422]: Accepted publickey for core from 139.178.68.195 port 54636 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:27.381379 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:27.388470 systemd[1]: Started session-14.scope.
Dec 13 14:26:27.389417 systemd-logind[1228]: New session 14 of user core.
Dec 13 14:26:27.673271 sshd[3422]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:27.677910 systemd[1]: sshd@13-10.128.0.48:22-139.178.68.195:54636.service: Deactivated successfully.
Dec 13 14:26:27.679135 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:26:27.680112 systemd-logind[1228]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:26:27.681711 systemd-logind[1228]: Removed session 14.
Dec 13 14:26:27.719489 systemd[1]: Started sshd@14-10.128.0.48:22-139.178.68.195:54652.service.
Dec 13 14:26:28.001038 sshd[3433]: Accepted publickey for core from 139.178.68.195 port 54652 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:28.002711 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:28.010083 systemd[1]: Started session-15.scope.
Dec 13 14:26:28.011217 systemd-logind[1228]: New session 15 of user core.
Dec 13 14:26:28.354218 sshd[3433]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:28.358821 systemd[1]: sshd@14-10.128.0.48:22-139.178.68.195:54652.service: Deactivated successfully.
Dec 13 14:26:28.360092 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:26:28.361315 systemd-logind[1228]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:26:28.363105 systemd-logind[1228]: Removed session 15.
Dec 13 14:26:28.401710 systemd[1]: Started sshd@15-10.128.0.48:22-139.178.68.195:54656.service.
Dec 13 14:26:28.685958 sshd[3442]: Accepted publickey for core from 139.178.68.195 port 54656 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:28.687476 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:28.694697 systemd[1]: Started session-16.scope.
Dec 13 14:26:28.695613 systemd-logind[1228]: New session 16 of user core.
Dec 13 14:26:30.569429 sshd[3442]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:30.575066 systemd[1]: sshd@15-10.128.0.48:22-139.178.68.195:54656.service: Deactivated successfully.
Dec 13 14:26:30.576118 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:26:30.577457 systemd-logind[1228]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:26:30.579072 systemd-logind[1228]: Removed session 16.
Dec 13 14:26:30.616559 systemd[1]: Started sshd@16-10.128.0.48:22-139.178.68.195:54664.service.
Dec 13 14:26:30.904200 sshd[3459]: Accepted publickey for core from 139.178.68.195 port 54664 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:30.905777 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:30.912873 systemd-logind[1228]: New session 17 of user core.
Dec 13 14:26:30.913591 systemd[1]: Started session-17.scope.
Dec 13 14:26:31.334354 sshd[3459]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:31.338740 systemd[1]: sshd@16-10.128.0.48:22-139.178.68.195:54664.service: Deactivated successfully.
Dec 13 14:26:31.339988 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:26:31.341050 systemd-logind[1228]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:26:31.342292 systemd-logind[1228]: Removed session 17.
Dec 13 14:26:31.381908 systemd[1]: Started sshd@17-10.128.0.48:22-139.178.68.195:54676.service.
Dec 13 14:26:31.671377 sshd[3470]: Accepted publickey for core from 139.178.68.195 port 54676 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:31.673714 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:31.680989 systemd[1]: Started session-18.scope.
Dec 13 14:26:31.682401 systemd-logind[1228]: New session 18 of user core.
Dec 13 14:26:31.959086 sshd[3470]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:31.964236 systemd[1]: sshd@17-10.128.0.48:22-139.178.68.195:54676.service: Deactivated successfully.
Dec 13 14:26:31.965659 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:26:31.967264 systemd-logind[1228]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:26:31.968644 systemd-logind[1228]: Removed session 18.
Dec 13 14:26:37.005189 systemd[1]: Started sshd@18-10.128.0.48:22-139.178.68.195:49844.service.
Dec 13 14:26:37.291544 sshd[3487]: Accepted publickey for core from 139.178.68.195 port 49844 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:37.293927 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:37.301463 systemd[1]: Started session-19.scope.
Dec 13 14:26:37.302113 systemd-logind[1228]: New session 19 of user core.
Dec 13 14:26:37.574632 sshd[3487]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:37.579358 systemd[1]: sshd@18-10.128.0.48:22-139.178.68.195:49844.service: Deactivated successfully.
Dec 13 14:26:37.580568 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:26:37.581598 systemd-logind[1228]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:26:37.582912 systemd-logind[1228]: Removed session 19.
Dec 13 14:26:42.624886 systemd[1]: Started sshd@19-10.128.0.48:22-139.178.68.195:49848.service.
Dec 13 14:26:42.917560 sshd[3499]: Accepted publickey for core from 139.178.68.195 port 49848 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:42.919847 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:42.926855 systemd-logind[1228]: New session 20 of user core.
Dec 13 14:26:42.927928 systemd[1]: Started session-20.scope.
Dec 13 14:26:43.204898 sshd[3499]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:43.209521 systemd[1]: sshd@19-10.128.0.48:22-139.178.68.195:49848.service: Deactivated successfully.
Dec 13 14:26:43.210691 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:26:43.211616 systemd-logind[1228]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:26:43.212987 systemd-logind[1228]: Removed session 20.
Dec 13 14:26:48.250425 systemd[1]: Started sshd@20-10.128.0.48:22-139.178.68.195:52186.service.
Dec 13 14:26:48.533612 sshd[3511]: Accepted publickey for core from 139.178.68.195 port 52186 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:48.535517 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:48.542949 systemd[1]: Started session-21.scope.
Dec 13 14:26:48.544088 systemd-logind[1228]: New session 21 of user core.
Dec 13 14:26:48.829650 sshd[3511]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:48.834541 systemd[1]: sshd@20-10.128.0.48:22-139.178.68.195:52186.service: Deactivated successfully.
Dec 13 14:26:48.835660 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:26:48.836550 systemd-logind[1228]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:26:48.838134 systemd-logind[1228]: Removed session 21.
Dec 13 14:26:48.878019 systemd[1]: Started sshd@21-10.128.0.48:22-139.178.68.195:52200.service.
Dec 13 14:26:49.165260 sshd[3523]: Accepted publickey for core from 139.178.68.195 port 52200 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:49.167547 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:49.174350 systemd[1]: Started session-22.scope.
Dec 13 14:26:49.175407 systemd-logind[1228]: New session 22 of user core.
Dec 13 14:26:50.792811 env[1219]: time="2024-12-13T14:26:50.788282057Z" level=info msg="StopContainer for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" with timeout 30 (s)"
Dec 13 14:26:50.795913 systemd[1]: run-containerd-runc-k8s.io-65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be-runc.bk6Ge1.mount: Deactivated successfully.
Dec 13 14:26:50.796532 env[1219]: time="2024-12-13T14:26:50.796468637Z" level=info msg="Stop container \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" with signal terminated"
Dec 13 14:26:50.816305 systemd[1]: cri-containerd-9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4.scope: Deactivated successfully.
Dec 13 14:26:50.832659 env[1219]: time="2024-12-13T14:26:50.832576413Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:26:50.845853 env[1219]: time="2024-12-13T14:26:50.845720197Z" level=info msg="StopContainer for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" with timeout 2 (s)"
Dec 13 14:26:50.846318 env[1219]: time="2024-12-13T14:26:50.846262958Z" level=info msg="Stop container \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" with signal terminated"
Dec 13 14:26:50.860641 systemd-networkd[1023]: lxc_health: Link DOWN
Dec 13 14:26:50.860654 systemd-networkd[1023]: lxc_health: Lost carrier
Dec 13 14:26:50.870547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4-rootfs.mount: Deactivated successfully.
Dec 13 14:26:50.886716 systemd[1]: cri-containerd-65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be.scope: Deactivated successfully.
Dec 13 14:26:50.887119 systemd[1]: cri-containerd-65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be.scope: Consumed 9.389s CPU time.
Dec 13 14:26:50.907129 env[1219]: time="2024-12-13T14:26:50.907062032Z" level=info msg="shim disconnected" id=9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4
Dec 13 14:26:50.907129 env[1219]: time="2024-12-13T14:26:50.907125185Z" level=warning msg="cleaning up after shim disconnected" id=9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4 namespace=k8s.io
Dec 13 14:26:50.907129 env[1219]: time="2024-12-13T14:26:50.907139082Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:50.922935 env[1219]: time="2024-12-13T14:26:50.922881814Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3590 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:50.926482 env[1219]: time="2024-12-13T14:26:50.926422227Z" level=info msg="StopContainer for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" returns successfully"
Dec 13 14:26:50.927953 env[1219]: time="2024-12-13T14:26:50.927910347Z" level=info msg="StopPodSandbox for \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\""
Dec 13 14:26:50.928359 env[1219]: time="2024-12-13T14:26:50.928304259Z" level=info msg="Container to stop \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:50.931365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896-shm.mount: Deactivated successfully.
Dec 13 14:26:50.951615 systemd[1]: cri-containerd-0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896.scope: Deactivated successfully.
Dec 13 14:26:50.953474 env[1219]: time="2024-12-13T14:26:50.953413326Z" level=info msg="shim disconnected" id=65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be
Dec 13 14:26:50.954326 env[1219]: time="2024-12-13T14:26:50.954274594Z" level=warning msg="cleaning up after shim disconnected" id=65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be namespace=k8s.io
Dec 13 14:26:50.954524 env[1219]: time="2024-12-13T14:26:50.954499359Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:50.971744 env[1219]: time="2024-12-13T14:26:50.971668503Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3617 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:50.974898 env[1219]: time="2024-12-13T14:26:50.974839499Z" level=info msg="StopContainer for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" returns successfully"
Dec 13 14:26:50.975547 env[1219]: time="2024-12-13T14:26:50.975490902Z" level=info msg="StopPodSandbox for \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\""
Dec 13 14:26:50.975688 env[1219]: time="2024-12-13T14:26:50.975585969Z" level=info msg="Container to stop \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:50.975688 env[1219]: time="2024-12-13T14:26:50.975610782Z" level=info msg="Container to stop \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:50.975688 env[1219]: time="2024-12-13T14:26:50.975629876Z" level=info msg="Container to stop \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:50.975688 env[1219]: time="2024-12-13T14:26:50.975658437Z" level=info msg="Container to stop \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:50.975688 env[1219]: time="2024-12-13T14:26:50.975676318Z" level=info msg="Container to stop \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:26:50.986212 systemd[1]: cri-containerd-a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7.scope: Deactivated successfully.
Dec 13 14:26:51.006987 env[1219]: time="2024-12-13T14:26:51.006928491Z" level=info msg="shim disconnected" id=0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896
Dec 13 14:26:51.007871 env[1219]: time="2024-12-13T14:26:51.007827126Z" level=warning msg="cleaning up after shim disconnected" id=0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896 namespace=k8s.io
Dec 13 14:26:51.007871 env[1219]: time="2024-12-13T14:26:51.007863416Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:51.022565 env[1219]: time="2024-12-13T14:26:51.022495175Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:51.023003 env[1219]: time="2024-12-13T14:26:51.022963418Z" level=info msg="TearDown network for sandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" successfully"
Dec 13 14:26:51.023133 env[1219]: time="2024-12-13T14:26:51.023004621Z" level=info msg="StopPodSandbox for \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" returns successfully"
Dec 13 14:26:51.040237 env[1219]: time="2024-12-13T14:26:51.039618370Z" level=info msg="shim disconnected" id=a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7
Dec 13 14:26:51.040772 env[1219]: time="2024-12-13T14:26:51.040708634Z" level=warning msg="cleaning up after shim disconnected" id=a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7 namespace=k8s.io
Dec 13 14:26:51.041015 env[1219]: time="2024-12-13T14:26:51.040990928Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:51.056884 env[1219]: time="2024-12-13T14:26:51.056697332Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3674 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:51.058174 env[1219]: time="2024-12-13T14:26:51.058124476Z" level=info msg="TearDown network for sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" successfully"
Dec 13 14:26:51.058389 env[1219]: time="2024-12-13T14:26:51.058358039Z" level=info msg="StopPodSandbox for \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" returns successfully"
Dec 13 14:26:51.117137 kubelet[1988]: I1213 14:26:51.117069    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-hubble-tls\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117137 kubelet[1988]: I1213 14:26:51.117134    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-cgroup\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117137 kubelet[1988]: I1213 14:26:51.117176    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-etc-cni-netd\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117972 kubelet[1988]: I1213 14:26:51.117208    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cni-path\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117972 kubelet[1988]: I1213 14:26:51.117232    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-lib-modules\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117972 kubelet[1988]: I1213 14:26:51.117257    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-run\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117972 kubelet[1988]: I1213 14:26:51.117279    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-net\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.117972 kubelet[1988]: I1213 14:26:51.117313    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-kube-api-access-zddmv\") pod \"db364bba-f5a4-4bdb-a7d6-d151ebbe6f04\" (UID: \"db364bba-f5a4-4bdb-a7d6-d151ebbe6f04\") "
Dec 13 14:26:51.117972 kubelet[1988]: I1213 14:26:51.117338    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-hostproc\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118308 kubelet[1988]: I1213 14:26:51.117369    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-bpf-maps\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118308 kubelet[1988]: I1213 14:26:51.117395    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-xtables-lock\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118308 kubelet[1988]: I1213 14:26:51.117447    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19081c00-6c70-4f02-9884-7e07fd459593-cilium-config-path\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118308 kubelet[1988]: I1213 14:26:51.117477    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-cilium-config-path\") pod \"db364bba-f5a4-4bdb-a7d6-d151ebbe6f04\" (UID: \"db364bba-f5a4-4bdb-a7d6-d151ebbe6f04\") "
Dec 13 14:26:51.118308 kubelet[1988]: I1213 14:26:51.117527    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19081c00-6c70-4f02-9884-7e07fd459593-clustermesh-secrets\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118308 kubelet[1988]: I1213 14:26:51.117565    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2hjc\" (UniqueName: \"kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-kube-api-access-r2hjc\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118637 kubelet[1988]: I1213 14:26:51.117592    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-kernel\") pod \"19081c00-6c70-4f02-9884-7e07fd459593\" (UID: \"19081c00-6c70-4f02-9884-7e07fd459593\") "
Dec 13 14:26:51.118637 kubelet[1988]: I1213 14:26:51.117698    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.118964 kubelet[1988]: I1213 14:26:51.118917    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-hostproc" (OuterVolumeSpecName: "hostproc") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.119085 kubelet[1988]: I1213 14:26:51.118980    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.119085 kubelet[1988]: I1213 14:26:51.119008    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.120113 kubelet[1988]: I1213 14:26:51.120067    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.120316 kubelet[1988]: I1213 14:26:51.120292    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.120479 kubelet[1988]: I1213 14:26:51.120458    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cni-path" (OuterVolumeSpecName: "cni-path") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.120628 kubelet[1988]: I1213 14:26:51.120608    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.120805 kubelet[1988]: I1213 14:26:51.120784    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.120965 kubelet[1988]: I1213 14:26:51.120944    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:51.122036 kubelet[1988]: I1213 14:26:51.121996    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19081c00-6c70-4f02-9884-7e07fd459593-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:26:51.127107 kubelet[1988]: I1213 14:26:51.127063    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:51.130207 kubelet[1988]: I1213 14:26:51.130162    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db364bba-f5a4-4bdb-a7d6-d151ebbe6f04" (UID: "db364bba-f5a4-4bdb-a7d6-d151ebbe6f04"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:26:51.130365 kubelet[1988]: I1213 14:26:51.130298    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-kube-api-access-zddmv" (OuterVolumeSpecName: "kube-api-access-zddmv") pod "db364bba-f5a4-4bdb-a7d6-d151ebbe6f04" (UID: "db364bba-f5a4-4bdb-a7d6-d151ebbe6f04"). InnerVolumeSpecName "kube-api-access-zddmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:51.132436 kubelet[1988]: I1213 14:26:51.132398    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19081c00-6c70-4f02-9884-7e07fd459593-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:26:51.133706 kubelet[1988]: I1213 14:26:51.133651    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-kube-api-access-r2hjc" (OuterVolumeSpecName: "kube-api-access-r2hjc") pod "19081c00-6c70-4f02-9884-7e07fd459593" (UID: "19081c00-6c70-4f02-9884-7e07fd459593"). InnerVolumeSpecName "kube-api-access-r2hjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:51.218201 kubelet[1988]: I1213 14:26:51.218149    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-run\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218201 kubelet[1988]: I1213 14:26:51.218209    1988 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-net\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218229    1988 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-kube-api-access-zddmv\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218245    1988 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-hostproc\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218260    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04-cilium-config-path\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218274    1988 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-bpf-maps\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218286    1988 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-xtables-lock\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218301    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19081c00-6c70-4f02-9884-7e07fd459593-cilium-config-path\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218521 kubelet[1988]: I1213 14:26:51.218320    1988 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19081c00-6c70-4f02-9884-7e07fd459593-clustermesh-secrets\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218335    1988 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r2hjc\" (UniqueName: \"kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-kube-api-access-r2hjc\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218351    1988 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-host-proc-sys-kernel\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218367    1988 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19081c00-6c70-4f02-9884-7e07fd459593-hubble-tls\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218382    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cilium-cgroup\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218396    1988 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-etc-cni-netd\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218410    1988 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-cni-path\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.218785 kubelet[1988]: I1213 14:26:51.218424    1988 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19081c00-6c70-4f02-9884-7e07fd459593-lib-modules\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:51.780516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be-rootfs.mount: Deactivated successfully.
Dec 13 14:26:51.781043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896-rootfs.mount: Deactivated successfully.
Dec 13 14:26:51.781291 systemd[1]: var-lib-kubelet-pods-db364bba\x2df5a4\x2d4bdb\x2da7d6\x2dd151ebbe6f04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzddmv.mount: Deactivated successfully.
Dec 13 14:26:51.781518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7-rootfs.mount: Deactivated successfully.
Dec 13 14:26:51.781634 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7-shm.mount: Deactivated successfully.
Dec 13 14:26:51.781744 systemd[1]: var-lib-kubelet-pods-19081c00\x2d6c70\x2d4f02\x2d9884\x2d7e07fd459593-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr2hjc.mount: Deactivated successfully.
Dec 13 14:26:51.781880 systemd[1]: var-lib-kubelet-pods-19081c00\x2d6c70\x2d4f02\x2d9884\x2d7e07fd459593-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:26:51.781984 systemd[1]: var-lib-kubelet-pods-19081c00\x2d6c70\x2d4f02\x2d9884\x2d7e07fd459593-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:26:51.829622 kubelet[1988]: I1213 14:26:51.829570    1988 scope.go:117] "RemoveContainer" containerID="65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be"
Dec 13 14:26:51.832481 env[1219]: time="2024-12-13T14:26:51.832429016Z" level=info msg="RemoveContainer for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\""
Dec 13 14:26:51.839062 systemd[1]: Removed slice kubepods-burstable-pod19081c00_6c70_4f02_9884_7e07fd459593.slice.
Dec 13 14:26:51.839252 systemd[1]: kubepods-burstable-pod19081c00_6c70_4f02_9884_7e07fd459593.slice: Consumed 9.542s CPU time.
Dec 13 14:26:51.841683 env[1219]: time="2024-12-13T14:26:51.841632663Z" level=info msg="RemoveContainer for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" returns successfully"
Dec 13 14:26:51.846535 kubelet[1988]: I1213 14:26:51.845662    1988 scope.go:117] "RemoveContainer" containerID="ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050"
Dec 13 14:26:51.848349 systemd[1]: Removed slice kubepods-besteffort-poddb364bba_f5a4_4bdb_a7d6_d151ebbe6f04.slice.
Dec 13 14:26:51.849684 env[1219]: time="2024-12-13T14:26:51.849065725Z" level=info msg="RemoveContainer for \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\""
Dec 13 14:26:51.855635 env[1219]: time="2024-12-13T14:26:51.855585454Z" level=info msg="RemoveContainer for \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\" returns successfully"
Dec 13 14:26:51.855909 kubelet[1988]: I1213 14:26:51.855871    1988 scope.go:117] "RemoveContainer" containerID="4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc"
Dec 13 14:26:51.859657 env[1219]: time="2024-12-13T14:26:51.859262493Z" level=info msg="RemoveContainer for \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\""
Dec 13 14:26:51.864568 env[1219]: time="2024-12-13T14:26:51.864525055Z" level=info msg="RemoveContainer for \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\" returns successfully"
Dec 13 14:26:51.864834 kubelet[1988]: I1213 14:26:51.864806    1988 scope.go:117] "RemoveContainer" containerID="ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb"
Dec 13 14:26:51.873357 env[1219]: time="2024-12-13T14:26:51.872468913Z" level=info msg="RemoveContainer for \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\""
Dec 13 14:26:51.878115 env[1219]: time="2024-12-13T14:26:51.878021308Z" level=info msg="RemoveContainer for \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\" returns successfully"
Dec 13 14:26:51.879398 kubelet[1988]: I1213 14:26:51.879367    1988 scope.go:117] "RemoveContainer" containerID="41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898"
Dec 13 14:26:51.892578 env[1219]: time="2024-12-13T14:26:51.892521389Z" level=info msg="RemoveContainer for \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\""
Dec 13 14:26:51.898147 env[1219]: time="2024-12-13T14:26:51.898091681Z" level=info msg="RemoveContainer for \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\" returns successfully"
Dec 13 14:26:51.898903 kubelet[1988]: I1213 14:26:51.898711    1988 scope.go:117] "RemoveContainer" containerID="65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be"
Dec 13 14:26:51.899696 env[1219]: time="2024-12-13T14:26:51.899506900Z" level=error msg="ContainerStatus for \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\": not found"
Dec 13 14:26:51.899922 kubelet[1988]: E1213 14:26:51.899889    1988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\": not found" containerID="65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be"
Dec 13 14:26:51.900054 kubelet[1988]: I1213 14:26:51.899939    1988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be"} err="failed to get container status \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\": rpc error: code = NotFound desc = an error occurred when try to find container \"65d6cbafa3ce644130f47b6ac6c74093595c996583db1d402b26a6e17688c8be\": not found"
Dec 13 14:26:51.900157 kubelet[1988]: I1213 14:26:51.900062    1988 scope.go:117] "RemoveContainer" containerID="ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050"
Dec 13 14:26:51.900399 env[1219]: time="2024-12-13T14:26:51.900320553Z" level=error msg="ContainerStatus for \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\": not found"
Dec 13 14:26:51.900685 kubelet[1988]: E1213 14:26:51.900653    1988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\": not found" containerID="ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050"
Dec 13 14:26:51.900819 kubelet[1988]: I1213 14:26:51.900693    1988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050"} err="failed to get container status \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccdd2e38fe82036410dffd3d79aa006aa0d9d358a85c08b251e86cc22dea4050\": not found"
Dec 13 14:26:51.900819 kubelet[1988]: I1213 14:26:51.900723    1988 scope.go:117] "RemoveContainer" containerID="4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc"
Dec 13 14:26:51.901089 env[1219]: time="2024-12-13T14:26:51.901015397Z" level=error msg="ContainerStatus for \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\": not found"
Dec 13 14:26:51.901257 kubelet[1988]: E1213 14:26:51.901228    1988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\": not found" containerID="4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc"
Dec 13 14:26:51.901344 kubelet[1988]: I1213 14:26:51.901264    1988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc"} err="failed to get container status \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a6c95971cbd4ca3d7a193c81617236a6e0adcb40cc2af08b5405f9b807767bc\": not found"
Dec 13 14:26:51.901344 kubelet[1988]: I1213 14:26:51.901288    1988 scope.go:117] "RemoveContainer" containerID="ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb"
Dec 13 14:26:51.901582 env[1219]: time="2024-12-13T14:26:51.901517037Z" level=error msg="ContainerStatus for \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\": not found"
Dec 13 14:26:51.901744 kubelet[1988]: E1213 14:26:51.901715    1988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\": not found" containerID="ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb"
Dec 13 14:26:51.901869 kubelet[1988]: I1213 14:26:51.901767    1988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb"} err="failed to get container status \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec65696db3a091462b82736aa4a62475479202947679a1ffe1eea0c6a08a74eb\": not found"
Dec 13 14:26:51.901869 kubelet[1988]: I1213 14:26:51.901791    1988 scope.go:117] "RemoveContainer" containerID="41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898"
Dec 13 14:26:51.902095 env[1219]: time="2024-12-13T14:26:51.902016180Z" level=error msg="ContainerStatus for \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\": not found"
Dec 13 14:26:51.902236 kubelet[1988]: E1213 14:26:51.902206    1988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\": not found" containerID="41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898"
Dec 13 14:26:51.902317 kubelet[1988]: I1213 14:26:51.902233    1988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898"} err="failed to get container status \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\": rpc error: code = NotFound desc = an error occurred when try to find container \"41a306674da54c135b6c5caef3c59aa23ff427157756d9b1f29c594fa206d898\": not found"
Dec 13 14:26:51.902317 kubelet[1988]: I1213 14:26:51.902255    1988 scope.go:117] "RemoveContainer" containerID="9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4"
Dec 13 14:26:51.903671 env[1219]: time="2024-12-13T14:26:51.903634977Z" level=info msg="RemoveContainer for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\""
Dec 13 14:26:51.908133 env[1219]: time="2024-12-13T14:26:51.908081474Z" level=info msg="RemoveContainer for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" returns successfully"
Dec 13 14:26:51.908398 kubelet[1988]: I1213 14:26:51.908368    1988 scope.go:117] "RemoveContainer" containerID="9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4"
Dec 13 14:26:51.908861 env[1219]: time="2024-12-13T14:26:51.908790977Z" level=error msg="ContainerStatus for \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\": not found"
Dec 13 14:26:51.909082 kubelet[1988]: E1213 14:26:51.909063    1988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\": not found" containerID="9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4"
Dec 13 14:26:51.909202 kubelet[1988]: I1213 14:26:51.909180    1988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4"} err="failed to get container status \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e47ffaaf7cc1646c6c200242377475cd35c9eb5ac59a29692123db439035ed4\": not found"
Dec 13 14:26:52.440469 kubelet[1988]: I1213 14:26:52.440417    1988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19081c00-6c70-4f02-9884-7e07fd459593" path="/var/lib/kubelet/pods/19081c00-6c70-4f02-9884-7e07fd459593/volumes"
Dec 13 14:26:52.441556 kubelet[1988]: I1213 14:26:52.441514    1988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db364bba-f5a4-4bdb-a7d6-d151ebbe6f04" path="/var/lib/kubelet/pods/db364bba-f5a4-4bdb-a7d6-d151ebbe6f04/volumes"
Dec 13 14:26:52.733659 sshd[3523]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:52.738527 systemd[1]: sshd@21-10.128.0.48:22-139.178.68.195:52200.service: Deactivated successfully.
Dec 13 14:26:52.739681 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:26:52.740666 systemd-logind[1228]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:26:52.742263 systemd-logind[1228]: Removed session 22.
Dec 13 14:26:52.784976 systemd[1]: Started sshd@22-10.128.0.48:22-139.178.68.195:52210.service.
Dec 13 14:26:53.074642 sshd[3694]: Accepted publickey for core from 139.178.68.195 port 52210 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:53.076681 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:53.084335 systemd[1]: Started session-23.scope.
Dec 13 14:26:53.085506 systemd-logind[1228]: New session 23 of user core.
Dec 13 14:26:53.631843 kubelet[1988]: E1213 14:26:53.631791    1988 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:26:54.492857 kubelet[1988]: E1213 14:26:54.492794    1988 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19081c00-6c70-4f02-9884-7e07fd459593" containerName="apply-sysctl-overwrites"
Dec 13 14:26:54.492857 kubelet[1988]: E1213 14:26:54.492857    1988 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19081c00-6c70-4f02-9884-7e07fd459593" containerName="mount-bpf-fs"
Dec 13 14:26:54.492857 kubelet[1988]: E1213 14:26:54.492869    1988 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19081c00-6c70-4f02-9884-7e07fd459593" containerName="clean-cilium-state"
Dec 13 14:26:54.493231 kubelet[1988]: E1213 14:26:54.492880    1988 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19081c00-6c70-4f02-9884-7e07fd459593" containerName="mount-cgroup"
Dec 13 14:26:54.493231 kubelet[1988]: E1213 14:26:54.492889    1988 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db364bba-f5a4-4bdb-a7d6-d151ebbe6f04" containerName="cilium-operator"
Dec 13 14:26:54.493231 kubelet[1988]: E1213 14:26:54.492899    1988 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19081c00-6c70-4f02-9884-7e07fd459593" containerName="cilium-agent"
Dec 13 14:26:54.493231 kubelet[1988]: I1213 14:26:54.492992    1988 memory_manager.go:354] "RemoveStaleState removing state" podUID="19081c00-6c70-4f02-9884-7e07fd459593" containerName="cilium-agent"
Dec 13 14:26:54.493231 kubelet[1988]: I1213 14:26:54.493022    1988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db364bba-f5a4-4bdb-a7d6-d151ebbe6f04" containerName="cilium-operator"
Dec 13 14:26:54.504242 systemd[1]: Created slice kubepods-burstable-pod217696b9_7758_4afb_8d65_d12d8131ff0c.slice.
Dec 13 14:26:54.509029 sshd[3694]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:54.513225 systemd[1]: sshd@22-10.128.0.48:22-139.178.68.195:52210.service: Deactivated successfully.
Dec 13 14:26:54.514361 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:26:54.514582 systemd[1]: session-23.scope: Consumed 1.175s CPU time.
Dec 13 14:26:54.516347 kubelet[1988]: W1213 14:26:54.516304    1988 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:54.516491 kubelet[1988]: E1213 14:26:54.516371    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:54.516491 kubelet[1988]: W1213 14:26:54.516456    1988 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:54.516491 kubelet[1988]: E1213 14:26:54.516474    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:54.516823 kubelet[1988]: W1213 14:26:54.516524    1988 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:54.516823 kubelet[1988]: E1213 14:26:54.516542    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:54.516823 kubelet[1988]: W1213 14:26:54.516588    1988 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:54.516823 kubelet[1988]: E1213 14:26:54.516604    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:54.519220 systemd-logind[1228]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:26:54.522531 systemd-logind[1228]: Removed session 23.
Dec 13 14:26:54.539365 kubelet[1988]: I1213 14:26:54.539268    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-kernel\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.539968 kubelet[1988]: I1213 14:26:54.539921    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-cgroup\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.540208 kubelet[1988]: I1213 14:26:54.540172    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-xtables-lock\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.540457 kubelet[1988]: I1213 14:26:54.540432    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-ipsec-secrets\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.540701 kubelet[1988]: I1213 14:26:54.540638    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cni-path\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.540996 kubelet[1988]: I1213 14:26:54.540925    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-clustermesh-secrets\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.541391 kubelet[1988]: I1213 14:26:54.541329    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-config-path\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.541651 kubelet[1988]: I1213 14:26:54.541628    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-net\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.541841 kubelet[1988]: I1213 14:26:54.541819    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-run\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.542077 kubelet[1988]: I1213 14:26:54.542025    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-bpf-maps\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.542270 kubelet[1988]: I1213 14:26:54.542229    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-etc-cni-netd\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.542491 kubelet[1988]: I1213 14:26:54.542436    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-lib-modules\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.542683 kubelet[1988]: I1213 14:26:54.542648    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-hubble-tls\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.542928 kubelet[1988]: I1213 14:26:54.542901    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzsj8\" (UniqueName: \"kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-kube-api-access-dzsj8\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.543157 kubelet[1988]: I1213 14:26:54.543125    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-hostproc\") pod \"cilium-wsgtf\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") " pod="kube-system/cilium-wsgtf"
Dec 13 14:26:54.558928 systemd[1]: Started sshd@23-10.128.0.48:22-139.178.68.195:52222.service.
Dec 13 14:26:54.859650 sshd[3704]: Accepted publickey for core from 139.178.68.195 port 52222 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:54.861777 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:54.868742 systemd[1]: Started session-24.scope.
Dec 13 14:26:54.869614 systemd-logind[1228]: New session 24 of user core.
Dec 13 14:26:55.145794 kubelet[1988]: E1213 14:26:55.145622    1988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-wsgtf" podUID="217696b9-7758-4afb-8d65-d12d8131ff0c"
Dec 13 14:26:55.171235 sshd[3704]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:55.175767 systemd[1]: sshd@23-10.128.0.48:22-139.178.68.195:52222.service: Deactivated successfully.
Dec 13 14:26:55.176871 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:26:55.177862 systemd-logind[1228]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:26:55.179172 systemd-logind[1228]: Removed session 24.
Dec 13 14:26:55.219268 systemd[1]: Started sshd@24-10.128.0.48:22-139.178.68.195:52232.service.
Dec 13 14:26:55.510465 sshd[3717]: Accepted publickey for core from 139.178.68.195 port 52232 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:26:55.510993 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:55.519347 systemd[1]: Started session-25.scope.
Dec 13 14:26:55.519973 systemd-logind[1228]: New session 25 of user core.
Dec 13 14:26:55.644945 kubelet[1988]: E1213 14:26:55.644857    1988 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Dec 13 14:26:55.645169 kubelet[1988]: E1213 14:26:55.645007    1988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-ipsec-secrets podName:217696b9-7758-4afb-8d65-d12d8131ff0c nodeName:}" failed. No retries permitted until 2024-12-13 14:26:56.144958135 +0000 UTC m=+117.940839687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-ipsec-secrets") pod "cilium-wsgtf" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c") : failed to sync secret cache: timed out waiting for the condition
Dec 13 14:26:55.645169 kubelet[1988]: E1213 14:26:55.644849    1988 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Dec 13 14:26:55.645169 kubelet[1988]: E1213 14:26:55.645063    1988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-clustermesh-secrets podName:217696b9-7758-4afb-8d65-d12d8131ff0c nodeName:}" failed. No retries permitted until 2024-12-13 14:26:56.14505315 +0000 UTC m=+117.940934698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-clustermesh-secrets") pod "cilium-wsgtf" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c") : failed to sync secret cache: timed out waiting for the condition
Dec 13 14:26:55.953572 kubelet[1988]: I1213 14:26:55.953483    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-kernel\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953572 kubelet[1988]: I1213 14:26:55.953549    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cni-path\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953572 kubelet[1988]: I1213 14:26:55.953575    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-etc-cni-netd\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953972 kubelet[1988]: I1213 14:26:55.953599    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-cgroup\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953972 kubelet[1988]: I1213 14:26:55.953632    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-config-path\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953972 kubelet[1988]: I1213 14:26:55.953652    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-lib-modules\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953972 kubelet[1988]: I1213 14:26:55.953673    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-run\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953972 kubelet[1988]: I1213 14:26:55.953704    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-xtables-lock\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.953972 kubelet[1988]: I1213 14:26:55.953733    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzsj8\" (UniqueName: \"kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-kube-api-access-dzsj8\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.954287 kubelet[1988]: I1213 14:26:55.953774    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-net\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.954287 kubelet[1988]: I1213 14:26:55.953813    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-hostproc\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.954287 kubelet[1988]: I1213 14:26:55.953837    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-bpf-maps\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.954287 kubelet[1988]: I1213 14:26:55.953864    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-hubble-tls\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:55.954592 kubelet[1988]: I1213 14:26:55.954558    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.954768 kubelet[1988]: I1213 14:26:55.954725    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.954903 kubelet[1988]: I1213 14:26:55.954883    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cni-path" (OuterVolumeSpecName: "cni-path") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.955047 kubelet[1988]: I1213 14:26:55.955026    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.955175 kubelet[1988]: I1213 14:26:55.955156    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.958128 kubelet[1988]: I1213 14:26:55.958081    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:26:55.961672 systemd[1]: var-lib-kubelet-pods-217696b9\x2d7758\x2d4afb\x2d8d65\x2dd12d8131ff0c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:26:55.963367 kubelet[1988]: I1213 14:26:55.963166    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.963510 kubelet[1988]: I1213 14:26:55.963212    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.963669 kubelet[1988]: I1213 14:26:55.963640    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-hostproc" (OuterVolumeSpecName: "hostproc") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.964008 kubelet[1988]: I1213 14:26:55.963812    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.964144 kubelet[1988]: I1213 14:26:55.963887    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:55.964307 kubelet[1988]: I1213 14:26:55.963911    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:26:55.970543 systemd[1]: var-lib-kubelet-pods-217696b9\x2d7758\x2d4afb\x2d8d65\x2dd12d8131ff0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzsj8.mount: Deactivated successfully.
Dec 13 14:26:55.971985 kubelet[1988]: I1213 14:26:55.971229    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-kube-api-access-dzsj8" (OuterVolumeSpecName: "kube-api-access-dzsj8") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "kube-api-access-dzsj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:26:56.054780 kubelet[1988]: I1213 14:26:56.054708    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-cgroup\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054790    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-config-path\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054811    1988 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-lib-modules\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054828    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-run\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054843    1988 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-xtables-lock\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054857    1988 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dzsj8\" (UniqueName: \"kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-kube-api-access-dzsj8\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054871    1988 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-net\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055009 kubelet[1988]: I1213 14:26:56.054886    1988 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-hostproc\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055329 kubelet[1988]: I1213 14:26:56.054900    1988 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-bpf-maps\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055329 kubelet[1988]: I1213 14:26:56.054913    1988 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/217696b9-7758-4afb-8d65-d12d8131ff0c-hubble-tls\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055329 kubelet[1988]: I1213 14:26:56.054931    1988 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-host-proc-sys-kernel\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055329 kubelet[1988]: I1213 14:26:56.054946    1988 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-cni-path\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.055329 kubelet[1988]: I1213 14:26:56.054960    1988 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/217696b9-7758-4afb-8d65-d12d8131ff0c-etc-cni-netd\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.258280 kubelet[1988]: I1213 14:26:56.256051    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-ipsec-secrets\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:56.259000 kubelet[1988]: I1213 14:26:56.258970    1988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-clustermesh-secrets\") pod \"217696b9-7758-4afb-8d65-d12d8131ff0c\" (UID: \"217696b9-7758-4afb-8d65-d12d8131ff0c\") "
Dec 13 14:26:56.265100 systemd[1]: var-lib-kubelet-pods-217696b9\x2d7758\x2d4afb\x2d8d65\x2dd12d8131ff0c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:26:56.267641 kubelet[1988]: I1213 14:26:56.267592    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:26:56.268429 kubelet[1988]: I1213 14:26:56.268394    1988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "217696b9-7758-4afb-8d65-d12d8131ff0c" (UID: "217696b9-7758-4afb-8d65-d12d8131ff0c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:26:56.360128 kubelet[1988]: I1213 14:26:56.360074    1988 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-cilium-ipsec-secrets\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.360128 kubelet[1988]: I1213 14:26:56.360119    1988 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/217696b9-7758-4afb-8d65-d12d8131ff0c-clustermesh-secrets\") on node \"ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:26:56.446938 systemd[1]: Removed slice kubepods-burstable-pod217696b9_7758_4afb_8d65_d12d8131ff0c.slice.
Dec 13 14:26:56.961550 systemd[1]: var-lib-kubelet-pods-217696b9\x2d7758\x2d4afb\x2d8d65\x2dd12d8131ff0c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:26:57.054734 kubelet[1988]: W1213 14:26:57.054697    1988 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:57.054987 kubelet[1988]: E1213 14:26:57.054955    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:57.055253 kubelet[1988]: W1213 14:26:57.055236    1988 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:57.055410 kubelet[1988]: E1213 14:26:57.055385    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:57.059793 kubelet[1988]: W1213 14:26:57.058930    1988 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:57.059793 kubelet[1988]: E1213 14:26:57.058980    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:57.059793 kubelet[1988]: W1213 14:26:57.059046    1988 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object
Dec 13 14:26:57.059793 kubelet[1988]: E1213 14:26:57.059067    1988 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Dec 13 14:26:57.059532 systemd[1]: Created slice kubepods-burstable-pod1e3725ae_3920_4208_93b7_5e63c0e90d98.slice.
Dec 13 14:26:57.166329 kubelet[1988]: I1213 14:26:57.166270    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-cgroup\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166564 kubelet[1988]: I1213 14:26:57.166345    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e3725ae-3920-4208-93b7-5e63c0e90d98-hubble-tls\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166564 kubelet[1988]: I1213 14:26:57.166392    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-host-proc-sys-net\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166564 kubelet[1988]: I1213 14:26:57.166419    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-run\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166564 kubelet[1988]: I1213 14:26:57.166445    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-etc-cni-netd\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166564 kubelet[1988]: I1213 14:26:57.166470    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-lib-modules\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166564 kubelet[1988]: I1213 14:26:57.166498    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-cni-path\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166967 kubelet[1988]: I1213 14:26:57.166525    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-bpf-maps\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166967 kubelet[1988]: I1213 14:26:57.166548    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-config-path\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166967 kubelet[1988]: I1213 14:26:57.166570    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-ipsec-secrets\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166967 kubelet[1988]: I1213 14:26:57.166598    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-xtables-lock\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166967 kubelet[1988]: I1213 14:26:57.166627    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-hostproc\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.166967 kubelet[1988]: I1213 14:26:57.166652    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw6bc\" (UniqueName: \"kubernetes.io/projected/1e3725ae-3920-4208-93b7-5e63c0e90d98-kube-api-access-fw6bc\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.167171 kubelet[1988]: I1213 14:26:57.166693    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e3725ae-3920-4208-93b7-5e63c0e90d98-clustermesh-secrets\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:57.167171 kubelet[1988]: I1213 14:26:57.166719    1988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e3725ae-3920-4208-93b7-5e63c0e90d98-host-proc-sys-kernel\") pod \"cilium-6fsgb\" (UID: \"1e3725ae-3920-4208-93b7-5e63c0e90d98\") " pod="kube-system/cilium-6fsgb"
Dec 13 14:26:58.268220 kubelet[1988]: E1213 14:26:58.268165    1988 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Dec 13 14:26:58.269001 kubelet[1988]: E1213 14:26:58.268194    1988 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Dec 13 14:26:58.269210 kubelet[1988]: E1213 14:26:58.269186    1988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-ipsec-secrets podName:1e3725ae-3920-4208-93b7-5e63c0e90d98 nodeName:}" failed. No retries permitted until 2024-12-13 14:26:58.769152693 +0000 UTC m=+120.565034271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-ipsec-secrets") pod "cilium-6fsgb" (UID: "1e3725ae-3920-4208-93b7-5e63c0e90d98") : failed to sync secret cache: timed out waiting for the condition
Dec 13 14:26:58.269445 kubelet[1988]: E1213 14:26:58.269423    1988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-config-path podName:1e3725ae-3920-4208-93b7-5e63c0e90d98 nodeName:}" failed. No retries permitted until 2024-12-13 14:26:58.769401097 +0000 UTC m=+120.565282654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1e3725ae-3920-4208-93b7-5e63c0e90d98-cilium-config-path") pod "cilium-6fsgb" (UID: "1e3725ae-3920-4208-93b7-5e63c0e90d98") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 14:26:58.419663 env[1219]: time="2024-12-13T14:26:58.419607529Z" level=info msg="StopPodSandbox for \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\""
Dec 13 14:26:58.420267 env[1219]: time="2024-12-13T14:26:58.419741227Z" level=info msg="TearDown network for sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" successfully"
Dec 13 14:26:58.420267 env[1219]: time="2024-12-13T14:26:58.419855865Z" level=info msg="StopPodSandbox for \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" returns successfully"
Dec 13 14:26:58.423103 env[1219]: time="2024-12-13T14:26:58.420582354Z" level=info msg="RemovePodSandbox for \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\""
Dec 13 14:26:58.423103 env[1219]: time="2024-12-13T14:26:58.420627674Z" level=info msg="Forcibly stopping sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\""
Dec 13 14:26:58.423103 env[1219]: time="2024-12-13T14:26:58.420733699Z" level=info msg="TearDown network for sandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" successfully"
Dec 13 14:26:58.425967 env[1219]: time="2024-12-13T14:26:58.425909602Z" level=info msg="RemovePodSandbox \"a8a19da2b36e8e4142f22c91dd4f8e8aa3c52650cd0578c4669a81b25fbbbbd7\" returns successfully"
Dec 13 14:26:58.426582 env[1219]: time="2024-12-13T14:26:58.426522693Z" level=info msg="StopPodSandbox for \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\""
Dec 13 14:26:58.426696 env[1219]: time="2024-12-13T14:26:58.426629949Z" level=info msg="TearDown network for sandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" successfully"
Dec 13 14:26:58.426696 env[1219]: time="2024-12-13T14:26:58.426678892Z" level=info msg="StopPodSandbox for \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" returns successfully"
Dec 13 14:26:58.427123 env[1219]: time="2024-12-13T14:26:58.427085838Z" level=info msg="RemovePodSandbox for \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\""
Dec 13 14:26:58.427255 env[1219]: time="2024-12-13T14:26:58.427128575Z" level=info msg="Forcibly stopping sandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\""
Dec 13 14:26:58.427331 env[1219]: time="2024-12-13T14:26:58.427228968Z" level=info msg="TearDown network for sandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" successfully"
Dec 13 14:26:58.432195 env[1219]: time="2024-12-13T14:26:58.432132229Z" level=info msg="RemovePodSandbox \"0878bcd8557b0ecb797fb533e81f84b99a717d7e7744197a0219854e67e62896\" returns successfully"
Dec 13 14:26:58.437080 kubelet[1988]: E1213 14:26:58.437012    1988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-dhjgc" podUID="fd3247b7-5c27-4e18-a0e8-3cb23894fc04"
Dec 13 14:26:58.442884 kubelet[1988]: I1213 14:26:58.442824    1988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217696b9-7758-4afb-8d65-d12d8131ff0c" path="/var/lib/kubelet/pods/217696b9-7758-4afb-8d65-d12d8131ff0c/volumes"
Dec 13 14:26:58.632856 kubelet[1988]: E1213 14:26:58.632779    1988 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:26:58.868208 env[1219]: time="2024-12-13T14:26:58.868066994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fsgb,Uid:1e3725ae-3920-4208-93b7-5e63c0e90d98,Namespace:kube-system,Attempt:0,}"
Dec 13 14:26:58.894395 env[1219]: time="2024-12-13T14:26:58.894170567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:26:58.894395 env[1219]: time="2024-12-13T14:26:58.894248261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:26:58.894395 env[1219]: time="2024-12-13T14:26:58.894267415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:26:58.895142 env[1219]: time="2024-12-13T14:26:58.895072054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73 pid=3748 runtime=io.containerd.runc.v2
Dec 13 14:26:58.919072 systemd[1]: Started cri-containerd-ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73.scope.
Dec 13 14:26:58.999147 env[1219]: time="2024-12-13T14:26:58.999086133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fsgb,Uid:1e3725ae-3920-4208-93b7-5e63c0e90d98,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\""
Dec 13 14:26:59.013038 env[1219]: time="2024-12-13T14:26:59.012982909Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:26:59.030088 env[1219]: time="2024-12-13T14:26:59.030016492Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61\""
Dec 13 14:26:59.031481 env[1219]: time="2024-12-13T14:26:59.031435977Z" level=info msg="StartContainer for \"cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61\""
Dec 13 14:26:59.062162 systemd[1]: Started cri-containerd-cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61.scope.
Dec 13 14:26:59.117167 env[1219]: time="2024-12-13T14:26:59.117093007Z" level=info msg="StartContainer for \"cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61\" returns successfully"
Dec 13 14:26:59.130683 systemd[1]: cri-containerd-cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61.scope: Deactivated successfully.
Dec 13 14:26:59.165030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61-rootfs.mount: Deactivated successfully.
Dec 13 14:26:59.168462 env[1219]: time="2024-12-13T14:26:59.168399815Z" level=info msg="shim disconnected" id=cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61
Dec 13 14:26:59.168661 env[1219]: time="2024-12-13T14:26:59.168464769Z" level=warning msg="cleaning up after shim disconnected" id=cd599fa85ff151ff5433afede6562da818404f356478d723fa83b7cb6ea71c61 namespace=k8s.io
Dec 13 14:26:59.168661 env[1219]: time="2024-12-13T14:26:59.168479900Z" level=info msg="cleaning up dead shim"
Dec 13 14:26:59.180684 env[1219]: time="2024-12-13T14:26:59.180613435Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3832 runtime=io.containerd.runc.v2\n"
Dec 13 14:26:59.865134 env[1219]: time="2024-12-13T14:26:59.865057576Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:26:59.893335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263855337.mount: Deactivated successfully.
Dec 13 14:26:59.899687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896426917.mount: Deactivated successfully.
Dec 13 14:26:59.908260 env[1219]: time="2024-12-13T14:26:59.908184673Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a\""
Dec 13 14:26:59.908936 env[1219]: time="2024-12-13T14:26:59.908898438Z" level=info msg="StartContainer for \"8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a\""
Dec 13 14:26:59.933397 systemd[1]: Started cri-containerd-8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a.scope.
Dec 13 14:26:59.983787 env[1219]: time="2024-12-13T14:26:59.982809813Z" level=info msg="StartContainer for \"8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a\" returns successfully"
Dec 13 14:26:59.987079 systemd[1]: cri-containerd-8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a.scope: Deactivated successfully.
Dec 13 14:27:00.017582 env[1219]: time="2024-12-13T14:27:00.017513889Z" level=info msg="shim disconnected" id=8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a
Dec 13 14:27:00.017582 env[1219]: time="2024-12-13T14:27:00.017584227Z" level=warning msg="cleaning up after shim disconnected" id=8a27355e545bd99e635d2d85b2330bd54e06f99c2b0dddf1c13bc50e85cc658a namespace=k8s.io
Dec 13 14:27:00.017982 env[1219]: time="2024-12-13T14:27:00.017599570Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:00.030173 env[1219]: time="2024-12-13T14:27:00.030098603Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:00.437624 kubelet[1988]: E1213 14:27:00.436858    1988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-dhjgc" podUID="fd3247b7-5c27-4e18-a0e8-3cb23894fc04"
Dec 13 14:27:00.868398 env[1219]: time="2024-12-13T14:27:00.868060456Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:27:00.895138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638999926.mount: Deactivated successfully.
Dec 13 14:27:00.915478 env[1219]: time="2024-12-13T14:27:00.915403268Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b\""
Dec 13 14:27:00.916356 env[1219]: time="2024-12-13T14:27:00.916308228Z" level=info msg="StartContainer for \"205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b\""
Dec 13 14:27:00.946886 systemd[1]: Started cri-containerd-205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b.scope.
Dec 13 14:27:00.965856 kubelet[1988]: I1213 14:27:00.964244    1988 setters.go:600] "Node became not ready" node="ci-3510-3-6-d5176765b37cdf5b515a.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:27:00Z","lastTransitionTime":"2024-12-13T14:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:27:01.007964 env[1219]: time="2024-12-13T14:27:01.007904519Z" level=info msg="StartContainer for \"205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b\" returns successfully"
Dec 13 14:27:01.013263 systemd[1]: cri-containerd-205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b.scope: Deactivated successfully.
Dec 13 14:27:01.045087 env[1219]: time="2024-12-13T14:27:01.045024130Z" level=info msg="shim disconnected" id=205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b
Dec 13 14:27:01.045087 env[1219]: time="2024-12-13T14:27:01.045087869Z" level=warning msg="cleaning up after shim disconnected" id=205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b namespace=k8s.io
Dec 13 14:27:01.045503 env[1219]: time="2024-12-13T14:27:01.045102180Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:01.057000 env[1219]: time="2024-12-13T14:27:01.056922059Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3955 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:01.096249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-205e608c6745adcae784ba3afddafff2a7c47bfe4fb1c1b2980fd2044952b26b-rootfs.mount: Deactivated successfully.
Dec 13 14:27:01.874613 env[1219]: time="2024-12-13T14:27:01.874101909Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:27:01.928050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180185853.mount: Deactivated successfully.
Dec 13 14:27:01.949260 env[1219]: time="2024-12-13T14:27:01.949068112Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309\""
Dec 13 14:27:01.951063 env[1219]: time="2024-12-13T14:27:01.949810299Z" level=info msg="StartContainer for \"494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309\""
Dec 13 14:27:01.990200 systemd[1]: Started cri-containerd-494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309.scope.
Dec 13 14:27:02.033361 systemd[1]: cri-containerd-494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309.scope: Deactivated successfully.
Dec 13 14:27:02.037060 env[1219]: time="2024-12-13T14:27:02.037007217Z" level=info msg="StartContainer for \"494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309\" returns successfully"
Dec 13 14:27:02.068220 env[1219]: time="2024-12-13T14:27:02.068150029Z" level=info msg="shim disconnected" id=494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309
Dec 13 14:27:02.068220 env[1219]: time="2024-12-13T14:27:02.068220235Z" level=warning msg="cleaning up after shim disconnected" id=494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309 namespace=k8s.io
Dec 13 14:27:02.068588 env[1219]: time="2024-12-13T14:27:02.068234413Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:02.080920 env[1219]: time="2024-12-13T14:27:02.080857676Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4011 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:02.096316 systemd[1]: run-containerd-runc-k8s.io-494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309-runc.MgJPwe.mount: Deactivated successfully.
Dec 13 14:27:02.096463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-494206bd54f8ff7501e235788a57b465adff00ecc39f924eb6218521a3bca309-rootfs.mount: Deactivated successfully.
Dec 13 14:27:02.437049 kubelet[1988]: E1213 14:27:02.436985    1988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-dhjgc" podUID="fd3247b7-5c27-4e18-a0e8-3cb23894fc04"
Dec 13 14:27:02.880148 env[1219]: time="2024-12-13T14:27:02.880092095Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:27:02.907651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272997002.mount: Deactivated successfully.
Dec 13 14:27:02.916410 env[1219]: time="2024-12-13T14:27:02.913621095Z" level=info msg="CreateContainer within sandbox \"ddf7cdd81f0d51323b7fe52fb13f52ecf1b6c28e77f57d6eb4af5a5f1affae73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4493dd44af1e5e5314163f11cd68e3f61fa53203e1d52c73bffc45d660419cc\""
Dec 13 14:27:02.916410 env[1219]: time="2024-12-13T14:27:02.914853206Z" level=info msg="StartContainer for \"e4493dd44af1e5e5314163f11cd68e3f61fa53203e1d52c73bffc45d660419cc\""
Dec 13 14:27:02.964132 systemd[1]: Started cri-containerd-e4493dd44af1e5e5314163f11cd68e3f61fa53203e1d52c73bffc45d660419cc.scope.
Dec 13 14:27:03.014791 env[1219]: time="2024-12-13T14:27:03.014716876Z" level=info msg="StartContainer for \"e4493dd44af1e5e5314163f11cd68e3f61fa53203e1d52c73bffc45d660419cc\" returns successfully"
Dec 13 14:27:03.477899 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:27:03.905304 kubelet[1988]: I1213 14:27:03.905229    1988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6fsgb" podStartSLOduration=6.905204058 podStartE2EDuration="6.905204058s" podCreationTimestamp="2024-12-13 14:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:03.904828644 +0000 UTC m=+125.700710225" watchObservedRunningTime="2024-12-13 14:27:03.905204058 +0000 UTC m=+125.701085630"
Dec 13 14:27:04.113047 systemd[1]: run-containerd-runc-k8s.io-e4493dd44af1e5e5314163f11cd68e3f61fa53203e1d52c73bffc45d660419cc-runc.8WbKun.mount: Deactivated successfully.
Dec 13 14:27:06.384199 kubelet[1988]: E1213 14:27:06.384136    1988 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48332->127.0.0.1:34035: write tcp 127.0.0.1:48332->127.0.0.1:34035: write: broken pipe
Dec 13 14:27:06.632118 systemd-networkd[1023]: lxc_health: Link UP
Dec 13 14:27:06.648796 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:27:06.657089 systemd-networkd[1023]: lxc_health: Gained carrier
Dec 13 14:27:08.215661 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Dec 13 14:27:10.818139 systemd[1]: run-containerd-runc-k8s.io-e4493dd44af1e5e5314163f11cd68e3f61fa53203e1d52c73bffc45d660419cc-runc.1hIMDI.mount: Deactivated successfully.
Dec 13 14:27:13.180362 sshd[3717]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:13.185055 systemd[1]: sshd@24-10.128.0.48:22-139.178.68.195:52232.service: Deactivated successfully.
Dec 13 14:27:13.186219 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:27:13.187340 systemd-logind[1228]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:27:13.188844 systemd-logind[1228]: Removed session 25.