Dec 13 02:18:21.149987 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:18:21.150022 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:18:21.150037 kernel: BIOS-provided physical RAM map:
Dec 13 02:18:21.150047 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:18:21.150057 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:18:21.150068 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:18:21.150083 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 02:18:21.150094 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 02:18:21.150105 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 02:18:21.150115 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:18:21.150125 kernel: NX (Execute Disable) protection: active
Dec 13 02:18:21.150136 kernel: SMBIOS 2.7 present.
Dec 13 02:18:21.150146 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 02:18:21.150157 kernel: Hypervisor detected: KVM
Dec 13 02:18:21.150174 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:18:21.150186 kernel: kvm-clock: cpu 0, msr 7a19b001, primary cpu clock
Dec 13 02:18:21.150206 kernel: kvm-clock: using sched offset of 7451532397 cycles
Dec 13 02:18:21.150235 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:18:21.150246 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 02:18:21.150256 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:18:21.150269 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:18:21.150280 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 02:18:21.150290 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 02:18:21.150300 kernel: Using GB pages for direct mapping
Dec 13 02:18:21.150311 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:18:21.150323 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 02:18:21.150333 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 02:18:21.150344 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 02:18:21.150355 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 02:18:21.150369 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 02:18:21.150381 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:18:21.150392 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 02:18:21.150403 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 02:18:21.150413 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 02:18:21.150425 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 02:18:21.150437 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 02:18:21.150447 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:18:21.150463 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 02:18:21.150475 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 02:18:21.150489 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 02:18:21.150507 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 02:18:21.150521 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 02:18:21.150534 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 02:18:21.150550 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 02:18:21.150567 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 02:18:21.150579 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 02:18:21.150590 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 02:18:21.150602 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:18:21.150614 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:18:21.150627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 02:18:21.150641 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 02:18:21.150653 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 02:18:21.150670 kernel: Zone ranges:
Dec 13 02:18:21.150685 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:18:21.150699 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 02:18:21.150713 kernel:   Normal   empty
Dec 13 02:18:21.150727 kernel: Movable zone start for each node
Dec 13 02:18:21.150742 kernel: Early memory node ranges
Dec 13 02:18:21.150757 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:18:21.150771 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 02:18:21.150786 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 02:18:21.150804 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:18:21.150819 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:18:21.150834 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 02:18:21.150847 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:18:21.150860 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:18:21.150874 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 02:18:21.150888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:18:21.150901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:18:21.150913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:18:21.150929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:18:21.150942 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:18:21.150955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:18:21.150966 kernel: TSC deadline timer available
Dec 13 02:18:21.150979 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:18:21.150990 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 02:18:21.151002 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:18:21.151015 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:18:21.151028 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:18:21.151044 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:18:21.151056 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:18:21.151068 kernel: pcpu-alloc: [0] 0 1 
Dec 13 02:18:21.151082 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 02:18:21.151095 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:18:21.151108 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:18:21.151121 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Dec 13 02:18:21.151134 kernel: Policy zone: DMA32
Dec 13 02:18:21.151151 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:18:21.151168 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:18:21.151182 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:18:21.151196 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:18:21.151527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:18:21.151549 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved)
Dec 13 02:18:21.151561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:18:21.151573 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:18:21.151585 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:18:21.151602 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:18:21.151613 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:18:21.151626 kernel: rcu:         RCU event tracing is enabled.
Dec 13 02:18:21.151638 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:18:21.151651 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 02:18:21.151664 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 02:18:21.151677 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:18:21.151692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:18:21.151705 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:18:21.151721 kernel: random: crng init done
Dec 13 02:18:21.151733 kernel: Console: colour VGA+ 80x25
Dec 13 02:18:21.151744 kernel: printk: console [ttyS0] enabled
Dec 13 02:18:21.151755 kernel: ACPI: Core revision 20210730
Dec 13 02:18:21.151767 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 02:18:21.151779 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:18:21.151790 kernel: x2apic enabled
Dec 13 02:18:21.151802 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:18:21.151814 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:18:21.151829 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 02:18:21.151840 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:18:21.151852 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:18:21.151863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:18:21.151883 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:18:21.151900 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:18:21.151913 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:18:21.151926 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:18:21.151938 kernel: RETBleed: Vulnerable
Dec 13 02:18:21.151949 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:18:21.151962 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:18:21.151972 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:18:21.151984 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:18:21.151997 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:18:21.152014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:18:21.152029 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:18:21.152041 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:18:21.152053 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:18:21.152065 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:18:21.152081 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:18:21.152092 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:18:21.152104 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 02:18:21.152117 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 02:18:21.152130 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Dec 13 02:18:21.152207 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Dec 13 02:18:21.152334 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Dec 13 02:18:21.152347 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Dec 13 02:18:21.152359 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 02:18:21.152371 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Dec 13 02:18:21.152385 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 02:18:21.152397 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:18:21.152415 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:18:21.152428 kernel: LSM: Security Framework initializing
Dec 13 02:18:21.152441 kernel: SELinux:  Initializing.
Dec 13 02:18:21.152456 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:18:21.152468 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:18:21.152481 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:18:21.152495 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:18:21.152509 kernel: signal: max sigframe size: 3632
Dec 13 02:18:21.152523 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:18:21.152537 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:18:21.152554 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:18:21.152568 kernel: x86: Booting SMP configuration:
Dec 13 02:18:21.152582 kernel: .... node  #0, CPUs:      #1
Dec 13 02:18:21.152597 kernel: kvm-clock: cpu 1, msr 7a19b041, secondary cpu clock
Dec 13 02:18:21.152611 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 02:18:21.152625 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:18:21.152642 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:18:21.152657 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:18:21.152672 kernel: smpboot: Max logical packages: 1
Dec 13 02:18:21.152688 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 02:18:21.152703 kernel: devtmpfs: initialized
Dec 13 02:18:21.152718 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:18:21.152733 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:18:21.152748 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:18:21.152762 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:18:21.152776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:18:21.152791 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:18:21.152805 kernel: audit: type=2000 audit(1734056299.769:1): state=initialized audit_enabled=0 res=1
Dec 13 02:18:21.152823 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:18:21.152839 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:18:21.152855 kernel: cpuidle: using governor menu
Dec 13 02:18:21.152870 kernel: ACPI: bus type PCI registered
Dec 13 02:18:21.152886 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:18:21.152903 kernel: dca service started, version 1.12.1
Dec 13 02:18:21.152919 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:18:21.152935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:18:21.152949 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:18:21.152967 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:18:21.152983 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:18:21.152996 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:18:21.153011 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:18:21.153027 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:18:21.153043 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:18:21.153056 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:18:21.153073 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:18:21.153090 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:18:21.153108 kernel: ACPI: Interpreter enabled
Dec 13 02:18:21.153123 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:18:21.153136 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:18:21.153150 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:18:21.153164 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:18:21.153178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:18:21.153530 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:18:21.153656 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:18:21.153676 kernel: acpiphp: Slot [3] registered
Dec 13 02:18:21.153688 kernel: acpiphp: Slot [4] registered
Dec 13 02:18:21.153701 kernel: acpiphp: Slot [5] registered
Dec 13 02:18:21.153713 kernel: acpiphp: Slot [6] registered
Dec 13 02:18:21.153726 kernel: acpiphp: Slot [7] registered
Dec 13 02:18:21.153738 kernel: acpiphp: Slot [8] registered
Dec 13 02:18:21.153750 kernel: acpiphp: Slot [9] registered
Dec 13 02:18:21.153844 kernel: acpiphp: Slot [10] registered
Dec 13 02:18:21.153862 kernel: acpiphp: Slot [11] registered
Dec 13 02:18:21.153879 kernel: acpiphp: Slot [12] registered
Dec 13 02:18:21.153892 kernel: acpiphp: Slot [13] registered
Dec 13 02:18:21.153903 kernel: acpiphp: Slot [14] registered
Dec 13 02:18:21.153915 kernel: acpiphp: Slot [15] registered
Dec 13 02:18:21.153928 kernel: acpiphp: Slot [16] registered
Dec 13 02:18:21.153940 kernel: acpiphp: Slot [17] registered
Dec 13 02:18:21.153954 kernel: acpiphp: Slot [18] registered
Dec 13 02:18:21.153966 kernel: acpiphp: Slot [19] registered
Dec 13 02:18:21.153978 kernel: acpiphp: Slot [20] registered
Dec 13 02:18:21.153993 kernel: acpiphp: Slot [21] registered
Dec 13 02:18:21.154004 kernel: acpiphp: Slot [22] registered
Dec 13 02:18:21.154017 kernel: acpiphp: Slot [23] registered
Dec 13 02:18:21.154029 kernel: acpiphp: Slot [24] registered
Dec 13 02:18:21.154042 kernel: acpiphp: Slot [25] registered
Dec 13 02:18:21.154055 kernel: acpiphp: Slot [26] registered
Dec 13 02:18:21.154066 kernel: acpiphp: Slot [27] registered
Dec 13 02:18:21.154078 kernel: acpiphp: Slot [28] registered
Dec 13 02:18:21.154090 kernel: acpiphp: Slot [29] registered
Dec 13 02:18:21.154102 kernel: acpiphp: Slot [30] registered
Dec 13 02:18:21.154117 kernel: acpiphp: Slot [31] registered
Dec 13 02:18:21.154130 kernel: PCI host bridge to bus 0000:00
Dec 13 02:18:21.154410 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 02:18:21.154522 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 02:18:21.154628 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:18:21.154731 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:18:21.154832 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:18:21.154967 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:18:21.155093 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:18:21.155349 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 02:18:21.155480 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:18:21.155606 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 02:18:21.155720 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 02:18:21.155834 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 02:18:21.155951 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 02:18:21.156064 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 02:18:21.156346 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 02:18:21.156479 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 02:18:21.156618 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 02:18:21.156744 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 02:18:21.156869 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 02:18:21.156997 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:18:21.157129 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 02:18:21.157543 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 02:18:21.157697 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 02:18:21.157836 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 02:18:21.157857 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:18:21.157877 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:18:21.157892 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:18:21.157907 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:18:21.157922 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:18:21.157937 kernel: iommu: Default domain type: Translated 
Dec 13 02:18:21.157952 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 13 02:18:21.158082 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 02:18:21.158318 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:18:21.158459 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 02:18:21.158532 kernel: vgaarb: loaded
Dec 13 02:18:21.158549 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:18:21.158565 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 02:18:21.158580 kernel: PTP clock support registered
Dec 13 02:18:21.158595 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:18:21.158609 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:18:21.158625 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:18:21.158639 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 02:18:21.158658 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 02:18:21.158673 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 02:18:21.158688 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:18:21.158702 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:18:21.158718 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:18:21.158732 kernel: pnp: PnP ACPI init
Dec 13 02:18:21.158747 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:18:21.158762 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:18:21.158777 kernel: NET: Registered PF_INET protocol family
Dec 13 02:18:21.158795 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:18:21.158810 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:18:21.158825 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:18:21.158840 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:18:21.158855 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:18:21.158870 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:18:21.158885 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:18:21.158900 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:18:21.158915 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:18:21.158933 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:18:21.159066 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 02:18:21.159182 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 02:18:21.159718 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:18:21.159852 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:18:21.159987 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:18:21.160119 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 02:18:21.160308 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:18:21.160350 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:18:21.160362 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:18:21.160375 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:18:21.160389 kernel: Initialise system trusted keyrings
Dec 13 02:18:21.160402 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:18:21.160416 kernel: Key type asymmetric registered
Dec 13 02:18:21.160430 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:18:21.160443 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:18:21.160462 kernel: io scheduler mq-deadline registered
Dec 13 02:18:21.160477 kernel: io scheduler kyber registered
Dec 13 02:18:21.160490 kernel: io scheduler bfq registered
Dec 13 02:18:21.160505 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:18:21.160519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:18:21.160532 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:18:21.160546 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:18:21.160560 kernel: i8042: Warning: Keylock active
Dec 13 02:18:21.160572 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:18:21.160588 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:18:21.160741 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:18:21.160863 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:18:21.160981 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:18:20 UTC (1734056300)
Dec 13 02:18:21.161098 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:18:21.161117 kernel: intel_pstate: CPU model not supported
Dec 13 02:18:21.161133 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:18:21.161149 kernel: Segment Routing with IPv6
Dec 13 02:18:21.161167 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:18:21.161183 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:18:21.161198 kernel: Key type dns_resolver registered
Dec 13 02:18:21.161213 kernel: IPI shorthand broadcast: enabled
Dec 13 02:18:21.161241 kernel: sched_clock: Marking stable (476007266, 240371403)->(812999786, -96621117)
Dec 13 02:18:21.161254 kernel: registered taskstats version 1
Dec 13 02:18:21.161269 kernel: Loading compiled-in X.509 certificates
Dec 13 02:18:21.161282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:18:21.161297 kernel: Key type .fscrypt registered
Dec 13 02:18:21.161337 kernel: Key type fscrypt-provisioning registered
Dec 13 02:18:21.161352 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:18:21.161367 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:18:21.161382 kernel: ima: No architecture policies found
Dec 13 02:18:21.161397 kernel: clk: Disabling unused clocks
Dec 13 02:18:21.161412 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:18:21.161427 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:18:21.161443 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:18:21.161458 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:18:21.161476 kernel: Run /init as init process
Dec 13 02:18:21.161491 kernel:   with arguments:
Dec 13 02:18:21.161506 kernel:     /init
Dec 13 02:18:21.161520 kernel:   with environment:
Dec 13 02:18:21.161534 kernel:     HOME=/
Dec 13 02:18:21.161549 kernel:     TERM=linux
Dec 13 02:18:21.161563 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:18:21.161583 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:18:21.161606 systemd[1]: Detected virtualization amazon.
Dec 13 02:18:21.161622 systemd[1]: Detected architecture x86-64.
Dec 13 02:18:21.161637 systemd[1]: Running in initrd.
Dec 13 02:18:21.161654 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:18:21.161684 systemd[1]: Hostname set to <localhost>.
Dec 13 02:18:21.161707 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:18:21.161723 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:18:21.161740 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:18:21.161754 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:18:21.161828 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:18:21.161843 systemd[1]: Reached target paths.target.
Dec 13 02:18:21.161859 systemd[1]: Reached target slices.target.
Dec 13 02:18:21.161906 systemd[1]: Reached target swap.target.
Dec 13 02:18:21.161921 systemd[1]: Reached target timers.target.
Dec 13 02:18:21.161943 systemd[1]: Listening on iscsid.socket.
Dec 13 02:18:21.161987 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:18:21.162004 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:18:21.162020 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:18:21.162037 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:18:21.162084 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:18:21.162101 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:18:21.162118 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:18:21.162165 systemd[1]: Reached target sockets.target.
Dec 13 02:18:21.162181 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:18:21.162197 systemd[1]: Finished network-cleanup.service.
Dec 13 02:18:21.162214 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:18:21.162290 systemd[1]: Starting systemd-journald.service...
Dec 13 02:18:21.162304 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:18:21.163096 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:18:21.163111 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:18:21.163127 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:18:21.163154 systemd-journald[185]: Journal started
Dec 13 02:18:21.163245 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2673cb3970df0d0e1769e39acf13a6) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:18:21.194331 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:18:21.334946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:18:21.334980 kernel: Bridge firewalling registered
Dec 13 02:18:21.334998 kernel: SCSI subsystem initialized
Dec 13 02:18:21.335013 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:18:21.335033 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:18:21.335053 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:18:21.204325 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 02:18:21.348724 systemd[1]: Started systemd-journald.service.
Dec 13 02:18:21.348776 kernel: audit: type=1130 audit(1734056301.335:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.348792 kernel: audit: type=1130 audit(1734056301.341:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.204340 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:18:21.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.204390 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:18:21.366656 kernel: audit: type=1130 audit(1734056301.347:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.366689 kernel: audit: type=1130 audit(1734056301.353:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.208637 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 02:18:21.260655 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:18:21.306812 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 02:18:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.375448 kernel: audit: type=1130 audit(1734056301.368:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.342684 systemd[1]: Started systemd-resolved.service.
Dec 13 02:18:21.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.349110 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:18:21.382105 kernel: audit: type=1130 audit(1734056301.374:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.367702 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:18:21.369800 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:18:21.375611 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:18:21.383173 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:18:21.384811 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:18:21.391145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:18:21.398474 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:18:21.402274 kernel: audit: type=1130 audit(1734056301.397:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.411252 kernel: audit: type=1130 audit(1734056301.406:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.407207 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:18:21.422978 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:18:21.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.440866 kernel: audit: type=1130 audit(1734056301.423:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.430669 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:18:21.449702 dracut-cmdline[206]: dracut-dracut-053
Dec 13 02:18:21.452865 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:18:21.537247 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:18:21.559255 kernel: iscsi: registered transport (tcp)
Dec 13 02:18:21.585284 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:18:21.585359 kernel: QLogic iSCSI HBA Driver
Dec 13 02:18:21.617731 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:18:21.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.620894 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:18:21.676273 kernel: raid6: avx512x4 gen() 16920 MB/s
Dec 13 02:18:21.693278 kernel: raid6: avx512x4 xor()  7422 MB/s
Dec 13 02:18:21.710273 kernel: raid6: avx512x2 gen() 13321 MB/s
Dec 13 02:18:21.727284 kernel: raid6: avx512x2 xor() 23233 MB/s
Dec 13 02:18:21.744375 kernel: raid6: avx512x1 gen() 14630 MB/s
Dec 13 02:18:21.761275 kernel: raid6: avx512x1 xor() 20830 MB/s
Dec 13 02:18:21.778294 kernel: raid6: avx2x4   gen() 16788 MB/s
Dec 13 02:18:21.795276 kernel: raid6: avx2x4   xor()  7115 MB/s
Dec 13 02:18:21.812329 kernel: raid6: avx2x2   gen() 14293 MB/s
Dec 13 02:18:21.829281 kernel: raid6: avx2x2   xor() 16149 MB/s
Dec 13 02:18:21.846308 kernel: raid6: avx2x1   gen() 12314 MB/s
Dec 13 02:18:21.863266 kernel: raid6: avx2x1   xor() 15292 MB/s
Dec 13 02:18:21.880273 kernel: raid6: sse2x4   gen()  9395 MB/s
Dec 13 02:18:21.897282 kernel: raid6: sse2x4   xor()  5800 MB/s
Dec 13 02:18:21.914265 kernel: raid6: sse2x2   gen() 10448 MB/s
Dec 13 02:18:21.931271 kernel: raid6: sse2x2   xor()  5775 MB/s
Dec 13 02:18:21.948283 kernel: raid6: sse2x1   gen()  8444 MB/s
Dec 13 02:18:21.965766 kernel: raid6: sse2x1   xor()  4571 MB/s
Dec 13 02:18:21.965856 kernel: raid6: using algorithm avx512x4 gen() 16920 MB/s
Dec 13 02:18:21.965875 kernel: raid6: .... xor() 7422 MB/s, rmw enabled
Dec 13 02:18:21.966501 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 02:18:21.981320 kernel: xor: automatically using best checksumming function   avx       
Dec 13 02:18:22.093252 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:18:22.103217 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:18:22.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:22.104000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:18:22.104000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:18:22.105724 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:18:22.124812 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Dec 13 02:18:22.131882 systemd[1]: Started systemd-udevd.service.
Dec 13 02:18:22.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:22.133469 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:18:22.170008 dracut-pre-trigger[385]: rd.md=0: removing MD RAID activation
Dec 13 02:18:22.215257 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:18:22.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:22.216484 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:18:22.302983 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:18:22.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:22.403280 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:18:22.429864 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 02:18:22.445426 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 02:18:22.445711 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 02:18:22.445894 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ab:c5:09:b3:eb
Dec 13 02:18:22.448523 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:22.650309 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:18:22.650347 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:18:22.650363 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 02:18:22.650573 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:18:22.650595 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 02:18:22.650721 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:18:22.650738 kernel: GPT:9289727 != 16777215
Dec 13 02:18:22.650753 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:18:22.650769 kernel: GPT:9289727 != 16777215
Dec 13 02:18:22.650784 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:18:22.650799 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:18:22.650815 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (433)
Dec 13 02:18:22.646336 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:18:22.667067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:18:22.670796 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:18:22.676595 systemd[1]: Starting disk-uuid.service...
Dec 13 02:18:22.691285 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:18:22.695709 disk-uuid[593]: Primary Header is updated.
Dec 13 02:18:22.695709 disk-uuid[593]: Secondary Entries is updated.
Dec 13 02:18:22.695709 disk-uuid[593]: Secondary Header is updated.
Dec 13 02:18:22.703011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:18:22.707241 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:18:22.713250 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:18:23.715453 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:18:23.715527 disk-uuid[594]: The operation has completed successfully.
Dec 13 02:18:23.868881 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:18:23.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:23.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:23.868996 systemd[1]: Finished disk-uuid.service.
Dec 13 02:18:23.870810 systemd[1]: Starting verity-setup.service...
Dec 13 02:18:23.889244 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:18:24.009631 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:18:24.018985 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:18:24.027727 systemd[1]: Finished verity-setup.service.
Dec 13 02:18:24.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.139460 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:18:24.140280 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:18:24.142506 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:18:24.145327 systemd[1]: Starting ignition-setup.service...
Dec 13 02:18:24.147341 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:18:24.181790 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:18:24.181874 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:18:24.181893 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:18:24.192252 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:18:24.208021 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:18:24.228058 systemd[1]: Finished ignition-setup.service.
Dec 13 02:18:24.231069 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:18:24.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.261445 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:18:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.263000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:18:24.265066 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:18:24.305516 systemd-networkd[1022]: lo: Link UP
Dec 13 02:18:24.305528 systemd-networkd[1022]: lo: Gained carrier
Dec 13 02:18:24.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.307391 systemd-networkd[1022]: Enumeration completed
Dec 13 02:18:24.307711 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:18:24.309968 systemd[1]: Started systemd-networkd.service.
Dec 13 02:18:24.311408 systemd[1]: Reached target network.target.
Dec 13 02:18:24.316886 systemd-networkd[1022]: eth0: Link UP
Dec 13 02:18:24.316892 systemd-networkd[1022]: eth0: Gained carrier
Dec 13 02:18:24.321178 systemd[1]: Starting iscsiuio.service...
Dec 13 02:18:24.342071 systemd[1]: Started iscsiuio.service.
Dec 13 02:18:24.343350 systemd-networkd[1022]: eth0: DHCPv4 address 172.31.16.8/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 02:18:24.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.348174 systemd[1]: Starting iscsid.service...
Dec 13 02:18:24.355504 iscsid[1027]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:18:24.355504 iscsid[1027]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:18:24.355504 iscsid[1027]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:18:24.355504 iscsid[1027]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:18:24.355504 iscsid[1027]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:18:24.355504 iscsid[1027]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:18:24.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.355894 systemd[1]: Started iscsid.service.
Dec 13 02:18:24.371818 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:18:24.387851 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:18:24.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.391405 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:18:24.394779 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:18:24.396086 systemd[1]: Reached target remote-fs.target.
Dec 13 02:18:24.397976 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:18:24.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.415957 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:18:24.848312 ignition[996]: Ignition 2.14.0
Dec 13 02:18:24.848331 ignition[996]: Stage: fetch-offline
Dec 13 02:18:24.849673 ignition[996]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:24.849706 ignition[996]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:24.870003 ignition[996]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:24.871216 ignition[996]: Ignition finished successfully
Dec 13 02:18:24.872685 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:18:24.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.877600 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:18:24.897234 ignition[1046]: Ignition 2.14.0
Dec 13 02:18:24.897255 ignition[1046]: Stage: fetch
Dec 13 02:18:24.897717 ignition[1046]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:24.897750 ignition[1046]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:24.909919 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:24.911668 ignition[1046]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:24.927090 ignition[1046]: INFO     : PUT result: OK
Dec 13 02:18:24.936346 ignition[1046]: DEBUG    : parsed url from cmdline: ""
Dec 13 02:18:24.939365 ignition[1046]: INFO     : no config URL provided
Dec 13 02:18:24.939365 ignition[1046]: INFO     : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:18:24.939365 ignition[1046]: INFO     : no config at "/usr/lib/ignition/user.ign"
Dec 13 02:18:24.939365 ignition[1046]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:24.945294 ignition[1046]: INFO     : PUT result: OK
Dec 13 02:18:24.945294 ignition[1046]: INFO     : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 02:18:24.945294 ignition[1046]: INFO     : GET result: OK
Dec 13 02:18:24.945294 ignition[1046]: DEBUG    : parsing config with SHA512: b6d201549ccda03e59b69b3ea2b6728b13fed2eb96bba01de9577926af9f2e54694ec56b40b170d1e637e22db4276e6e26af61e7d9891de08af010537bc0644d
Dec 13 02:18:24.953421 unknown[1046]: fetched base config from "system"
Dec 13 02:18:24.953432 unknown[1046]: fetched base config from "system"
Dec 13 02:18:24.954846 ignition[1046]: fetch: fetch complete
Dec 13 02:18:24.953438 unknown[1046]: fetched user config from "aws"
Dec 13 02:18:24.954853 ignition[1046]: fetch: fetch passed
Dec 13 02:18:24.954907 ignition[1046]: Ignition finished successfully
Dec 13 02:18:24.960470 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:18:24.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:24.961738 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:18:24.979116 ignition[1052]: Ignition 2.14.0
Dec 13 02:18:24.979130 ignition[1052]: Stage: kargs
Dec 13 02:18:24.979446 ignition[1052]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:24.979483 ignition[1052]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:24.990385 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:24.992018 ignition[1052]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:24.997194 ignition[1052]: INFO     : PUT result: OK
Dec 13 02:18:25.001173 ignition[1052]: kargs: kargs passed
Dec 13 02:18:25.001468 ignition[1052]: Ignition finished successfully
Dec 13 02:18:25.005570 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:18:25.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.008469 systemd[1]: Starting ignition-disks.service...
Dec 13 02:18:25.018726 ignition[1058]: Ignition 2.14.0
Dec 13 02:18:25.019023 ignition[1058]: Stage: disks
Dec 13 02:18:25.020613 ignition[1058]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:25.022043 ignition[1058]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:25.038955 ignition[1058]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:25.041708 ignition[1058]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:25.045502 ignition[1058]: INFO     : PUT result: OK
Dec 13 02:18:25.052808 ignition[1058]: disks: disks passed
Dec 13 02:18:25.052890 ignition[1058]: Ignition finished successfully
Dec 13 02:18:25.055745 systemd[1]: Finished ignition-disks.service.
Dec 13 02:18:25.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.056777 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:18:25.059596 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:18:25.060833 systemd[1]: Reached target local-fs.target.
Dec 13 02:18:25.062102 systemd[1]: Reached target sysinit.target.
Dec 13 02:18:25.064555 systemd[1]: Reached target basic.target.
Dec 13 02:18:25.069804 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:18:25.105321 systemd-fsck[1066]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 02:18:25.109506 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:18:25.123772 kernel: kauditd_printk_skb: 22 callbacks suppressed
Dec 13 02:18:25.123832 kernel: audit: type=1130 audit(1734056305.109:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
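[Annotation] The systemd-fsck line a few entries above is the usual e2fsck summary: inodes-in-use/inodes-total followed by blocks-in-use/blocks-total. A small sketch (hypothetical helper, just to show how the summary reads):

    import re

    def parse_fsck_summary(line):
        # e.g. "ROOT: clean, 621/553520 files, 56021/553472 blocks"
        m = re.search(r"clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
        if not m:
            return None
        inodes_used, inodes_total, blocks_used, blocks_total = map(int, m.groups())
        return {
            "inodes_used": inodes_used, "inodes_total": inodes_total,
            "blocks_used": blocks_used, "blocks_total": blocks_total,
        }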
Dec 13 02:18:25.111969 systemd[1]: Mounting sysroot.mount...
Dec 13 02:18:25.142408 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:18:25.145060 systemd[1]: Mounted sysroot.mount.
Dec 13 02:18:25.150618 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:18:25.166457 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:18:25.169044 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 02:18:25.169156 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:18:25.169195 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:18:25.178183 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:18:25.197330 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:18:25.202258 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:18:25.216902 initrd-setup-root[1088]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:18:25.222309 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1083)
Dec 13 02:18:25.226311 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:18:25.226368 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:18:25.226394 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:18:25.233294 initrd-setup-root[1112]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:18:25.239169 initrd-setup-root[1120]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:18:25.244899 initrd-setup-root[1128]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:18:25.257255 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:18:25.267098 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:18:25.414707 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:18:25.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.417786 systemd[1]: Starting ignition-mount.service...
Dec 13 02:18:25.420463 kernel: audit: type=1130 audit(1734056305.415:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.424343 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:18:25.431397 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:18:25.431520 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:18:25.461700 ignition[1149]: INFO     : Ignition 2.14.0
Dec 13 02:18:25.461700 ignition[1149]: INFO     : Stage: mount
Dec 13 02:18:25.463713 ignition[1149]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:25.463713 ignition[1149]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:25.472539 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:18:25.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.478297 kernel: audit: type=1130 audit(1734056305.472:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.479196 ignition[1149]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:25.480807 ignition[1149]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:25.483724 ignition[1149]: INFO     : PUT result: OK
Dec 13 02:18:25.486561 ignition[1149]: INFO     : mount: mount passed
Dec 13 02:18:25.487961 ignition[1149]: INFO     : Ignition finished successfully
Dec 13 02:18:25.489499 systemd[1]: Finished ignition-mount.service.
Dec 13 02:18:25.494790 kernel: audit: type=1130 audit(1734056305.489:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:25.495663 systemd[1]: Starting ignition-files.service...
Dec 13 02:18:25.506058 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:18:25.523307 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1158)
Dec 13 02:18:25.526142 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:18:25.526199 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:18:25.526211 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:18:25.533315 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:18:25.537707 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:18:25.553790 ignition[1177]: INFO     : Ignition 2.14.0
Dec 13 02:18:25.553790 ignition[1177]: INFO     : Stage: files
Dec 13 02:18:25.553790 ignition[1177]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:25.558711 ignition[1177]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:25.571691 ignition[1177]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:25.573364 ignition[1177]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:25.576209 ignition[1177]: INFO     : PUT result: OK
Dec 13 02:18:25.581372 ignition[1177]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 02:18:25.594071 ignition[1177]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 02:18:25.594071 ignition[1177]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:18:25.612275 ignition[1177]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:18:25.613951 ignition[1177]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 02:18:25.616204 unknown[1177]: wrote ssh authorized keys file for user: core
Dec 13 02:18:25.617441 ignition[1177]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:18:25.630112 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 02:18:25.632447 ignition[1177]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:18:25.641502 ignition[1177]: INFO     : op(1): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem10301099"
Dec 13 02:18:25.644081 ignition[1177]: CRITICAL : op(1): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem10301099": device or resource busy
Dec 13 02:18:25.644081 ignition[1177]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem10301099", trying btrfs: device or resource busy
Dec 13 02:18:25.652156 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1180)
Dec 13 02:18:25.652202 ignition[1177]: INFO     : op(2): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem10301099"
Dec 13 02:18:25.654637 ignition[1177]: INFO     : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem10301099"
Dec 13 02:18:25.658446 ignition[1177]: INFO     : op(3): [started]  unmounting "/mnt/oem10301099"
Dec 13 02:18:25.660259 ignition[1177]: INFO     : op(3): [finished] unmounting "/mnt/oem10301099"
Dec 13 02:18:25.660259 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 02:18:25.660259 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 02:18:25.666690 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:18:25.671357 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:18:25.671357 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:18:25.671357 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:18:25.671357 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:18:25.671357 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 02:18:25.671357 ignition[1177]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:18:25.684740 ignition[1177]: INFO     : op(4): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883865329"
Dec 13 02:18:25.684740 ignition[1177]: CRITICAL : op(4): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883865329": device or resource busy
Dec 13 02:18:25.684740 ignition[1177]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1883865329", trying btrfs: device or resource busy
Dec 13 02:18:25.684740 ignition[1177]: INFO     : op(5): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883865329"
Dec 13 02:18:25.684740 ignition[1177]: INFO     : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883865329"
Dec 13 02:18:25.684740 ignition[1177]: INFO     : op(6): [started]  unmounting "/mnt/oem1883865329"
Dec 13 02:18:25.684740 ignition[1177]: INFO     : op(6): [finished] unmounting "/mnt/oem1883865329"
Dec 13 02:18:25.684740 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 02:18:25.684740 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 02:18:25.684740 ignition[1177]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:18:25.707967 ignition[1177]: INFO     : op(7): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312930474"
Dec 13 02:18:25.707967 ignition[1177]: CRITICAL : op(7): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312930474": device or resource busy
Dec 13 02:18:25.707967 ignition[1177]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2312930474", trying btrfs: device or resource busy
Dec 13 02:18:25.707967 ignition[1177]: INFO     : op(8): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312930474"
Dec 13 02:18:25.707967 ignition[1177]: INFO     : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312930474"
Dec 13 02:18:25.707967 ignition[1177]: INFO     : op(9): [started]  unmounting "/mnt/oem2312930474"
Dec 13 02:18:25.707967 ignition[1177]: INFO     : op(9): [finished] unmounting "/mnt/oem2312930474"
Dec 13 02:18:25.707967 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 02:18:25.707967 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 02:18:25.707967 ignition[1177]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:18:25.748275 ignition[1177]: INFO     : op(a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem356078551"
Dec 13 02:18:25.750130 ignition[1177]: CRITICAL : op(a): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem356078551": device or resource busy
Dec 13 02:18:25.750130 ignition[1177]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem356078551", trying btrfs: device or resource busy
Dec 13 02:18:25.750130 ignition[1177]: INFO     : op(b): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem356078551"
Dec 13 02:18:25.750130 ignition[1177]: INFO     : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem356078551"
Dec 13 02:18:25.758328 ignition[1177]: INFO     : op(c): [started]  unmounting "/mnt/oem356078551"
Dec 13 02:18:25.758328 ignition[1177]: INFO     : op(c): [finished] unmounting "/mnt/oem356078551"
Dec 13 02:18:25.758328 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 02:18:25.758328 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:18:25.758328 ignition[1177]: INFO     : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 02:18:25.808807 systemd-networkd[1022]: eth0: Gained IPv6LL
Dec 13 02:18:26.242695 ignition[1177]: INFO     : GET result: OK
Dec 13 02:18:26.731832 ignition[1177]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 02:18:26.731832 ignition[1177]: INFO     : files: op(b): [started]  processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:18:26.731832 ignition[1177]: INFO     : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:18:26.731832 ignition[1177]: INFO     : files: op(c): [started]  processing unit "amazon-ssm-agent.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(c): op(d): [started]  writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(c): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(e): [started]  processing unit "nvidia.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(e): [finished] processing unit "nvidia.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(f): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(10): [started]  setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(10): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(11): [started]  setting preset to enabled for "nvidia.service"
Dec 13 02:18:26.739599 ignition[1177]: INFO     : files: op(11): [finished] setting preset to enabled for "nvidia.service"
Dec 13 02:18:26.771009 ignition[1177]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:18:26.772915 ignition[1177]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:18:26.774941 ignition[1177]: INFO     : files: files passed
Dec 13 02:18:26.774941 ignition[1177]: INFO     : Ignition finished successfully
Dec 13 02:18:26.778496 systemd[1]: Finished ignition-files.service.
Dec 13 02:18:26.793277 kernel: audit: type=1130 audit(1734056306.778:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
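[Annotation] The files stage above writes /etc/eks/bootstrap.sh, /etc/flatcar/update.conf, the SSM agent configuration, an nvidia.service unit, and the kubernetes sysext image plus its /etc/extensions link, then enables three units and installs SSH keys for the core user. A rough Python sketch of the shape of an Ignition (v3-style) config that would drive operations like these; the paths and the sysext download URL come from the log, while the file contents, mode bits and SSH key are placeholders, not the instance's real user data:

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder key)"]},
            ],
        },
        "storage": {
            "files": [
                # data: URL decodes to "#!/bin/bash\n" -- placeholder body
                {"path": "/etc/eks/bootstrap.sh", "mode": 0o755,
                 "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},
                # placeholder body; the real contents are not in the log
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,SERVER%3Ddisabled%0A"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "amazon-ssm-agent.service", "enabled": True},
                {"name": "nvidia.service", "enabled": True},
            ],
        },
    }

    # mode 0o755 serializes to the decimal 493 that the JSON spec expects
    print(json.dumps(config, indent=2))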
Dec 13 02:18:26.799708 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:18:26.803273 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:18:26.804482 systemd[1]: Starting ignition-quench.service...
Dec 13 02:18:26.810726 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:18:26.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.811574 systemd[1]: Finished ignition-quench.service.
Dec 13 02:18:26.824777 kernel: audit: type=1130 audit(1734056306.812:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.824813 kernel: audit: type=1131 audit(1734056306.812:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.834124 initrd-setup-root-after-ignition[1202]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:18:26.847991 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:18:26.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.852979 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:18:26.860251 kernel: audit: type=1130 audit(1734056306.851:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.864908 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:18:26.890072 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:18:26.890206 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:18:26.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.895703 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:18:26.907989 kernel: audit: type=1130 audit(1734056306.894:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.908023 kernel: audit: type=1131 audit(1734056306.894:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.907974 systemd[1]: Reached target initrd.target.
Dec 13 02:18:26.913614 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:18:26.916558 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:18:26.934108 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:18:26.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.937168 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:18:26.956866 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:18:26.958706 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:18:26.961572 systemd[1]: Stopped target timers.target.
Dec 13 02:18:26.963380 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:18:26.964531 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:18:26.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:26.966560 systemd[1]: Stopped target initrd.target.
Dec 13 02:18:26.968432 systemd[1]: Stopped target basic.target.
Dec 13 02:18:26.971383 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:18:26.974261 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:18:26.976057 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:18:26.978537 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:18:26.983569 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:18:26.984957 systemd[1]: Stopped target sysinit.target.
Dec 13 02:18:26.990310 systemd[1]: Stopped target local-fs.target.
Dec 13 02:18:26.993077 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:18:26.997155 systemd[1]: Stopped target swap.target.
Dec 13 02:18:26.999305 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:18:26.999586 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:18:27.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.007198 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:18:27.009479 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:18:27.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.009941 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:18:27.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.012149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:18:27.012457 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:18:27.013699 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:18:27.013798 systemd[1]: Stopped ignition-files.service.
Dec 13 02:18:27.018100 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:18:27.025463 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:18:27.026376 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:18:27.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.035896 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:18:27.046479 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:18:27.047006 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:18:27.050778 ignition[1215]: INFO     : Ignition 2.14.0
Dec 13 02:18:27.050778 ignition[1215]: INFO     : Stage: umount
Dec 13 02:18:27.050778 ignition[1215]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:18:27.050778 ignition[1215]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:18:27.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.059234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:18:27.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.059417 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:18:27.068133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:18:27.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.086533 ignition[1215]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:18:27.086533 ignition[1215]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:18:27.086533 ignition[1215]: INFO     : PUT result: OK
Dec 13 02:18:27.086533 ignition[1215]: INFO     : umount: umount passed
Dec 13 02:18:27.086533 ignition[1215]: INFO     : Ignition finished successfully
Dec 13 02:18:27.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.068600 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:18:27.087366 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:18:27.087476 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:18:27.090673 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:18:27.090768 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:18:27.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.092587 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:18:27.092648 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:18:27.095507 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:18:27.095558 systemd[1]: Stopped ignition-fetch.service.
Dec 13 02:18:27.102480 systemd[1]: Stopped target network.target.
Dec 13 02:18:27.104869 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:18:27.105021 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:18:27.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.112063 systemd[1]: Stopped target paths.target.
Dec 13 02:18:27.113733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:18:27.118375 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:18:27.118516 systemd[1]: Stopped target slices.target.
Dec 13 02:18:27.122376 systemd[1]: Stopped target sockets.target.
Dec 13 02:18:27.124204 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:18:27.124262 systemd[1]: Closed iscsid.socket.
Dec 13 02:18:27.125915 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:18:27.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.125957 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:18:27.128624 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:18:27.128707 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:18:27.131193 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:18:27.132681 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:18:27.134542 systemd-networkd[1022]: eth0: DHCPv6 lease lost
Dec 13 02:18:27.140471 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:18:27.145292 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:18:27.145439 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:18:27.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.150654 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:18:27.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.150818 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:18:27.151000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:18:27.153365 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:18:27.153000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:18:27.153412 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:18:27.157948 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:18:27.161637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:18:27.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.161734 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:18:27.162875 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:18:27.162942 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:18:27.168314 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:18:27.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.168388 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:18:27.178898 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:18:27.182676 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:18:27.191856 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:18:27.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.191970 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:18:27.196965 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:18:27.198940 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:18:27.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.201191 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:18:27.201267 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:18:27.209385 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:18:27.209605 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:18:27.212052 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:18:27.212113 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:18:27.217400 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:18:27.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.217462 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:18:27.218574 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:18:27.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.218621 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:18:27.220851 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:18:27.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.221839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:18:27.222115 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:18:27.239017 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:18:27.239370 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:18:27.242157 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:18:27.242239 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:18:27.252865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:18:27.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.252973 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:18:27.254847 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:18:27.258192 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:18:27.272653 systemd[1]: Switching root.
Dec 13 02:18:27.307521 systemd-journald[185]: Journal stopped
Dec 13 02:18:32.249298 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:18:32.249390 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 02:18:32.249413 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 02:18:32.249438 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:18:32.249499 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:18:32.249528 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:18:32.249557 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:18:32.249575 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:18:32.249595 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:18:32.249614 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:18:32.249633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 02:18:32.249652 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 02:18:32.249672 systemd[1]: Successfully loaded SELinux policy in 90.537ms.
Dec 13 02:18:32.249700 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.879ms.
Dec 13 02:18:32.249725 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
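[Annotation] In the systemd version banner above, a "+NAME" token means the binary was built with that feature and "-NAME" means it was built without it. A quick sketch for splitting the banner into a feature map (the string here is abbreviated from the line above):

    banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -TPM2 +ZSTD"
    features = {tok[1:]: tok.startswith("+")
                for tok in banner.split()
                if tok[:1] in "+-"}
    # e.g. features["SELINUX"] is True, features["APPARMOR"] is False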
Dec 13 02:18:32.249747 systemd[1]: Detected virtualization amazon.
Dec 13 02:18:32.249767 systemd[1]: Detected architecture x86-64.
Dec 13 02:18:32.249788 systemd[1]: Detected first boot.
Dec 13 02:18:32.249809 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:18:32.249830 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:18:32.249850 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:18:32.249874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:18:32.249934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:18:32.249958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
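[Annotation] The three warnings above ask for mechanical unit-file updates: MemoryLimit= becomes MemoryMax=, the docker.socket path moves from /var/run/ to /run/, and CPUShares= is superseded by CPUWeight= (which uses a different value range, so the number itself needs converting by hand). A tiny sketch of the purely textual part of that migration; a hypothetical helper, not a systemd tool:

    RENAMES = {
        "MemoryLimit=": "MemoryMax=",
        "ListenStream=/var/run/docker.sock": "ListenStream=/run/docker.sock",
        # CPUShares= -> CPUWeight= is deliberately left out: the directive name
        # changes, but the value scale changes too and must be reviewed by hand.
    }

    def migrate_unit_text(text: str) -> str:
        for old, new in RENAMES.items():
            text = text.replace(old, new)
        return text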
Dec 13 02:18:32.249979 kernel: kauditd_printk_skb: 50 callbacks suppressed
Dec 13 02:18:32.249999 kernel: audit: type=1334 audit(1734056311.828:86): prog-id=12 op=LOAD
Dec 13 02:18:32.250018 kernel: audit: type=1334 audit(1734056311.828:87): prog-id=3 op=UNLOAD
Dec 13 02:18:32.250040 kernel: audit: type=1334 audit(1734056311.829:88): prog-id=13 op=LOAD
Dec 13 02:18:32.250058 kernel: audit: type=1334 audit(1734056311.830:89): prog-id=14 op=LOAD
Dec 13 02:18:32.250077 kernel: audit: type=1334 audit(1734056311.830:90): prog-id=4 op=UNLOAD
Dec 13 02:18:32.250096 kernel: audit: type=1334 audit(1734056311.830:91): prog-id=5 op=UNLOAD
Dec 13 02:18:32.250115 kernel: audit: type=1334 audit(1734056311.833:92): prog-id=15 op=LOAD
Dec 13 02:18:32.250133 kernel: audit: type=1334 audit(1734056311.833:93): prog-id=12 op=UNLOAD
Dec 13 02:18:32.250153 kernel: audit: type=1334 audit(1734056311.834:94): prog-id=16 op=LOAD
Dec 13 02:18:32.250172 kernel: audit: type=1334 audit(1734056311.835:95): prog-id=17 op=LOAD
Dec 13 02:18:32.250195 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:18:32.252005 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:18:32.252051 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:18:32.252072 systemd[1]: Stopped iscsid.service.
Dec 13 02:18:32.252094 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:18:32.252114 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:18:32.252136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:18:32.252157 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:18:32.252178 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:18:32.252204 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:18:32.252245 systemd[1]: Created slice system-getty.slice.
Dec 13 02:18:32.252267 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:18:32.252288 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:18:32.252309 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:18:32.252330 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:18:32.252350 systemd[1]: Created slice user.slice.
Dec 13 02:18:32.252371 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:18:32.252391 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:18:32.252416 systemd[1]: Set up automount boot.automount.
Dec 13 02:18:32.252436 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:18:32.252457 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:18:32.252477 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:18:32.252499 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:18:32.252519 systemd[1]: Reached target integritysetup.target.
Dec 13 02:18:32.252539 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:18:32.252560 systemd[1]: Reached target remote-fs.target.
Dec 13 02:18:32.252580 systemd[1]: Reached target slices.target.
Dec 13 02:18:32.252604 systemd[1]: Reached target swap.target.
Dec 13 02:18:32.252626 systemd[1]: Reached target torcx.target.
Dec 13 02:18:32.252647 systemd[1]: Reached target veritysetup.target.
Dec 13 02:18:32.252669 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:18:32.252689 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:18:32.252715 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:18:32.252735 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:18:32.252755 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:18:32.252776 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:18:32.252795 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:18:32.252818 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:18:32.252841 systemd[1]: Mounting media.mount...
Dec 13 02:18:32.252861 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:18:32.252882 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:18:32.252902 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:18:32.252923 systemd[1]: Mounting tmp.mount...
Dec 13 02:18:32.252943 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:18:32.252963 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:18:32.252986 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:18:32.253006 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:18:32.255706 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:18:32.255751 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:18:32.255773 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:18:32.255794 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:18:32.255815 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:18:32.255837 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:18:32.255859 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:18:32.255885 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:18:32.255907 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:18:32.255926 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:18:32.255944 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:18:32.255963 systemd[1]: Starting systemd-journald.service...
Dec 13 02:18:32.255982 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:18:32.257253 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:18:32.257292 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:18:32.257313 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:18:32.257335 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:18:32.257362 systemd[1]: Stopped verity-setup.service.
Dec 13 02:18:32.257383 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:18:32.257404 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:18:32.257425 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:18:32.257444 systemd[1]: Mounted media.mount.
Dec 13 02:18:32.257582 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:18:32.257604 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:18:32.257624 systemd[1]: Mounted tmp.mount.
Dec 13 02:18:32.257644 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:18:32.257669 kernel: loop: module loaded
Dec 13 02:18:32.257691 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:18:32.257711 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:18:32.257732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:18:32.257754 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:18:32.257778 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:18:32.257809 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:18:32.257833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:18:32.257855 kernel: fuse: init (API version 7.34)
Dec 13 02:18:32.257879 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:18:32.257908 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:18:32.257930 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:18:32.257951 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:18:32.257973 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:18:32.257997 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:18:32.258018 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:18:32.258039 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:18:32.258060 systemd[1]: Reached target network-pre.target.
Dec 13 02:18:32.258082 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:18:32.258103 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:18:32.258124 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:18:32.258145 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:18:32.258168 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:18:32.258192 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:18:32.258213 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:18:32.260028 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:18:32.260057 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:18:32.260078 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:18:32.260099 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:18:32.260121 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:18:32.260147 systemd-journald[1323]: Journal started
Dec 13 02:18:32.260268 systemd-journald[1323]: Runtime Journal (/run/log/journal/ec2673cb3970df0d0e1769e39acf13a6) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:18:27.954000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:18:28.064000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:18:28.064000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:18:28.065000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:18:28.065000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:18:28.065000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:18:28.065000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:18:28.238000 audit[1250]: AVC avc:  denied  { associate } for  pid=1250 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:18:28.238000 audit[1250]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1233 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:28.238000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:18:28.240000 audit[1250]: AVC avc:  denied  { associate } for  pid=1250 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:18:28.240000 audit[1250]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=1233 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:28.240000 audit: CWD cwd="/"
Dec 13 02:18:28.240000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:28.240000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:28.240000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
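[Annotation] The PROCTITLE fields in the audit records above are the generator's command line, hex-encoded with NUL bytes between arguments (and truncated by the kernel for long command lines). A small sketch for turning such a field back into readable text:

    def decode_proctitle(hex_field: str) -> str:
        raw = bytes.fromhex(hex_field)
        # Arguments are NUL-separated; long command lines are truncated.
        return " ".join(p.decode(errors="replace") for p in raw.split(b"\x00"))

    # The records above decode to the torcx-generator invocation:
    #   /usr/lib/systemd/system-generators/torcx-generator /run/systemd/generator ...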
Dec 13 02:18:31.828000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:18:31.828000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:18:31.829000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:18:31.830000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:18:31.830000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:18:31.830000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:18:31.833000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:18:31.833000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:18:31.834000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:18:31.835000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:18:31.836000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:18:31.836000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:18:31.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:31.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:31.847000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:18:32.263451 systemd[1]: Started systemd-journald.service.
Dec 13 02:18:31.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:31.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:31.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.060000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:18:32.060000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:18:32.061000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:18:32.061000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:18:32.061000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:18:32.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.238000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:18:32.238000 audit[1323]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe834e7760 a2=4000 a3=7ffe834e77fc items=0 ppid=1 pid=1323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:32.238000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:18:32.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:31.826498 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:18:32.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.280474 systemd-journald[1323]: Time spent on flushing to /var/log/journal/ec2673cb3970df0d0e1769e39acf13a6 is 77.383ms for 1148 entries.
Dec 13 02:18:32.280474 systemd-journald[1323]: System Journal (/var/log/journal/ec2673cb3970df0d0e1769e39acf13a6) is 8.0M, max 195.6M, 187.6M free.
Dec 13 02:18:32.368767 systemd-journald[1323]: Received client request to flush runtime journal.
Dec 13 02:18:32.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:28.227508 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:18:31.838734 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:18:28.228052 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:18:32.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.268193 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:18:28.228079 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:18:32.275593 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:18:28.228125 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:18:32.331393 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:18:28.228142 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:18:32.334307 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:18:28.228191 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:18:32.342962 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:18:28.228212 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:18:32.345830 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:18:28.228499 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:18:32.370995 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:18:28.228811 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:18:28.228834 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:18:28.237761 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:18:28.237828 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:18:28.237886 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 02:18:28.237910 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 02:18:28.237940 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 02:18:28.237963 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 02:18:31.305099 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:18:31.305368 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:18:31.305683 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:18:31.305873 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:18:31.305939 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 02:18:31.306005 /usr/lib/systemd/system-generators/torcx-generator[1250]: time="2024-12-13T02:18:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 02:18:32.378093 udevadm[1366]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 02:18:32.406711 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:18:32.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.002391 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:18:33.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.002000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:18:33.003000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:18:33.003000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:18:33.003000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:18:33.005083 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:18:33.024727 systemd-udevd[1368]: Using default interface naming scheme 'v252'.
Dec 13 02:18:33.076549 systemd[1]: Started systemd-udevd.service.
Dec 13 02:18:33.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.078000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:18:33.080514 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:18:33.102000 audit: BPF prog-id=24 op=LOAD
Dec 13 02:18:33.102000 audit: BPF prog-id=25 op=LOAD
Dec 13 02:18:33.102000 audit: BPF prog-id=26 op=LOAD
Dec 13 02:18:33.104761 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:18:33.167727 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 02:18:33.196303 (udev-worker)[1374]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:33.204663 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:18:33.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.280736 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 02:18:33.286336 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:18:33.286430 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 02:18:33.295264 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 02:18:33.341000 audit[1382]: AVC avc:  denied  { confidentiality } for  pid=1382 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:18:33.341000 audit[1382]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556a36c88c80 a1=337fc a2=7f2b31c6bbc5 a3=5 items=110 ppid=1368 pid=1382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:33.341000 audit: CWD cwd="/"
Dec 13 02:18:33.341000 audit: PATH item=0 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=1 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=2 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=3 name=(null) inode=13767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=4 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=5 name=(null) inode=13768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=6 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=7 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=8 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=9 name=(null) inode=13770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=10 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=11 name=(null) inode=13771 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=12 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=13 name=(null) inode=13772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=14 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=15 name=(null) inode=13773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=16 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=17 name=(null) inode=13774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=18 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=19 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=20 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=21 name=(null) inode=13776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=22 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=23 name=(null) inode=13777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=24 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=25 name=(null) inode=13778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=26 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=27 name=(null) inode=13779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=28 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=29 name=(null) inode=13780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=30 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=31 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=32 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=33 name=(null) inode=13782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=34 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=35 name=(null) inode=13783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=36 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=37 name=(null) inode=13784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=38 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=39 name=(null) inode=13785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=40 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=41 name=(null) inode=13786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=42 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=43 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=44 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=45 name=(null) inode=13788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=46 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=47 name=(null) inode=13789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=48 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=49 name=(null) inode=13790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=50 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=51 name=(null) inode=13791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=52 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=53 name=(null) inode=13792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=54 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=55 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=56 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=57 name=(null) inode=13794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=58 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=59 name=(null) inode=13795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=60 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=61 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=62 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=63 name=(null) inode=13797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=64 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=65 name=(null) inode=13798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=66 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=67 name=(null) inode=13799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=68 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=69 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=70 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=71 name=(null) inode=13801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=72 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=73 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=74 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=75 name=(null) inode=13803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=76 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=77 name=(null) inode=13804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=78 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=79 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=80 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=81 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=82 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=83 name=(null) inode=13807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=84 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=85 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=86 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=87 name=(null) inode=13809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=88 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=89 name=(null) inode=13810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=90 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=91 name=(null) inode=13811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=92 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=93 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=94 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=95 name=(null) inode=13813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=96 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=97 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=98 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=99 name=(null) inode=13815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=100 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=101 name=(null) inode=13816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=102 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=103 name=(null) inode=13817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=104 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=105 name=(null) inode=13818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=106 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=107 name=(null) inode=13819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PATH item=109 name=(null) inode=13820 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:18:33.341000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 02:18:33.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.381788 systemd-networkd[1377]: lo: Link UP
Dec 13 02:18:33.381795 systemd-networkd[1377]: lo: Gained carrier
Dec 13 02:18:33.382666 systemd-networkd[1377]: Enumeration completed
Dec 13 02:18:33.382790 systemd[1]: Started systemd-networkd.service.
Dec 13 02:18:33.385641 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:18:33.386550 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:18:33.394643 systemd-networkd[1377]: eth0: Link UP
Dec 13 02:18:33.395285 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:18:33.394867 systemd-networkd[1377]: eth0: Gained carrier
Dec 13 02:18:33.404476 systemd-networkd[1377]: eth0: DHCPv4 address 172.31.16.8/20, gateway 172.31.16.1 acquired from 172.31.16.1
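eth0 is matched by the vendor catch-all policy at /usr/lib/systemd/network/zz-default.network and then acquires its lease over DHCPv4. For orientation only, a catch-all DHCP policy in systemd-networkd is typically along these lines (an illustrative sketch, not the literal contents of zz-default.network, which the log does not show):

    # Illustrative .network unit; the shipped zz-default.network may differ.
    [Match]
    Name=*

    [Network]
    DHCP=yes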
Dec 13 02:18:33.433962 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1372)
Dec 13 02:18:33.446132 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 02:18:33.460565 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 02:18:33.460599 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 02:18:33.579603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:18:33.707670 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 02:18:33.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.710165 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 02:18:33.745978 lvm[1482]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:18:33.782884 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 02:18:33.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.784333 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:18:33.789131 systemd[1]: Starting lvm2-activation.service...
Dec 13 02:18:33.799260 lvm[1483]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:18:33.825505 systemd[1]: Finished lvm2-activation.service.
Dec 13 02:18:33.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.826685 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:18:33.827771 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:18:33.827942 systemd[1]: Reached target local-fs.target.
Dec 13 02:18:33.829002 systemd[1]: Reached target machines.target.
Dec 13 02:18:33.831934 systemd[1]: Starting ldconfig.service...
Dec 13 02:18:33.834537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:18:33.834600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:33.836640 systemd[1]: Starting systemd-boot-update.service...
Dec 13 02:18:33.851951 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 02:18:33.862519 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 02:18:33.875387 systemd[1]: Starting systemd-sysext.service...
Dec 13 02:18:33.889153 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1485 (bootctl)
Dec 13 02:18:33.892173 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 02:18:33.910593 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 02:18:33.919375 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 02:18:33.919635 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 02:18:33.946250 kernel: loop0: detected capacity change from 0 to 210664
Dec 13 02:18:33.946383 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 02:18:33.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.067385 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:18:34.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.069596 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 02:18:34.073249 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:18:34.091846 systemd-fsck[1496]: fsck.fat 4.2 (2021-01-31)
Dec 13 02:18:34.091846 systemd-fsck[1496]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters
Dec 13 02:18:34.095387 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 02:18:34.099956 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 02:18:34.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.098936 systemd[1]: Mounting boot.mount...
Dec 13 02:18:34.136501 systemd[1]: Mounted boot.mount.
Dec 13 02:18:34.142943 (sd-sysext)[1499]: Using extensions 'kubernetes'.
Dec 13 02:18:34.146387 (sd-sysext)[1499]: Merged extensions into '/usr'.
Dec 13 02:18:34.176506 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:18:34.178831 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 02:18:34.184554 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.187090 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:18:34.190926 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:18:34.193687 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:18:34.197369 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.197637 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:34.197824 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:18:34.202402 systemd[1]: Finished systemd-boot-update.service.
Dec 13 02:18:34.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.205017 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 02:18:34.206827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:18:34.207004 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:18:34.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.208655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:18:34.208816 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:18:34.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.210966 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:18:34.211442 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:18:34.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.212884 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:18:34.213009 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.214675 systemd[1]: Finished systemd-sysext.service.
Dec 13 02:18:34.217439 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:18:34.220771 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 02:18:34.238630 systemd[1]: Reloading.
Dec 13 02:18:34.275463 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 02:18:34.291739 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:18:34.315763 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:18:34.391840 /usr/lib/systemd/system-generators/torcx-generator[1537]: time="2024-12-13T02:18:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:18:34.391890 /usr/lib/systemd/system-generators/torcx-generator[1537]: time="2024-12-13T02:18:34Z" level=info msg="torcx already run"
Dec 13 02:18:34.616115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:18:34.616282 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
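The two locksmithd.service warnings above name their own replacements: CPUWeight= supersedes CPUShares= and MemoryMax= supersedes MemoryLimit=. A sketch of the modernized [Service] directives; the numeric values are placeholders, not the values from the shipped unit:

    [Service]
    # Replaces the deprecated CPUShares= (cgroup v2 weight).
    CPUWeight=100
    # Replaces the deprecated MemoryLimit= (hard memory cap).
    MemoryMax=128M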
Dec 13 02:18:34.662017 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
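systemd already rewrites the legacy socket path at runtime (/var/run/docker.sock → /run/docker.sock), so the docker.socket warning is informational; updating the unit as the message asks simply makes the change permanent. A minimal override sketch, assuming the rest of docker.socket stays as generated (the drop-in file name is illustrative):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # ListenStream= is list-valued: the empty assignment clears the inherited
    # legacy path before the non-legacy one is added back.
    ListenStream=
    ListenStream=/run/docker.sock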
Dec 13 02:18:34.765000 audit: BPF prog-id=27 op=LOAD
Dec 13 02:18:34.765000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:18:34.766000 audit: BPF prog-id=28 op=LOAD
Dec 13 02:18:34.766000 audit: BPF prog-id=29 op=LOAD
Dec 13 02:18:34.766000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:18:34.766000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:18:34.767000 audit: BPF prog-id=30 op=LOAD
Dec 13 02:18:34.767000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 02:18:34.769000 audit: BPF prog-id=31 op=LOAD
Dec 13 02:18:34.769000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 02:18:34.769000 audit: BPF prog-id=32 op=LOAD
Dec 13 02:18:34.770000 audit: BPF prog-id=33 op=LOAD
Dec 13 02:18:34.770000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 02:18:34.770000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 02:18:34.773000 audit: BPF prog-id=34 op=LOAD
Dec 13 02:18:34.773000 audit: BPF prog-id=35 op=LOAD
Dec 13 02:18:34.773000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 02:18:34.773000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 02:18:34.784677 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 02:18:34.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.799843 systemd[1]: Starting audit-rules.service...
Dec 13 02:18:34.806308 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 02:18:34.813426 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 02:18:34.815000 audit: BPF prog-id=36 op=LOAD
Dec 13 02:18:34.820000 audit: BPF prog-id=37 op=LOAD
Dec 13 02:18:34.818488 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:18:34.828960 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 02:18:34.833129 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 02:18:34.848563 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.851715 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:18:34.854619 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:18:34.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.857674 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:18:34.859315 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.859499 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:34.869000 audit[1597]: SYSTEM_BOOT pid=1597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.860656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:18:34.861052 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:18:34.863079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:18:34.863403 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:18:34.864945 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:18:34.865115 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:18:34.866545 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:18:34.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.866683 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.892554 ldconfig[1484]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:18:34.869449 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 02:18:34.877324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.879577 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:18:34.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.882585 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:18:34.886732 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:18:34.887687 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.887946 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:34.888172 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:18:34.890056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:18:34.890273 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:18:34.891949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:18:34.892123 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:18:34.894096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:18:34.894369 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:18:34.896363 systemd-networkd[1377]: eth0: Gained IPv6LL
Dec 13 02:18:34.896420 systemd[1]: Finished ldconfig.service.
Dec 13 02:18:34.899050 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:18:34.899200 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.905429 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 02:18:34.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.907429 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 02:18:34.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.915544 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.921736 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:18:34.932334 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:18:34.939177 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:18:34.946416 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:18:34.949014 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:18:34.949258 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:34.949512 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:18:34.955965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:18:34.956170 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:18:34.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.958931 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:18:34.959113 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:18:34.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.960936 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:18:34.961130 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:18:34.964980 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:18:34.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.972135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:18:34.972567 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:18:34.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.974624 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:18:34.974886 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:18:34.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:34.976456 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:18:34.976513 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
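Many of the units above are skipped rather than failed: systemd evaluates their Condition*= settings (ConditionPathExists=, ConditionPathIsSymbolicLink=, ConditionDirectoryNotEmpty=, ...) and a leading "!" negates the test, as in the update-ca-certificates.service check on /etc/ssl/certs/ca-certificates.crt. The following is only an illustrative Python re-implementation of the two path-based checks seen here, not systemd's own code:

    # Illustrative sketch (not systemd source) of the path-based condition checks
    # logged above; a leading "!" negates the test, and a false condition means
    # the unit is skipped, not failed.
    import os

    def condition_path_exists(arg: str) -> bool:
        negate = arg.startswith("!")
        path = arg.lstrip("!")
        found = os.path.exists(path)
        return not found if negate else found

    def condition_path_is_symbolic_link(arg: str) -> bool:
        negate = arg.startswith("!")
        path = arg.lstrip("!")
        is_link = os.path.islink(path)
        return not is_link if negate else is_link

    # Example mirroring the update-ca-certificates.service condition:
    print(condition_path_is_symbolic_link("!/etc/ssl/certs/ca-certificates.crt"))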
Dec 13 02:18:35.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:35.041507 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 02:18:35.055131 systemd[1]: Starting systemd-update-done.service...
Dec 13 02:18:35.088710 systemd[1]: Finished systemd-update-done.service.
Dec 13 02:18:35.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:35.101662 systemd[1]: Started systemd-timesyncd.service.
Dec 13 02:18:35.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:35.103323 systemd[1]: Reached target time-set.target.
Dec 13 02:18:35.104000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 02:18:35.104000 audit[1622]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc246eb390 a2=420 a3=0 items=0 ppid=1591 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:35.104000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 02:18:35.106468 augenrules[1622]: No rules
Dec 13 02:18:35.107547 systemd[1]: Finished audit-rules.service.
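The PROCTITLE field in the audit record above is the command line hex-encoded with NUL bytes separating arguments. Decoding it (standard library only, using the exact hex string from the log) shows the rule load that audit-rules.service just finished:

    # Decode the hex-encoded PROCTITLE value from the audit record above.
    proctitle_hex = (
        "2F7362696E2F617564697463746C002D52"
        "002F6574632F61756469742F61756469742E72756C6573"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /sbin/auditctl -R /etc/audit/audit.rules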
Dec 13 02:18:35.135086 systemd-resolved[1595]: Positive Trust Anchors:
Dec 13 02:18:35.135108 systemd-resolved[1595]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:18:35.135150 systemd-resolved[1595]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:18:35.186256 systemd-resolved[1595]: Defaulting to hostname 'linux'.
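The positive trust anchor printed by systemd-resolved above is the root zone's KSK DS record (key tag 20326, algorithm 8, SHA-256 digest). As a rough cross-check, and assuming the third-party dnspython package plus network access (neither is part of this host's boot), the digest can be recomputed from the live root DNSKEY set:

    # Hedged sketch: recompute the root KSK's DS digest and compare it with the
    # trust anchor systemd-resolved logged above. Requires dnspython and DNS access.
    import dns.dnssec
    import dns.name
    import dns.resolver

    root = dns.name.from_text(".")
    for key in dns.resolver.resolve(root, "DNSKEY"):
        if key.flags == 257:  # Secure Entry Point (key-signing key)
            ds = dns.dnssec.make_ds(root, key, "SHA256")
            print(dns.dnssec.key_id(key), ds.digest.hex())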
Dec 13 02:18:35.188088 systemd[1]: Started systemd-resolved.service.
Dec 13 02:18:35.189254 systemd[1]: Reached target network.target.
Dec 13 02:18:35.190511 systemd[1]: Reached target network-online.target.
Dec 13 02:18:35.191558 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:18:35.192479 systemd[1]: Reached target sysinit.target.
Dec 13 02:18:35.194097 systemd[1]: Started motdgen.path.
Dec 13 02:18:35.195200 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 02:18:35.197216 systemd[1]: Started logrotate.timer.
Dec 13 02:18:35.198358 systemd[1]: Started mdadm.timer.
Dec 13 02:18:35.199577 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 02:18:35.200603 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:18:35.200746 systemd[1]: Reached target paths.target.
Dec 13 02:18:35.202050 systemd[1]: Reached target timers.target.
Dec 13 02:18:35.203825 systemd[1]: Listening on dbus.socket.
Dec 13 02:18:35.212118 systemd[1]: Starting docker.socket...
Dec 13 02:18:35.216863 systemd[1]: Listening on sshd.socket.
Dec 13 02:18:35.219372 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:35.220891 systemd[1]: Listening on docker.socket.
Dec 13 02:18:35.222146 systemd[1]: Reached target sockets.target.
Dec 13 02:18:35.223503 systemd[1]: Reached target basic.target.
Dec 13 02:18:35.224426 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:18:35.224460 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:18:35.226563 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 02:18:35.229464 systemd[1]: Starting containerd.service...
Dec 13 02:18:35.233118 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 02:18:35.235946 systemd[1]: Starting dbus.service...
Dec 13 02:18:35.238054 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 02:18:35.240428 systemd[1]: Starting extend-filesystems.service...
Dec 13 02:18:35.241609 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 02:18:35.244088 systemd[1]: Starting kubelet.service...
Dec 13 02:18:35.246512 systemd[1]: Starting motdgen.service...
Dec 13 02:18:35.248855 systemd[1]: Started nvidia.service.
Dec 13 02:18:35.251460 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 02:18:35.251531 systemd-timesyncd[1596]: Contacted time server 23.168.136.132:123 (0.flatcar.pool.ntp.org).
Dec 13 02:18:35.251671 systemd-timesyncd[1596]: Initial clock synchronization to Fri 2024-12-13 02:18:35.248092 UTC.
Dec 13 02:18:35.254538 systemd[1]: Starting sshd-keygen.service...
Dec 13 02:18:35.261500 systemd[1]: Starting systemd-logind.service...
Dec 13 02:18:35.262784 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:18:35.262861 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:18:35.263713 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 02:18:35.264973 systemd[1]: Starting update-engine.service...
Dec 13 02:18:35.267719 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 02:18:35.347026 jq[1643]: true
Dec 13 02:18:35.333769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:18:35.333991 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 02:18:35.391249 jq[1634]: false
Dec 13 02:18:35.396067 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:18:35.396586 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 02:18:35.483017 jq[1654]: true
Dec 13 02:18:35.487915 extend-filesystems[1635]: Found loop1
Dec 13 02:18:35.517244 extend-filesystems[1635]: Found nvme0n1
Dec 13 02:18:35.520346 extend-filesystems[1635]: Found nvme0n1p1
Dec 13 02:18:35.523561 extend-filesystems[1635]: Found nvme0n1p2
Dec 13 02:18:35.525588 extend-filesystems[1635]: Found nvme0n1p3
Dec 13 02:18:35.527328 extend-filesystems[1635]: Found usr
Dec 13 02:18:35.528866 extend-filesystems[1635]: Found nvme0n1p4
Dec 13 02:18:35.530370 extend-filesystems[1635]: Found nvme0n1p6
Dec 13 02:18:35.534272 extend-filesystems[1635]: Found nvme0n1p7
Dec 13 02:18:35.539016 extend-filesystems[1635]: Found nvme0n1p9
Dec 13 02:18:35.541347 extend-filesystems[1635]: Checking size of /dev/nvme0n1p9
Dec 13 02:18:35.598378 dbus-daemon[1633]: [system] SELinux support is enabled
Dec 13 02:18:35.599017 systemd[1]: Started dbus.service.
Dec 13 02:18:35.603637 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:18:35.603920 systemd[1]: Finished motdgen.service.
Dec 13 02:18:35.605482 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:18:35.605584 systemd[1]: Reached target system-config.target.
Dec 13 02:18:35.606832 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:18:35.606857 systemd[1]: Reached target user-config.target.
Dec 13 02:18:35.613706 dbus-daemon[1633]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1377 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 02:18:35.619409 extend-filesystems[1635]: Resized partition /dev/nvme0n1p9
Dec 13 02:18:35.631958 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 02:18:35.647519 update_engine[1642]: I1213 02:18:35.645892  1642 main.cc:92] Flatcar Update Engine starting
Dec 13 02:18:35.676254 extend-filesystems[1697]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 02:18:35.678727 amazon-ssm-agent[1630]: 2024/12/13 02:18:35 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 02:18:35.682538 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 02:18:35.684289 amazon-ssm-agent[1630]: Initializing new seelog logger
Dec 13 02:18:35.688782 systemd[1]: Started update-engine.service.
Dec 13 02:18:35.691522 update_engine[1642]: I1213 02:18:35.688838  1642 update_check_scheduler.cc:74] Next update check in 7m41s
Dec 13 02:18:35.691888 amazon-ssm-agent[1630]: New Seelog Logger Creation Complete
Dec 13 02:18:35.692059 amazon-ssm-agent[1630]: 2024/12/13 02:18:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 02:18:35.692150 amazon-ssm-agent[1630]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 02:18:35.692478 amazon-ssm-agent[1630]: 2024/12/13 02:18:35 processing appconfig overrides
Dec 13 02:18:35.694738 systemd[1]: Started locksmithd.service.
Dec 13 02:18:35.778244 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 02:18:35.797598 extend-filesystems[1697]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 02:18:35.797598 extend-filesystems[1697]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 02:18:35.797598 extend-filesystems[1697]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 02:18:35.805613 extend-filesystems[1635]: Resized filesystem in /dev/nvme0n1p9
Dec 13 02:18:35.806902 bash[1696]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:18:35.798064 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:18:35.798340 systemd[1]: Finished extend-filesystems.service.
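A quick check of the numbers in the resize messages above (block counts taken from the EXT4 kernel lines, 4 KiB blocks):

    # Back-of-the-envelope sizes for the on-line resize of /dev/nvme0n1p9.
    BLOCK_SIZE = 4096  # bytes, per the "(4k) blocks" note in the resize output
    old_blocks, new_blocks = 553_472, 1_489_915

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB")
    # -> before: 2.11 GiB, after: 5.68 GiB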
Dec 13 02:18:35.802467 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 02:18:35.826119 env[1647]: time="2024-12-13T02:18:35.826054791Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 02:18:35.898702 systemd-logind[1641]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 02:18:35.902484 systemd-logind[1641]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 02:18:35.902681 systemd-logind[1641]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:18:35.910472 systemd-logind[1641]: New seat seat0.
Dec 13 02:18:35.913774 systemd[1]: Started systemd-logind.service.
Dec 13 02:18:35.951991 env[1647]: time="2024-12-13T02:18:35.951938480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:18:35.953629 env[1647]: time="2024-12-13T02:18:35.953591390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:18:35.958002 env[1647]: time="2024-12-13T02:18:35.957952597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:18:35.958170 env[1647]: time="2024-12-13T02:18:35.958153209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:18:35.958685 env[1647]: time="2024-12-13T02:18:35.958650603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:18:35.962168 env[1647]: time="2024-12-13T02:18:35.962130367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:18:35.962355 env[1647]: time="2024-12-13T02:18:35.962331042Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 02:18:35.962445 env[1647]: time="2024-12-13T02:18:35.962418320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:18:35.962905 env[1647]: time="2024-12-13T02:18:35.962874100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:18:35.963358 env[1647]: time="2024-12-13T02:18:35.963333783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:18:35.963704 env[1647]: time="2024-12-13T02:18:35.963677712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:18:35.963784 env[1647]: time="2024-12-13T02:18:35.963770862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:18:35.963904 env[1647]: time="2024-12-13T02:18:35.963886605Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 02:18:35.963984 env[1647]: time="2024-12-13T02:18:35.963972067Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:18:35.973465 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.978990582Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979082519Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979117073Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979241931Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979266648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979287659Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979322013Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979345993Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979368125Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979402115Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979421412Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979440306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979642296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:18:35.984483 env[1647]: time="2024-12-13T02:18:35.979785501Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.980291973Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.980345395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.980368847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981430186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981464734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981507176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981526533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981546598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981692013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981717035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981737501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.981773837Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.982125133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.982151483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985090 env[1647]: time="2024-12-13T02:18:35.982171861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985635 env[1647]: time="2024-12-13T02:18:35.982204470Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:18:35.985635 env[1647]: time="2024-12-13T02:18:35.982249242Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 02:18:35.985635 env[1647]: time="2024-12-13T02:18:35.982265822Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:18:35.985635 env[1647]: time="2024-12-13T02:18:35.982288630Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 02:18:35.985635 env[1647]: time="2024-12-13T02:18:35.983895157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:18:35.985829 env[1647]: time="2024-12-13T02:18:35.984384489Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.985975349Z" level=info msg="Connect containerd service"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.986056111Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987150747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987308289Z" level=info msg="Start subscribing containerd event"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987366239Z" level=info msg="Start recovering state"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987448487Z" level=info msg="Start event monitor"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987464605Z" level=info msg="Start snapshots syncer"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987478072Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.987491053Z" level=info msg="Start streaming server"
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.988087096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:18:35.992723 env[1647]: time="2024-12-13T02:18:35.988192908Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:18:35.988436 systemd[1]: Started containerd.service.
Dec 13 02:18:36.014588 env[1647]: time="2024-12-13T02:18:36.014543446Z" level=info msg="containerd successfully booted in 0.198396s"
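The containerd lines above are key=value records in which values may be double-quoted (time=, level=, msg=, and so on). Below is a small standard-library sketch of one way to pull fields out of them; the regex is illustrative, not containerd's own format definition:

    import re

    # key=value pairs; values are either a double-quoted string (escapes allowed)
    # or a bare token without spaces.
    PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_containerd_line(line: str) -> dict:
        fields = {}
        for key, value in PAIR.findall(line):
            if value.startswith('"') and value.endswith('"'):
                value = value[1:-1].replace('\\"', '"')
            fields[key] = value
        return fields

    sample = 'time="2024-12-13T02:18:36.014543446Z" level=info msg="containerd successfully booted in 0.198396s"'
    print(parse_containerd_line(sample)["msg"])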
Dec 13 02:18:36.047351 dbus-daemon[1633]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 02:18:36.047542 systemd[1]: Started systemd-hostnamed.service.
Dec 13 02:18:36.051661 dbus-daemon[1633]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1688 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 02:18:36.059334 systemd[1]: Starting polkit.service...
Dec 13 02:18:36.091522 polkitd[1752]: Started polkitd version 121
Dec 13 02:18:36.120766 polkitd[1752]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 02:18:36.121815 polkitd[1752]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 02:18:36.126999 polkitd[1752]: Finished loading, compiling and executing 2 rules
Dec 13 02:18:36.127722 dbus-daemon[1633]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 02:18:36.128163 polkitd[1752]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 02:18:36.127924 systemd[1]: Started polkit.service.
Dec 13 02:18:36.173383 systemd-hostnamed[1688]: Hostname set to <ip-172-31-16-8> (transient)
Dec 13 02:18:36.173517 systemd-resolved[1595]: System hostname changed to 'ip-172-31-16-8'.
Dec 13 02:18:36.331644 coreos-metadata[1632]: Dec 13 02:18:36.330 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 02:18:36.348735 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Create new startup processor
Dec 13 02:18:36.350687 coreos-metadata[1632]: Dec 13 02:18:36.350 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Dec 13 02:18:36.351667 coreos-metadata[1632]: Dec 13 02:18:36.351 INFO Fetch successful
Dec 13 02:18:36.351667 coreos-metadata[1632]: Dec 13 02:18:36.351 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 02:18:36.352750 coreos-metadata[1632]: Dec 13 02:18:36.352 INFO Fetch successful
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing bookkeeping folders
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO removing the completed state files
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing bookkeeping folders for long running plugins
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing healthcheck folders for long running plugins
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing locations for inventory plugin
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing default location for custom inventory
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing default location for file inventory
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Initializing default location for role inventory
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Init the cloudwatchlogs publisher
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:runDocument
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 02:18:36.358323 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 02:18:36.358880 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO OS: linux, Arch: amd64
Dec 13 02:18:36.359212 unknown[1632]: wrote ssh authorized keys file for user: core
Dec 13 02:18:36.360710 amazon-ssm-agent[1630]: datastore file /var/lib/amazon/ssm/i-0c009d9872b7f1375/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 02:18:36.392286 update-ssh-keys[1806]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:18:36.393207 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 02:18:36.452192 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 02:18:36.551181 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 02:18:36.647026 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 02:18:36.744908 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0c009d9872b7f1375, requestId: a5e80ec8-d757-4c1b-b812-875475b9dbf1
Dec 13 02:18:36.802959 locksmithd[1699]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:18:36.839667 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 02:18:36.937115 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 02:18:37.032185 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 02:18:37.127713 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] Starting message polling
Dec 13 02:18:37.224036 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 02:18:37.319659 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [instanceID=i-0c009d9872b7f1375] Starting association polling
Dec 13 02:18:37.415530 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 02:18:37.511693 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 02:18:37.607860 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 02:18:37.704916 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 02:18:37.801626 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 02:18:37.811697 systemd[1]: Started kubelet.service.
Dec 13 02:18:37.898422 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] listening reply.
Dec 13 02:18:37.995445 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 02:18:38.034067 sshd_keygen[1672]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:18:38.080945 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:18:38.086363 systemd[1]: Starting issuegen.service...
Dec 13 02:18:38.093250 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [OfflineService] Starting document processing engine...
Dec 13 02:18:38.099289 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:18:38.099518 systemd[1]: Finished issuegen.service.
Dec 13 02:18:38.103686 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:18:38.116609 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:18:38.119516 systemd[1]: Started getty@tty1.service.
Dec 13 02:18:38.122706 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 02:18:38.124451 systemd[1]: Reached target getty.target.
Dec 13 02:18:38.125841 systemd[1]: Reached target multi-user.target.
Dec 13 02:18:38.129998 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:18:38.144050 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:18:38.144328 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:18:38.145793 systemd[1]: Startup finished in 813ms (kernel) + 6.993s (initrd) + 10.303s (userspace) = 18.110s.
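Summing the three rounded components above gives 18.109 s rather than the printed 18.110 s; the total is presumably formatted from the unrounded timestamps, so a millisecond of drift between the parts and the sum is expected:

    # Arithmetic check of the startup summary line above.
    kernel, initrd, userspace = 0.813, 6.993, 10.303
    print(f"{kernel + initrd + userspace:.3f} s")  # -> 18.109 s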
Dec 13 02:18:38.191296 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 02:18:38.289907 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 02:18:38.387601 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [OfflineService] Starting message polling
Dec 13 02:18:38.485687 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [OfflineService] Starting send replies to MDS
Dec 13 02:18:38.583929 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 02:18:38.682384 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 02:18:38.781215 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 02:18:38.880030 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 02:18:38.975831 kubelet[1827]: E1213 02:18:38.975689    1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:18:38.978305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:18:38.978477 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:18:38.978760 systemd[1]: kubelet.service: Consumed 1.203s CPU time.
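This first kubelet start fails only because /var/lib/kubelet/config.yaml has not been written yet; the later start further down runs with a config file in place (its log shows CgroupDriver "systemd"). As a hedged illustration, and not the file this node actually used, a minimal KubeletConfiguration could be written like this (field names follow the kubelet.config.k8s.io/v1beta1 schema):

    # Hypothetical sketch: create a minimal kubelet config so the start above
    # would not fail on a missing file. Values are illustrative assumptions.
    import pathlib

    minimal_config = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",  # matches the CgroupDriver in the later kubelet log
    ]) + "\n"

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(minimal_config)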
Dec 13 02:18:38.979686 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 02:18:39.078926 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 02:18:39.178373 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 02:18:39.278029 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c009d9872b7f1375?role=subscribe&stream=input
Dec 13 02:18:39.377800 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c009d9872b7f1375?role=subscribe&stream=input
Dec 13 02:18:39.477824 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 02:18:39.578101 amazon-ssm-agent[1630]: 2024-12-13 02:18:36 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 02:18:43.890794 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:18:43.892629 systemd[1]: Started sshd@0-172.31.16.8:22-139.178.68.195:43194.service.
Dec 13 02:18:44.092108 sshd[1848]: Accepted publickey for core from 139.178.68.195 port 43194 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:44.095027 sshd[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:44.115124 systemd[1]: Created slice user-500.slice.
Dec 13 02:18:44.125254 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:18:44.133802 systemd-logind[1641]: New session 1 of user core.
Dec 13 02:18:44.159752 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:18:44.165977 systemd[1]: Starting user@500.service...
Dec 13 02:18:44.172971 (systemd)[1851]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:44.302374 systemd[1851]: Queued start job for default target default.target.
Dec 13 02:18:44.303269 systemd[1851]: Reached target paths.target.
Dec 13 02:18:44.303306 systemd[1851]: Reached target sockets.target.
Dec 13 02:18:44.303326 systemd[1851]: Reached target timers.target.
Dec 13 02:18:44.303342 systemd[1851]: Reached target basic.target.
Dec 13 02:18:44.303400 systemd[1851]: Reached target default.target.
Dec 13 02:18:44.303440 systemd[1851]: Startup finished in 122ms.
Dec 13 02:18:44.304195 systemd[1]: Started user@500.service.
Dec 13 02:18:44.305293 systemd[1]: Started session-1.scope.
Dec 13 02:18:44.460717 systemd[1]: Started sshd@1-172.31.16.8:22-139.178.68.195:43196.service.
Dec 13 02:18:44.631622 sshd[1860]: Accepted publickey for core from 139.178.68.195 port 43196 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:44.633880 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:44.646494 systemd-logind[1641]: New session 2 of user core.
Dec 13 02:18:44.648074 systemd[1]: Started session-2.scope.
Dec 13 02:18:44.774632 sshd[1860]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:44.778110 systemd[1]: sshd@1-172.31.16.8:22-139.178.68.195:43196.service: Deactivated successfully.
Dec 13 02:18:44.779491 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 02:18:44.780649 systemd-logind[1641]: Session 2 logged out. Waiting for processes to exit.
Dec 13 02:18:44.781871 systemd-logind[1641]: Removed session 2.
Dec 13 02:18:44.802305 systemd[1]: Started sshd@2-172.31.16.8:22-139.178.68.195:43204.service.
Dec 13 02:18:44.967159 sshd[1866]: Accepted publickey for core from 139.178.68.195 port 43204 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:44.968713 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:44.975880 systemd-logind[1641]: New session 3 of user core.
Dec 13 02:18:44.976597 systemd[1]: Started session-3.scope.
Dec 13 02:18:45.103156 sshd[1866]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:45.108308 systemd[1]: sshd@2-172.31.16.8:22-139.178.68.195:43204.service: Deactivated successfully.
Dec 13 02:18:45.109241 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 02:18:45.110744 systemd-logind[1641]: Session 3 logged out. Waiting for processes to exit.
Dec 13 02:18:45.113621 systemd-logind[1641]: Removed session 3.
Dec 13 02:18:45.129178 systemd[1]: Started sshd@3-172.31.16.8:22-139.178.68.195:43208.service.
Dec 13 02:18:45.299845 sshd[1872]: Accepted publickey for core from 139.178.68.195 port 43208 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:45.301974 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:45.308163 systemd-logind[1641]: New session 4 of user core.
Dec 13 02:18:45.308744 systemd[1]: Started session-4.scope.
Dec 13 02:18:45.438791 sshd[1872]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:45.449455 systemd[1]: sshd@3-172.31.16.8:22-139.178.68.195:43208.service: Deactivated successfully.
Dec 13 02:18:45.450722 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:18:45.451865 systemd-logind[1641]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:18:45.453012 systemd-logind[1641]: Removed session 4.
Dec 13 02:18:45.468518 systemd[1]: Started sshd@4-172.31.16.8:22-139.178.68.195:43220.service.
Dec 13 02:18:45.640293 sshd[1878]: Accepted publickey for core from 139.178.68.195 port 43220 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:45.647133 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:45.662582 systemd-logind[1641]: New session 5 of user core.
Dec 13 02:18:45.662659 systemd[1]: Started session-5.scope.
Dec 13 02:18:45.790714 sudo[1881]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:18:45.792006 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:18:45.812811 systemd[1]: Starting coreos-metadata.service...
Dec 13 02:18:45.913853 coreos-metadata[1885]: Dec 13 02:18:45.913 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 02:18:45.915011 coreos-metadata[1885]: Dec 13 02:18:45.914 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Dec 13 02:18:45.915816 coreos-metadata[1885]: Dec 13 02:18:45.915 INFO Fetch successful
Dec 13 02:18:45.916273 coreos-metadata[1885]: Dec 13 02:18:45.915 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Dec 13 02:18:45.916698 coreos-metadata[1885]: Dec 13 02:18:45.916 INFO Fetch successful
Dec 13 02:18:45.916763 coreos-metadata[1885]: Dec 13 02:18:45.916 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Dec 13 02:18:45.917478 coreos-metadata[1885]: Dec 13 02:18:45.917 INFO Fetch successful
Dec 13 02:18:45.917541 coreos-metadata[1885]: Dec 13 02:18:45.917 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Dec 13 02:18:45.918203 coreos-metadata[1885]: Dec 13 02:18:45.918 INFO Fetch successful
Dec 13 02:18:45.918319 coreos-metadata[1885]: Dec 13 02:18:45.918 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Dec 13 02:18:45.918728 coreos-metadata[1885]: Dec 13 02:18:45.918 INFO Fetch successful
Dec 13 02:18:45.918828 coreos-metadata[1885]: Dec 13 02:18:45.918 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Dec 13 02:18:45.920162 coreos-metadata[1885]: Dec 13 02:18:45.920 INFO Fetch successful
Dec 13 02:18:45.920287 coreos-metadata[1885]: Dec 13 02:18:45.920 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Dec 13 02:18:45.920903 coreos-metadata[1885]: Dec 13 02:18:45.920 INFO Fetch successful
Dec 13 02:18:45.921010 coreos-metadata[1885]: Dec 13 02:18:45.920 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Dec 13 02:18:45.921503 coreos-metadata[1885]: Dec 13 02:18:45.921 INFO Fetch successful
Dec 13 02:18:45.932660 systemd[1]: Finished coreos-metadata.service.
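coreos-metadata above first PUTs http://169.254.169.254/latest/api/token and then GETs the 2019-10-01 meta-data paths. Here is a standard-library sketch of that IMDSv2-style flow; the header names are the documented EC2 ones, and the code only works from inside an instance:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Obtain a session token (IMDSv2), as in the "Putting .../latest/api/token" line.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    def fetch(path: str) -> str:
        # Fetch a metadata path with the token attached, mirroring the log above.
        req = urllib.request.Request(
            f"{IMDS}/2019-10-01/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    for path in ("instance-id", "instance-type", "local-ipv4", "public-ipv4"):
        print(path, "=", fetch(path))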
Dec 13 02:18:47.596662 systemd[1]: Stopped kubelet.service.
Dec 13 02:18:47.596979 systemd[1]: kubelet.service: Consumed 1.203s CPU time.
Dec 13 02:18:47.599973 systemd[1]: Starting kubelet.service...
Dec 13 02:18:47.637721 systemd[1]: Reloading.
Dec 13 02:18:47.782571 /usr/lib/systemd/system-generators/torcx-generator[1945]: time="2024-12-13T02:18:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:18:47.791355 /usr/lib/systemd/system-generators/torcx-generator[1945]: time="2024-12-13T02:18:47Z" level=info msg="torcx already run"
Dec 13 02:18:47.900852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:18:47.900875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:18:47.923211 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:18:48.051620 systemd[1]: Stopping kubelet.service...
Dec 13 02:18:48.052376 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:18:48.052597 systemd[1]: Stopped kubelet.service.
Dec 13 02:18:48.055049 systemd[1]: Starting kubelet.service...
Dec 13 02:18:48.250084 systemd[1]: Started kubelet.service.
Dec 13 02:18:48.316959 kubelet[2004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:18:48.316959 kubelet[2004]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:18:48.316959 kubelet[2004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:18:48.317454 kubelet[2004]: I1213 02:18:48.317024    2004 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:18:49.176613 kubelet[2004]: I1213 02:18:49.176574    2004 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 02:18:49.176613 kubelet[2004]: I1213 02:18:49.176604    2004 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:18:49.176975 kubelet[2004]: I1213 02:18:49.176953    2004 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 02:18:49.201404 kubelet[2004]: I1213 02:18:49.201366    2004 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:18:49.216695 kubelet[2004]: I1213 02:18:49.216667    2004 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 02:18:49.216945 kubelet[2004]: I1213 02:18:49.216896    2004 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:18:49.217142 kubelet[2004]: I1213 02:18:49.216936    2004 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.16.8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:18:49.217300 kubelet[2004]: I1213 02:18:49.217159    2004 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:18:49.217300 kubelet[2004]: I1213 02:18:49.217173    2004 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:18:49.217435 kubelet[2004]: I1213 02:18:49.217330    2004 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:18:49.218477 kubelet[2004]: I1213 02:18:49.218456    2004 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 02:18:49.218477 kubelet[2004]: I1213 02:18:49.218480    2004 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:18:49.218613 kubelet[2004]: I1213 02:18:49.218508    2004 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:18:49.218613 kubelet[2004]: I1213 02:18:49.218529    2004 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:18:49.218965 kubelet[2004]: E1213 02:18:49.218934    2004 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:49.219041 kubelet[2004]: E1213 02:18:49.218988    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:49.231262 kubelet[2004]: I1213 02:18:49.231236    2004 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 02:18:49.233989 kubelet[2004]: I1213 02:18:49.233945    2004 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:18:49.234134 kubelet[2004]: W1213 02:18:49.234026    2004 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:18:49.234692 kubelet[2004]: I1213 02:18:49.234669    2004 server.go:1264] "Started kubelet"
Dec 13 02:18:49.250189 kubelet[2004]: I1213 02:18:49.250143    2004 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:18:49.254993 kubelet[2004]: I1213 02:18:49.254918    2004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:18:49.255378 kubelet[2004]: I1213 02:18:49.255357    2004 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:18:49.259701 kubelet[2004]: W1213 02:18:49.259186    2004 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.16.8" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 02:18:49.259701 kubelet[2004]: E1213 02:18:49.259246    2004 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.8" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 02:18:49.263079 kubelet[2004]: I1213 02:18:49.263049    2004 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 02:18:49.268118 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 02:18:49.268750 kubelet[2004]: I1213 02:18:49.268731    2004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:18:49.269894 kubelet[2004]: E1213 02:18:49.269852    2004 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:18:49.270280 kubelet[2004]: I1213 02:18:49.270153    2004 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:18:49.271669 kubelet[2004]: I1213 02:18:49.271647    2004 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 02:18:49.272333 kubelet[2004]: I1213 02:18:49.271710    2004 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 02:18:49.275243 kubelet[2004]: E1213 02:18:49.275191    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:49.278060 kubelet[2004]: I1213 02:18:49.278031    2004 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:18:49.278060 kubelet[2004]: I1213 02:18:49.278051    2004 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:18:49.278263 kubelet[2004]: I1213 02:18:49.278145    2004 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:18:49.314450 kubelet[2004]: E1213 02:18:49.314417    2004 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.8\" not found" node="172.31.16.8"
Dec 13 02:18:49.315516 kubelet[2004]: I1213 02:18:49.315465    2004 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:18:49.315977 kubelet[2004]: I1213 02:18:49.315958    2004 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:18:49.316148 kubelet[2004]: I1213 02:18:49.316137    2004 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:18:49.319949 kubelet[2004]: I1213 02:18:49.319927    2004 policy_none.go:49] "None policy: Start"
Dec 13 02:18:49.321252 kubelet[2004]: I1213 02:18:49.321235    2004 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:18:49.321386 kubelet[2004]: I1213 02:18:49.321378    2004 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:18:49.334976 systemd[1]: Created slice kubepods.slice.
Dec 13 02:18:49.343822 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 02:18:49.359833 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 02:18:49.362816 kubelet[2004]: I1213 02:18:49.362786    2004 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:18:49.363107 kubelet[2004]: I1213 02:18:49.363058    2004 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 02:18:49.363229 kubelet[2004]: I1213 02:18:49.363207    2004 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:18:49.366419 kubelet[2004]: E1213 02:18:49.366394    2004 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.8\" not found"
Dec 13 02:18:49.378694 kubelet[2004]: I1213 02:18:49.378666    2004 kubelet_node_status.go:73] "Attempting to register node" node="172.31.16.8"
Dec 13 02:18:49.392846 kubelet[2004]: I1213 02:18:49.392816    2004 kubelet_node_status.go:76] "Successfully registered node" node="172.31.16.8"
Dec 13 02:18:49.409313 kubelet[2004]: E1213 02:18:49.409283    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:49.456099 kubelet[2004]: I1213 02:18:49.455956    2004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:18:49.460027 kubelet[2004]: I1213 02:18:49.459993    2004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:18:49.460027 kubelet[2004]: I1213 02:18:49.460025    2004 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:18:49.460198 kubelet[2004]: I1213 02:18:49.460052    2004 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 02:18:49.460198 kubelet[2004]: E1213 02:18:49.460112    2004 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 02:18:49.510162 kubelet[2004]: E1213 02:18:49.510127    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:49.611094 kubelet[2004]: E1213 02:18:49.611048    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:49.712159 kubelet[2004]: E1213 02:18:49.712025    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:49.778997 sudo[1881]: pam_unix(sudo:session): session closed for user root
Dec 13 02:18:49.803714 sshd[1878]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:49.806911 systemd[1]: sshd@4-172.31.16.8:22-139.178.68.195:43220.service: Deactivated successfully.
Dec 13 02:18:49.808157 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:18:49.809337 systemd-logind[1641]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:18:49.810789 systemd-logind[1641]: Removed session 5.
Dec 13 02:18:49.812760 kubelet[2004]: E1213 02:18:49.812733    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:49.915216 kubelet[2004]: E1213 02:18:49.915167    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:50.015899 kubelet[2004]: E1213 02:18:50.015768    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:50.116588 kubelet[2004]: E1213 02:18:50.116545    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:50.190279 kubelet[2004]: I1213 02:18:50.190233    2004 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 02:18:50.190476 kubelet[2004]: W1213 02:18:50.190456    2004 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:18:50.190561 kubelet[2004]: W1213 02:18:50.190502    2004 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:18:50.190561 kubelet[2004]: W1213 02:18:50.190545    2004 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:18:50.216683 kubelet[2004]: E1213 02:18:50.216637    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:50.220030 kubelet[2004]: E1213 02:18:50.219949    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:50.317599 kubelet[2004]: E1213 02:18:50.317474    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:50.418406 kubelet[2004]: E1213 02:18:50.418359    2004 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.16.8\" not found"
Dec 13 02:18:50.519690 kubelet[2004]: I1213 02:18:50.519659    2004 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 02:18:50.520106 env[1647]: time="2024-12-13T02:18:50.520055172Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:18:50.520514 kubelet[2004]: I1213 02:18:50.520267    2004 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
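[Editor's note] The two entries above record the node's pod CIDR being set to 192.168.1.0/24 and pushed to the runtime through CRI. As a quick illustration of what that range provides (a standard-library sketch, not part of any logged component; the CIDR value is taken from the log):

    import ipaddress

    # 192.168.1.0/24 is the pod CIDR the kubelet just applied for this node.
    pod_cidr = ipaddress.ip_network("192.168.1.0/24")
    print(pod_cidr.num_addresses)                 # 256 addresses in the block
    print(next(pod_cidr.hosts()), pod_cidr[-2])   # first and last host addresses in the range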
Dec 13 02:18:51.220883 kubelet[2004]: E1213 02:18:51.220840    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:51.220883 kubelet[2004]: I1213 02:18:51.220855    2004 apiserver.go:52] "Watching apiserver"
Dec 13 02:18:51.226931 kubelet[2004]: I1213 02:18:51.226885    2004 topology_manager.go:215] "Topology Admit Handler" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" podNamespace="kube-system" podName="cilium-k2jz8"
Dec 13 02:18:51.227104 kubelet[2004]: I1213 02:18:51.227049    2004 topology_manager.go:215] "Topology Admit Handler" podUID="d2c53912-c3a8-49b1-98c6-c5bfaa57c842" podNamespace="kube-system" podName="kube-proxy-vhqzg"
Dec 13 02:18:51.235891 systemd[1]: Created slice kubepods-besteffort-podd2c53912_c3a8_49b1_98c6_c5bfaa57c842.slice.
Dec 13 02:18:51.250154 systemd[1]: Created slice kubepods-burstable-pod6898a2ee_0663_4225_875d_2b64cfe1295b.slice.
Dec 13 02:18:51.272654 kubelet[2004]: I1213 02:18:51.272620    2004 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 02:18:51.290278 kubelet[2004]: I1213 02:18:51.290236    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-bpf-maps\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290484 kubelet[2004]: I1213 02:18:51.290308    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-config-path\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290484 kubelet[2004]: I1213 02:18:51.290338    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2c53912-c3a8-49b1-98c6-c5bfaa57c842-lib-modules\") pod \"kube-proxy-vhqzg\" (UID: \"d2c53912-c3a8-49b1-98c6-c5bfaa57c842\") " pod="kube-system/kube-proxy-vhqzg"
Dec 13 02:18:51.290484 kubelet[2004]: I1213 02:18:51.290367    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-run\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290484 kubelet[2004]: I1213 02:18:51.290388    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-cgroup\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290484 kubelet[2004]: I1213 02:18:51.290407    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-lib-modules\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290484 kubelet[2004]: I1213 02:18:51.290427    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-xtables-lock\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290947 kubelet[2004]: I1213 02:18:51.290447    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-hubble-tls\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290947 kubelet[2004]: I1213 02:18:51.290475    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52n7\" (UniqueName: \"kubernetes.io/projected/d2c53912-c3a8-49b1-98c6-c5bfaa57c842-kube-api-access-z52n7\") pod \"kube-proxy-vhqzg\" (UID: \"d2c53912-c3a8-49b1-98c6-c5bfaa57c842\") " pod="kube-system/kube-proxy-vhqzg"
Dec 13 02:18:51.290947 kubelet[2004]: I1213 02:18:51.290560    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-hostproc\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290947 kubelet[2004]: I1213 02:18:51.290592    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cni-path\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290947 kubelet[2004]: I1213 02:18:51.290735    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6898a2ee-0663-4225-875d-2b64cfe1295b-clustermesh-secrets\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.290947 kubelet[2004]: I1213 02:18:51.290765    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-net\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.291192 kubelet[2004]: I1213 02:18:51.290788    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-kernel\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.291192 kubelet[2004]: I1213 02:18:51.290812    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsbmd\" (UniqueName: \"kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-kube-api-access-dsbmd\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.291192 kubelet[2004]: I1213 02:18:51.290837    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-etc-cni-netd\") pod \"cilium-k2jz8\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") " pod="kube-system/cilium-k2jz8"
Dec 13 02:18:51.291192 kubelet[2004]: I1213 02:18:51.290862    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2c53912-c3a8-49b1-98c6-c5bfaa57c842-kube-proxy\") pod \"kube-proxy-vhqzg\" (UID: \"d2c53912-c3a8-49b1-98c6-c5bfaa57c842\") " pod="kube-system/kube-proxy-vhqzg"
Dec 13 02:18:51.291192 kubelet[2004]: I1213 02:18:51.290887    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2c53912-c3a8-49b1-98c6-c5bfaa57c842-xtables-lock\") pod \"kube-proxy-vhqzg\" (UID: \"d2c53912-c3a8-49b1-98c6-c5bfaa57c842\") " pod="kube-system/kube-proxy-vhqzg"
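[Editor's note] Each reconciler_common.go line above registers one volume for the cilium-k2jz8 or kube-proxy-vhqzg pod, with a UniqueName of the form kubernetes.io/<plugin>/<pod-uid>-<volume-name>. A small illustrative sketch, reusing a few UniqueName values copied from the entries above, that groups such entries by volume plugin:

    from collections import Counter

    # UniqueName values copied from the reconciler entries above (abbreviated set).
    unique_names = [
        "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-bpf-maps",
        "kubernetes.io/configmap/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-config-path",
        "kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-hubble-tls",
        "kubernetes.io/secret/6898a2ee-0663-4225-875d-2b64cfe1295b-clustermesh-secrets",
        "kubernetes.io/configmap/d2c53912-c3a8-49b1-98c6-c5bfaa57c842-kube-proxy",
    ]
    # The plugin type is the second path segment of each UniqueName.
    print(Counter(name.split("/")[1] for name in unique_names))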
Dec 13 02:18:51.549124 env[1647]: time="2024-12-13T02:18:51.548332786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhqzg,Uid:d2c53912-c3a8-49b1-98c6-c5bfaa57c842,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:51.560840 env[1647]: time="2024-12-13T02:18:51.560798310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k2jz8,Uid:6898a2ee-0663-4225-875d-2b64cfe1295b,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:52.119010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999186424.mount: Deactivated successfully.
Dec 13 02:18:52.128727 env[1647]: time="2024-12-13T02:18:52.128630621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.130665 env[1647]: time="2024-12-13T02:18:52.130620895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.135429 env[1647]: time="2024-12-13T02:18:52.135381551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.138939 env[1647]: time="2024-12-13T02:18:52.138886670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.140731 env[1647]: time="2024-12-13T02:18:52.140685786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.141787 env[1647]: time="2024-12-13T02:18:52.141748578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.142641 env[1647]: time="2024-12-13T02:18:52.142610042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.148161 env[1647]: time="2024-12-13T02:18:52.148114195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:52.183620 env[1647]: time="2024-12-13T02:18:52.183470158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:52.183815 env[1647]: time="2024-12-13T02:18:52.183631759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:52.183815 env[1647]: time="2024-12-13T02:18:52.183650749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:52.183950 env[1647]: time="2024-12-13T02:18:52.183871014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada pid=2056 runtime=io.containerd.runc.v2
Dec 13 02:18:52.190009 env[1647]: time="2024-12-13T02:18:52.189915861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:52.190009 env[1647]: time="2024-12-13T02:18:52.189973149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:52.190444 env[1647]: time="2024-12-13T02:18:52.189988239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:52.190444 env[1647]: time="2024-12-13T02:18:52.190135900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f94d4b48d32792fa677f83136c4ccc92ff45b83df536bb9df93f8dd7b490844 pid=2070 runtime=io.containerd.runc.v2
Dec 13 02:18:52.222031 kubelet[2004]: E1213 02:18:52.221955    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:52.228630 systemd[1]: Started cri-containerd-887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada.scope.
Dec 13 02:18:52.248663 systemd[1]: Started cri-containerd-6f94d4b48d32792fa677f83136c4ccc92ff45b83df536bb9df93f8dd7b490844.scope.
Dec 13 02:18:52.280073 env[1647]: time="2024-12-13T02:18:52.279652034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k2jz8,Uid:6898a2ee-0663-4225-875d-2b64cfe1295b,Namespace:kube-system,Attempt:0,} returns sandbox id \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\""
Dec 13 02:18:52.284247 env[1647]: time="2024-12-13T02:18:52.284187323Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 02:18:52.308303 env[1647]: time="2024-12-13T02:18:52.308166063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhqzg,Uid:d2c53912-c3a8-49b1-98c6-c5bfaa57c842,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f94d4b48d32792fa677f83136c4ccc92ff45b83df536bb9df93f8dd7b490844\""
Dec 13 02:18:53.223258 kubelet[2004]: E1213 02:18:53.222432    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:54.222994 kubelet[2004]: E1213 02:18:54.222941    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:54.232057 amazon-ssm-agent[1630]: 2024-12-13 02:18:54 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 02:18:55.223533 kubelet[2004]: E1213 02:18:55.223438    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:56.224451 kubelet[2004]: E1213 02:18:56.224371    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:57.225008 kubelet[2004]: E1213 02:18:57.224923    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:58.225307 kubelet[2004]: E1213 02:18:58.225190    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:18:58.276073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870693862.mount: Deactivated successfully.
Dec 13 02:18:59.225868 kubelet[2004]: E1213 02:18:59.225777    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:00.226981 kubelet[2004]: E1213 02:19:00.226927    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:01.227769 kubelet[2004]: E1213 02:19:01.227634    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:02.230039 kubelet[2004]: E1213 02:19:02.229976    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:02.425484 env[1647]: time="2024-12-13T02:19:02.425373307Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:02.427655 env[1647]: time="2024-12-13T02:19:02.427611599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:02.429611 env[1647]: time="2024-12-13T02:19:02.429569039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:02.430246 env[1647]: time="2024-12-13T02:19:02.430199604Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
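[Editor's note] The PullImage entries above reference the Cilium image by tag and digest together, and the pull resolves to a local image ID. A tiny sketch using plain string handling (the reference string is copied from the log; the splitting logic is illustrative only and would need more care for registries with a port) to separate repository, tag, and digest:

    # Image reference as logged by the PullImage call above.
    ref = ("quay.io/cilium/cilium:v1.12.5"
           "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    name_and_tag, _, digest = ref.partition("@")      # the digest pins the exact content
    repository, _, tag = name_and_tag.rpartition(":")
    print(repository)  # quay.io/cilium/cilium
    print(tag)         # v1.12.5
    print(digest)      # sha256:06ce2b0a...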
Dec 13 02:19:02.432580 env[1647]: time="2024-12-13T02:19:02.432533578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 02:19:02.434811 env[1647]: time="2024-12-13T02:19:02.434766527Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:19:02.462399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444006034.mount: Deactivated successfully.
Dec 13 02:19:02.474471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608987255.mount: Deactivated successfully.
Dec 13 02:19:02.490084 env[1647]: time="2024-12-13T02:19:02.489603846Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\""
Dec 13 02:19:02.492030 env[1647]: time="2024-12-13T02:19:02.491760391Z" level=info msg="StartContainer for \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\""
Dec 13 02:19:02.520162 systemd[1]: Started cri-containerd-d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3.scope.
Dec 13 02:19:02.573381 env[1647]: time="2024-12-13T02:19:02.565723570Z" level=info msg="StartContainer for \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\" returns successfully"
Dec 13 02:19:02.578886 systemd[1]: cri-containerd-d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3.scope: Deactivated successfully.
Dec 13 02:19:02.679617 env[1647]: time="2024-12-13T02:19:02.679559990Z" level=info msg="shim disconnected" id=d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3
Dec 13 02:19:02.679617 env[1647]: time="2024-12-13T02:19:02.679612758Z" level=warning msg="cleaning up after shim disconnected" id=d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3 namespace=k8s.io
Dec 13 02:19:02.679617 env[1647]: time="2024-12-13T02:19:02.679626528Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:02.695491 env[1647]: time="2024-12-13T02:19:02.695436600Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2182 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:03.231160 kubelet[2004]: E1213 02:19:03.231087    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:03.456868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3-rootfs.mount: Deactivated successfully.
Dec 13 02:19:03.502774 env[1647]: time="2024-12-13T02:19:03.502392811Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:19:03.519648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469315291.mount: Deactivated successfully.
Dec 13 02:19:03.539578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064055277.mount: Deactivated successfully.
Dec 13 02:19:03.547289 env[1647]: time="2024-12-13T02:19:03.547212283Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\""
Dec 13 02:19:03.548739 env[1647]: time="2024-12-13T02:19:03.548691719Z" level=info msg="StartContainer for \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\""
Dec 13 02:19:03.610365 systemd[1]: Started cri-containerd-8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b.scope.
Dec 13 02:19:03.692748 env[1647]: time="2024-12-13T02:19:03.692690586Z" level=info msg="StartContainer for \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\" returns successfully"
Dec 13 02:19:03.707321 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:19:03.708148 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:19:03.708795 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 02:19:03.712930 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:19:03.723343 systemd[1]: cri-containerd-8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b.scope: Deactivated successfully.
Dec 13 02:19:03.734945 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:19:03.844976 env[1647]: time="2024-12-13T02:19:03.843974486Z" level=info msg="shim disconnected" id=8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b
Dec 13 02:19:03.845366 env[1647]: time="2024-12-13T02:19:03.845321860Z" level=warning msg="cleaning up after shim disconnected" id=8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b namespace=k8s.io
Dec 13 02:19:03.845486 env[1647]: time="2024-12-13T02:19:03.845468111Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:03.859986 env[1647]: time="2024-12-13T02:19:03.859939116Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2245 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:04.231318 kubelet[2004]: E1213 02:19:04.231216    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:04.454727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425093252.mount: Deactivated successfully.
Dec 13 02:19:04.505474 env[1647]: time="2024-12-13T02:19:04.505110667Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:19:04.536968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004701258.mount: Deactivated successfully.
Dec 13 02:19:04.544371 env[1647]: time="2024-12-13T02:19:04.544326064Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\""
Dec 13 02:19:04.548603 env[1647]: time="2024-12-13T02:19:04.548557930Z" level=info msg="StartContainer for \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\""
Dec 13 02:19:04.599009 systemd[1]: Started cri-containerd-65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032.scope.
Dec 13 02:19:04.641266 env[1647]: time="2024-12-13T02:19:04.641210252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:04.644541 env[1647]: time="2024-12-13T02:19:04.644499633Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:04.647204 env[1647]: time="2024-12-13T02:19:04.647164575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:04.656252 env[1647]: time="2024-12-13T02:19:04.651814904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 02:19:04.656252 env[1647]: time="2024-12-13T02:19:04.655451925Z" level=info msg="CreateContainer within sandbox \"6f94d4b48d32792fa677f83136c4ccc92ff45b83df536bb9df93f8dd7b490844\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:19:04.656549 env[1647]: time="2024-12-13T02:19:04.656518093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:04.658717 systemd[1]: cri-containerd-65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032.scope: Deactivated successfully.
Dec 13 02:19:04.660156 env[1647]: time="2024-12-13T02:19:04.660021746Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6898a2ee_0663_4225_875d_2b64cfe1295b.slice/cri-containerd-65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032.scope/memory.events\": no such file or directory"
Dec 13 02:19:04.665080 env[1647]: time="2024-12-13T02:19:04.665032561Z" level=info msg="StartContainer for \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\" returns successfully"
Dec 13 02:19:04.691127 env[1647]: time="2024-12-13T02:19:04.691072947Z" level=info msg="CreateContainer within sandbox \"6f94d4b48d32792fa677f83136c4ccc92ff45b83df536bb9df93f8dd7b490844\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"855cd4a593fd789cd026af581ee1a3d3456d9837288a1ebec3c094444d482c82\""
Dec 13 02:19:04.692302 env[1647]: time="2024-12-13T02:19:04.692264623Z" level=info msg="StartContainer for \"855cd4a593fd789cd026af581ee1a3d3456d9837288a1ebec3c094444d482c82\""
Dec 13 02:19:04.724091 systemd[1]: Started cri-containerd-855cd4a593fd789cd026af581ee1a3d3456d9837288a1ebec3c094444d482c82.scope.
Dec 13 02:19:04.791881 env[1647]: time="2024-12-13T02:19:04.790730336Z" level=info msg="StartContainer for \"855cd4a593fd789cd026af581ee1a3d3456d9837288a1ebec3c094444d482c82\" returns successfully"
Dec 13 02:19:04.819707 env[1647]: time="2024-12-13T02:19:04.819656054Z" level=info msg="shim disconnected" id=65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032
Dec 13 02:19:04.820205 env[1647]: time="2024-12-13T02:19:04.820166345Z" level=warning msg="cleaning up after shim disconnected" id=65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032 namespace=k8s.io
Dec 13 02:19:04.820360 env[1647]: time="2024-12-13T02:19:04.820340935Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:04.832186 env[1647]: time="2024-12-13T02:19:04.832137547Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2339 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:05.232199 kubelet[2004]: E1213 02:19:05.232155    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:05.456952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032-rootfs.mount: Deactivated successfully.
Dec 13 02:19:05.516589 env[1647]: time="2024-12-13T02:19:05.514655460Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:19:05.564921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191860869.mount: Deactivated successfully.
Dec 13 02:19:05.576425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004636013.mount: Deactivated successfully.
Dec 13 02:19:05.582182 env[1647]: time="2024-12-13T02:19:05.582121647Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\""
Dec 13 02:19:05.582955 env[1647]: time="2024-12-13T02:19:05.582909159Z" level=info msg="StartContainer for \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\""
Dec 13 02:19:05.605444 systemd[1]: Started cri-containerd-4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830.scope.
Dec 13 02:19:05.649831 systemd[1]: cri-containerd-4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830.scope: Deactivated successfully.
Dec 13 02:19:05.659197 env[1647]: time="2024-12-13T02:19:05.659137464Z" level=info msg="StartContainer for \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\" returns successfully"
Dec 13 02:19:05.686529 env[1647]: time="2024-12-13T02:19:05.686472270Z" level=info msg="shim disconnected" id=4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830
Dec 13 02:19:05.686529 env[1647]: time="2024-12-13T02:19:05.686527360Z" level=warning msg="cleaning up after shim disconnected" id=4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830 namespace=k8s.io
Dec 13 02:19:05.686950 env[1647]: time="2024-12-13T02:19:05.686539866Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:05.697646 env[1647]: time="2024-12-13T02:19:05.697447630Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2524 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:06.200722 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 02:19:06.232575 kubelet[2004]: E1213 02:19:06.232532    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:06.562281 env[1647]: time="2024-12-13T02:19:06.561962612Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:19:06.578788 kubelet[2004]: I1213 02:19:06.578719    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhqzg" podStartSLOduration=5.235770113 podStartE2EDuration="17.578705158s" podCreationTimestamp="2024-12-13 02:18:49 +0000 UTC" firstStartedPulling="2024-12-13 02:18:52.30989385 +0000 UTC m=+4.053951402" lastFinishedPulling="2024-12-13 02:19:04.652828878 +0000 UTC m=+16.396886447" observedRunningTime="2024-12-13 02:19:05.572105984 +0000 UTC m=+17.316163555" watchObservedRunningTime="2024-12-13 02:19:06.578705158 +0000 UTC m=+18.322762760"
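[Editor's note] The pod_startup_latency_tracker entry above reports both podStartSLOduration and podStartE2EDuration for kube-proxy-vhqzg, and the figures are self-consistent: the E2E duration equals watchObservedRunningTime minus podCreationTimestamp, and the SLO duration equals that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the m=+ monotonic offsets on the same line). A short arithmetic check using only values copied from that entry (nanoseconds truncated to microseconds, standard library only):

    from datetime import datetime, timezone

    # Timestamps copied from the pod_startup_latency_tracker entry above.
    created = datetime(2024, 12, 13, 2, 18, 49, tzinfo=timezone.utc)
    running = datetime(2024, 12, 13, 2, 19, 6, 578705, tzinfo=timezone.utc)   # 02:19:06.578705158, truncated
    pull_window = 16.396886447 - 4.053951402   # lastFinishedPulling - firstStartedPulling (m=+ offsets)

    e2e = (running - created).total_seconds()
    print(round(e2e, 6))                # ~17.578705  (podStartE2EDuration)
    print(round(e2e - pull_window, 6))  # ~5.23577    (podStartSLOduration)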
Dec 13 02:19:06.598022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114335171.mount: Deactivated successfully.
Dec 13 02:19:06.620413 env[1647]: time="2024-12-13T02:19:06.620328361Z" level=info msg="CreateContainer within sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\""
Dec 13 02:19:06.621676 env[1647]: time="2024-12-13T02:19:06.621641652Z" level=info msg="StartContainer for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\""
Dec 13 02:19:06.649241 systemd[1]: Started cri-containerd-89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2.scope.
Dec 13 02:19:06.716797 env[1647]: time="2024-12-13T02:19:06.716475744Z" level=info msg="StartContainer for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" returns successfully"
Dec 13 02:19:06.945078 kubelet[2004]: I1213 02:19:06.944874    2004 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 02:19:07.233379 kubelet[2004]: E1213 02:19:07.233218    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:07.254279 kernel: Initializing XFRM netlink socket
Dec 13 02:19:07.675947 kubelet[2004]: I1213 02:19:07.675892    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k2jz8" podStartSLOduration=8.526586418 podStartE2EDuration="18.675869703s" podCreationTimestamp="2024-12-13 02:18:49 +0000 UTC" firstStartedPulling="2024-12-13 02:18:52.282893795 +0000 UTC m=+4.026951352" lastFinishedPulling="2024-12-13 02:19:02.432177072 +0000 UTC m=+14.176234637" observedRunningTime="2024-12-13 02:19:07.600501854 +0000 UTC m=+19.344559427" watchObservedRunningTime="2024-12-13 02:19:07.675869703 +0000 UTC m=+19.419927269"
Dec 13 02:19:07.676291 kubelet[2004]: I1213 02:19:07.676264    2004 topology_manager.go:215] "Topology Admit Handler" podUID="375aa75f-2e6b-4cf9-8c5c-65bd9dfe130d" podNamespace="default" podName="nginx-deployment-85f456d6dd-vz2bf"
Dec 13 02:19:07.683943 systemd[1]: Created slice kubepods-besteffort-pod375aa75f_2e6b_4cf9_8c5c_65bd9dfe130d.slice.
Dec 13 02:19:07.737659 kubelet[2004]: I1213 02:19:07.737607    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbvh\" (UniqueName: \"kubernetes.io/projected/375aa75f-2e6b-4cf9-8c5c-65bd9dfe130d-kube-api-access-qvbvh\") pod \"nginx-deployment-85f456d6dd-vz2bf\" (UID: \"375aa75f-2e6b-4cf9-8c5c-65bd9dfe130d\") " pod="default/nginx-deployment-85f456d6dd-vz2bf"
Dec 13 02:19:07.989714 env[1647]: time="2024-12-13T02:19:07.989488183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vz2bf,Uid:375aa75f-2e6b-4cf9-8c5c-65bd9dfe130d,Namespace:default,Attempt:0,}"
Dec 13 02:19:08.234124 kubelet[2004]: E1213 02:19:08.234073    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:08.949179 systemd-networkd[1377]: cilium_host: Link UP
Dec 13 02:19:08.950883 systemd-networkd[1377]: cilium_net: Link UP
Dec 13 02:19:08.952656 systemd-networkd[1377]: cilium_net: Gained carrier
Dec 13 02:19:08.953979 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 02:19:08.954056 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 02:19:08.954164 systemd-networkd[1377]: cilium_host: Gained carrier
Dec 13 02:19:08.959497 (udev-worker)[2415]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:08.959622 (udev-worker)[2683]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:09.143912 systemd-networkd[1377]: cilium_vxlan: Link UP
Dec 13 02:19:09.143922 systemd-networkd[1377]: cilium_vxlan: Gained carrier
Dec 13 02:19:09.219380 kubelet[2004]: E1213 02:19:09.219239    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:09.235128 kubelet[2004]: E1213 02:19:09.235077    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:09.396253 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:19:09.841784 systemd-networkd[1377]: cilium_host: Gained IPv6LL
Dec 13 02:19:09.968454 systemd-networkd[1377]: cilium_net: Gained IPv6LL
Dec 13 02:19:10.235494 kubelet[2004]: E1213 02:19:10.235367    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:10.298928 systemd-networkd[1377]: lxc_health: Link UP
Dec 13 02:19:10.304729 systemd-networkd[1377]: lxc_health: Gained carrier
Dec 13 02:19:10.305340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:19:10.577001 systemd-networkd[1377]: lxcd6ade31db164: Link UP
Dec 13 02:19:10.591830 (udev-worker)[3001]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:10.594289 kernel: eth0: renamed from tmpdbb6c
Dec 13 02:19:10.618677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd6ade31db164: link becomes ready
Dec 13 02:19:10.608598 systemd-networkd[1377]: lxcd6ade31db164: Gained carrier
Dec 13 02:19:10.753543 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL
Dec 13 02:19:11.236697 kubelet[2004]: E1213 02:19:11.236630    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:11.696503 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Dec 13 02:19:11.696837 systemd-networkd[1377]: lxcd6ade31db164: Gained IPv6LL
Dec 13 02:19:12.237482 kubelet[2004]: E1213 02:19:12.237416    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:13.237991 kubelet[2004]: E1213 02:19:13.237939    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:14.239069 kubelet[2004]: E1213 02:19:14.239021    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:15.239503 kubelet[2004]: E1213 02:19:15.239456    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:15.924614 env[1647]: time="2024-12-13T02:19:15.924524986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:19:15.924614 env[1647]: time="2024-12-13T02:19:15.924567831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:19:15.924614 env[1647]: time="2024-12-13T02:19:15.924583014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:19:15.925296 env[1647]: time="2024-12-13T02:19:15.925243271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbb6cf545d24f11e0174336d41e4045ba428764324365b754cb8c54887192e70 pid=3043 runtime=io.containerd.runc.v2
Dec 13 02:19:15.955587 systemd[1]: run-containerd-runc-k8s.io-dbb6cf545d24f11e0174336d41e4045ba428764324365b754cb8c54887192e70-runc.aEIaHj.mount: Deactivated successfully.
Dec 13 02:19:15.960112 systemd[1]: Started cri-containerd-dbb6cf545d24f11e0174336d41e4045ba428764324365b754cb8c54887192e70.scope.
Dec 13 02:19:16.041591 env[1647]: time="2024-12-13T02:19:16.041545480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vz2bf,Uid:375aa75f-2e6b-4cf9-8c5c-65bd9dfe130d,Namespace:default,Attempt:0,} returns sandbox id \"dbb6cf545d24f11e0174336d41e4045ba428764324365b754cb8c54887192e70\""
Dec 13 02:19:16.044951 env[1647]: time="2024-12-13T02:19:16.044911489Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:19:16.240267 kubelet[2004]: E1213 02:19:16.240116    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:17.241183 kubelet[2004]: E1213 02:19:17.241141    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:18.241931 kubelet[2004]: E1213 02:19:18.241861    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:19.099728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073038199.mount: Deactivated successfully.
Dec 13 02:19:19.242877 kubelet[2004]: E1213 02:19:19.242836    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:20.243989 kubelet[2004]: E1213 02:19:20.243947    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:21.020752 env[1647]: time="2024-12-13T02:19:21.020703786Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:21.023420 env[1647]: time="2024-12-13T02:19:21.023356121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:21.026181 env[1647]: time="2024-12-13T02:19:21.026128471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:21.028366 env[1647]: time="2024-12-13T02:19:21.028327152Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:21.029352 env[1647]: time="2024-12-13T02:19:21.029311593Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:19:21.032734 env[1647]: time="2024-12-13T02:19:21.032699768Z" level=info msg="CreateContainer within sandbox \"dbb6cf545d24f11e0174336d41e4045ba428764324365b754cb8c54887192e70\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 02:19:21.040319 update_engine[1642]: I1213 02:19:21.040278  1642 update_attempter.cc:509] Updating boot flags...
Dec 13 02:19:21.051074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153429893.mount: Deactivated successfully.
Dec 13 02:19:21.064898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729701018.mount: Deactivated successfully.
Dec 13 02:19:21.065811 env[1647]: time="2024-12-13T02:19:21.065768702Z" level=info msg="CreateContainer within sandbox \"dbb6cf545d24f11e0174336d41e4045ba428764324365b754cb8c54887192e70\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"57cb9f761f6abab24fe64ebc11e3c71d7bd0d10cd341aaf7eeedfd02346f291b\""
Dec 13 02:19:21.067385 env[1647]: time="2024-12-13T02:19:21.067326322Z" level=info msg="StartContainer for \"57cb9f761f6abab24fe64ebc11e3c71d7bd0d10cd341aaf7eeedfd02346f291b\""
Dec 13 02:19:21.115453 systemd[1]: Started cri-containerd-57cb9f761f6abab24fe64ebc11e3c71d7bd0d10cd341aaf7eeedfd02346f291b.scope.
Dec 13 02:19:21.235724 env[1647]: time="2024-12-13T02:19:21.235571267Z" level=info msg="StartContainer for \"57cb9f761f6abab24fe64ebc11e3c71d7bd0d10cd341aaf7eeedfd02346f291b\" returns successfully"
Dec 13 02:19:21.244727 kubelet[2004]: E1213 02:19:21.244667    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:21.656756 kubelet[2004]: I1213 02:19:21.656681    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-vz2bf" podStartSLOduration=9.66972363 podStartE2EDuration="14.656660635s" podCreationTimestamp="2024-12-13 02:19:07 +0000 UTC" firstStartedPulling="2024-12-13 02:19:16.044203805 +0000 UTC m=+27.788261369" lastFinishedPulling="2024-12-13 02:19:21.031140822 +0000 UTC m=+32.775198374" observedRunningTime="2024-12-13 02:19:21.656660608 +0000 UTC m=+33.400718182" watchObservedRunningTime="2024-12-13 02:19:21.656660635 +0000 UTC m=+33.400718203"
Dec 13 02:19:22.245201 kubelet[2004]: E1213 02:19:22.245139    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:23.245820 kubelet[2004]: E1213 02:19:23.245766    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:24.245937 kubelet[2004]: E1213 02:19:24.245874    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:24.263292 amazon-ssm-agent[1630]: 2024-12-13 02:19:24 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Dec 13 02:19:25.246879 kubelet[2004]: E1213 02:19:25.246824    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:26.247838 kubelet[2004]: E1213 02:19:26.247776    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:27.248809 kubelet[2004]: E1213 02:19:27.248756    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:28.249888 kubelet[2004]: E1213 02:19:28.249849    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:28.370041 kubelet[2004]: I1213 02:19:28.369991    2004 topology_manager.go:215] "Topology Admit Handler" podUID="dc019019-6e3e-45a1-a165-561123536d46" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 02:19:28.395934 systemd[1]: Created slice kubepods-besteffort-poddc019019_6e3e_45a1_a165_561123536d46.slice.
Dec 13 02:19:28.430253 kubelet[2004]: I1213 02:19:28.430200    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/dc019019-6e3e-45a1-a165-561123536d46-data\") pod \"nfs-server-provisioner-0\" (UID: \"dc019019-6e3e-45a1-a165-561123536d46\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:19:28.430433 kubelet[2004]: I1213 02:19:28.430267    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sskdg\" (UniqueName: \"kubernetes.io/projected/dc019019-6e3e-45a1-a165-561123536d46-kube-api-access-sskdg\") pod \"nfs-server-provisioner-0\" (UID: \"dc019019-6e3e-45a1-a165-561123536d46\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:19:28.706618 env[1647]: time="2024-12-13T02:19:28.706562702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dc019019-6e3e-45a1-a165-561123536d46,Namespace:default,Attempt:0,}"
Dec 13 02:19:28.825861 (udev-worker)[3232]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:28.828153 (udev-worker)[3249]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:28.834182 systemd-networkd[1377]: lxcd75e3e3564d4: Link UP
Dec 13 02:19:28.840307 kernel: eth0: renamed from tmp7a1aa
Dec 13 02:19:28.847852 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:19:28.847967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd75e3e3564d4: link becomes ready
Dec 13 02:19:28.848237 systemd-networkd[1377]: lxcd75e3e3564d4: Gained carrier
Dec 13 02:19:29.038029 env[1647]: time="2024-12-13T02:19:29.037868769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:19:29.038029 env[1647]: time="2024-12-13T02:19:29.037916742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:19:29.038029 env[1647]: time="2024-12-13T02:19:29.037932379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:19:29.038873 env[1647]: time="2024-12-13T02:19:29.038801906Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1aa69bcfe22338fe6d6fbe885f99a5502bcb07baa81153b87d61eba4ba77d8 pid=3264 runtime=io.containerd.runc.v2
Dec 13 02:19:29.074376 systemd[1]: Started cri-containerd-7a1aa69bcfe22338fe6d6fbe885f99a5502bcb07baa81153b87d61eba4ba77d8.scope.
Dec 13 02:19:29.133992 env[1647]: time="2024-12-13T02:19:29.133940408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dc019019-6e3e-45a1-a165-561123536d46,Namespace:default,Attempt:0,} returns sandbox id \"7a1aa69bcfe22338fe6d6fbe885f99a5502bcb07baa81153b87d61eba4ba77d8\""
Dec 13 02:19:29.136570 env[1647]: time="2024-12-13T02:19:29.136526797Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 02:19:29.219490 kubelet[2004]: E1213 02:19:29.219443    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:29.250999 kubelet[2004]: E1213 02:19:29.250949    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:29.544934 systemd[1]: run-containerd-runc-k8s.io-7a1aa69bcfe22338fe6d6fbe885f99a5502bcb07baa81153b87d61eba4ba77d8-runc.PveBaf.mount: Deactivated successfully.
Dec 13 02:19:30.251937 kubelet[2004]: E1213 02:19:30.251756    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:30.478283 systemd-networkd[1377]: lxcd75e3e3564d4: Gained IPv6LL
Dec 13 02:19:31.252087 kubelet[2004]: E1213 02:19:31.252017    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:32.252617 kubelet[2004]: E1213 02:19:32.252571    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:32.307545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165868153.mount: Deactivated successfully.
Dec 13 02:19:33.253139 kubelet[2004]: E1213 02:19:33.253074    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:34.253970 kubelet[2004]: E1213 02:19:34.253897    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:35.254822 kubelet[2004]: E1213 02:19:35.254579    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:36.256094 kubelet[2004]: E1213 02:19:36.256047    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:36.330597 env[1647]: time="2024-12-13T02:19:36.330539686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:36.456992 env[1647]: time="2024-12-13T02:19:36.456942290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:36.498719 env[1647]: time="2024-12-13T02:19:36.498661292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:36.556180 env[1647]: time="2024-12-13T02:19:36.554872528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:36.557008 env[1647]: time="2024-12-13T02:19:36.556959967Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 02:19:36.565214 env[1647]: time="2024-12-13T02:19:36.565166918Z" level=info msg="CreateContainer within sandbox \"7a1aa69bcfe22338fe6d6fbe885f99a5502bcb07baa81153b87d61eba4ba77d8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 02:19:36.817672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321254017.mount: Deactivated successfully.
Dec 13 02:19:36.839692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771719375.mount: Deactivated successfully.
Dec 13 02:19:36.886154 env[1647]: time="2024-12-13T02:19:36.886062206Z" level=info msg="CreateContainer within sandbox \"7a1aa69bcfe22338fe6d6fbe885f99a5502bcb07baa81153b87d61eba4ba77d8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"34d273da227c603d111ca6ebdb86b8caf3b89a7c641099eabcba5bfc224ad04f\""
Dec 13 02:19:36.887573 env[1647]: time="2024-12-13T02:19:36.887534804Z" level=info msg="StartContainer for \"34d273da227c603d111ca6ebdb86b8caf3b89a7c641099eabcba5bfc224ad04f\""
Dec 13 02:19:37.100608 systemd[1]: Started cri-containerd-34d273da227c603d111ca6ebdb86b8caf3b89a7c641099eabcba5bfc224ad04f.scope.
Dec 13 02:19:37.256327 kubelet[2004]: E1213 02:19:37.256270    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:37.559026 env[1647]: time="2024-12-13T02:19:37.558979256Z" level=info msg="StartContainer for \"34d273da227c603d111ca6ebdb86b8caf3b89a7c641099eabcba5bfc224ad04f\" returns successfully"
Dec 13 02:19:37.690895 kubelet[2004]: I1213 02:19:37.690822    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.266277454 podStartE2EDuration="9.69078432s" podCreationTimestamp="2024-12-13 02:19:28 +0000 UTC" firstStartedPulling="2024-12-13 02:19:29.135889791 +0000 UTC m=+40.879947342" lastFinishedPulling="2024-12-13 02:19:36.560396645 +0000 UTC m=+48.304454208" observedRunningTime="2024-12-13 02:19:37.689996043 +0000 UTC m=+49.434053616" watchObservedRunningTime="2024-12-13 02:19:37.69078432 +0000 UTC m=+49.434841891"
Dec 13 02:19:38.257180 kubelet[2004]: E1213 02:19:38.257123    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:38.359370 amazon-ssm-agent[1630]: 2024-12-13 02:19:38 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 02:19:39.257641 kubelet[2004]: E1213 02:19:39.257591    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:40.258683 kubelet[2004]: E1213 02:19:40.258630    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:41.259096 kubelet[2004]: E1213 02:19:41.259016    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:42.259967 kubelet[2004]: E1213 02:19:42.259907    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:43.260366 kubelet[2004]: E1213 02:19:43.260309    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:44.260816 kubelet[2004]: E1213 02:19:44.260756    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:45.261829 kubelet[2004]: E1213 02:19:45.261775    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:46.262740 kubelet[2004]: E1213 02:19:46.262682    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:46.995799 kubelet[2004]: I1213 02:19:46.995745    2004 topology_manager.go:215] "Topology Admit Handler" podUID="c46cba7e-cc0d-4390-a845-18b2194d2752" podNamespace="default" podName="test-pod-1"
Dec 13 02:19:47.009644 systemd[1]: Created slice kubepods-besteffort-podc46cba7e_cc0d_4390_a845_18b2194d2752.slice.
Dec 13 02:19:47.107793 kubelet[2004]: I1213 02:19:47.107747    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d45331a-00fd-486b-9cfd-8977e97458a7\" (UniqueName: \"kubernetes.io/nfs/c46cba7e-cc0d-4390-a845-18b2194d2752-pvc-8d45331a-00fd-486b-9cfd-8977e97458a7\") pod \"test-pod-1\" (UID: \"c46cba7e-cc0d-4390-a845-18b2194d2752\") " pod="default/test-pod-1"
Dec 13 02:19:47.107987 kubelet[2004]: I1213 02:19:47.107820    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvkpt\" (UniqueName: \"kubernetes.io/projected/c46cba7e-cc0d-4390-a845-18b2194d2752-kube-api-access-bvkpt\") pod \"test-pod-1\" (UID: \"c46cba7e-cc0d-4390-a845-18b2194d2752\") " pod="default/test-pod-1"
Dec 13 02:19:47.264121 kubelet[2004]: E1213 02:19:47.264013    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:47.271286 kernel: FS-Cache: Loaded
Dec 13 02:19:47.326369 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 02:19:47.326514 kernel: RPC: Registered udp transport module.
Dec 13 02:19:47.326544 kernel: RPC: Registered tcp transport module.
Dec 13 02:19:47.328148 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 02:19:47.404269 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 02:19:47.711662 kernel: NFS: Registering the id_resolver key type
Dec 13 02:19:47.711821 kernel: Key type id_resolver registered
Dec 13 02:19:47.711857 kernel: Key type id_legacy registered
Dec 13 02:19:47.763946 nfsidmap[3391]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 02:19:47.768085 nfsidmap[3392]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 02:19:47.914446 env[1647]: time="2024-12-13T02:19:47.914395542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c46cba7e-cc0d-4390-a845-18b2194d2752,Namespace:default,Attempt:0,}"
Dec 13 02:19:47.961637 (udev-worker)[3388]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:47.961792 (udev-worker)[3379]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:19:47.968259 kernel: eth0: renamed from tmp470c8
Dec 13 02:19:47.971508 systemd-networkd[1377]: lxc7be7d091ce23: Link UP
Dec 13 02:19:47.974612 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:19:47.974728 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7be7d091ce23: link becomes ready
Dec 13 02:19:47.977035 systemd-networkd[1377]: lxc7be7d091ce23: Gained carrier
Dec 13 02:19:48.166180 env[1647]: time="2024-12-13T02:19:48.166070243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:19:48.166180 env[1647]: time="2024-12-13T02:19:48.166132994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:19:48.166823 env[1647]: time="2024-12-13T02:19:48.166148301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:19:48.166823 env[1647]: time="2024-12-13T02:19:48.166711115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/470c8a1521df2a271695aa04d49f925a8a0fbceef8374422b734d1d40a41c76a pid=3418 runtime=io.containerd.runc.v2
Dec 13 02:19:48.193200 systemd[1]: Started cri-containerd-470c8a1521df2a271695aa04d49f925a8a0fbceef8374422b734d1d40a41c76a.scope.
Dec 13 02:19:48.265113 kubelet[2004]: E1213 02:19:48.265079    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:48.302241 env[1647]: time="2024-12-13T02:19:48.302188010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c46cba7e-cc0d-4390-a845-18b2194d2752,Namespace:default,Attempt:0,} returns sandbox id \"470c8a1521df2a271695aa04d49f925a8a0fbceef8374422b734d1d40a41c76a\""
Dec 13 02:19:48.307278 env[1647]: time="2024-12-13T02:19:48.307244688Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:19:48.615985 env[1647]: time="2024-12-13T02:19:48.615478458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:48.617730 env[1647]: time="2024-12-13T02:19:48.617690567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:48.620616 env[1647]: time="2024-12-13T02:19:48.620577714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:48.622584 env[1647]: time="2024-12-13T02:19:48.622547856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:19:48.623302 env[1647]: time="2024-12-13T02:19:48.623189080Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:19:48.626190 env[1647]: time="2024-12-13T02:19:48.626156882Z" level=info msg="CreateContainer within sandbox \"470c8a1521df2a271695aa04d49f925a8a0fbceef8374422b734d1d40a41c76a\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 02:19:48.645308 env[1647]: time="2024-12-13T02:19:48.645260193Z" level=info msg="CreateContainer within sandbox \"470c8a1521df2a271695aa04d49f925a8a0fbceef8374422b734d1d40a41c76a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c437a793a101e83f376a56d16fae00507d6e0e51fcc03db49f4ed9e9aadbafba\""
Dec 13 02:19:48.646173 env[1647]: time="2024-12-13T02:19:48.646137664Z" level=info msg="StartContainer for \"c437a793a101e83f376a56d16fae00507d6e0e51fcc03db49f4ed9e9aadbafba\""
Dec 13 02:19:48.676440 systemd[1]: Started cri-containerd-c437a793a101e83f376a56d16fae00507d6e0e51fcc03db49f4ed9e9aadbafba.scope.
Dec 13 02:19:48.713418 env[1647]: time="2024-12-13T02:19:48.713368480Z" level=info msg="StartContainer for \"c437a793a101e83f376a56d16fae00507d6e0e51fcc03db49f4ed9e9aadbafba\" returns successfully"
Dec 13 02:19:49.200527 systemd-networkd[1377]: lxc7be7d091ce23: Gained IPv6LL
Dec 13 02:19:49.221370 kubelet[2004]: E1213 02:19:49.221324    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:49.265427 kubelet[2004]: E1213 02:19:49.265389    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:49.726091 kubelet[2004]: I1213 02:19:49.725975    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.404554501 podStartE2EDuration="21.723767086s" podCreationTimestamp="2024-12-13 02:19:28 +0000 UTC" firstStartedPulling="2024-12-13 02:19:48.305471992 +0000 UTC m=+60.049529541" lastFinishedPulling="2024-12-13 02:19:48.624684564 +0000 UTC m=+60.368742126" observedRunningTime="2024-12-13 02:19:49.72283761 +0000 UTC m=+61.466895181" watchObservedRunningTime="2024-12-13 02:19:49.723767086 +0000 UTC m=+61.467824656"
Dec 13 02:19:50.267482 kubelet[2004]: E1213 02:19:50.267425    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:51.268182 kubelet[2004]: E1213 02:19:51.268140    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:52.268902 kubelet[2004]: E1213 02:19:52.268854    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:53.270127 kubelet[2004]: E1213 02:19:53.270069    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:53.697383 systemd[1]: run-containerd-runc-k8s.io-89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2-runc.GhMerq.mount: Deactivated successfully.
Dec 13 02:19:53.748837 env[1647]: time="2024-12-13T02:19:53.748765379Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:19:53.770889 env[1647]: time="2024-12-13T02:19:53.770847778Z" level=info msg="StopContainer for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" with timeout 2 (s)"
Dec 13 02:19:53.771180 env[1647]: time="2024-12-13T02:19:53.771145446Z" level=info msg="Stop container \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" with signal terminated"
Dec 13 02:19:53.779632 systemd-networkd[1377]: lxc_health: Link DOWN
Dec 13 02:19:53.779642 systemd-networkd[1377]: lxc_health: Lost carrier
Dec 13 02:19:53.892640 systemd[1]: cri-containerd-89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2.scope: Deactivated successfully.
Dec 13 02:19:53.892955 systemd[1]: cri-containerd-89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2.scope: Consumed 8.248s CPU time.
Dec 13 02:19:53.918209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2-rootfs.mount: Deactivated successfully.
Dec 13 02:19:53.929181 env[1647]: time="2024-12-13T02:19:53.929131328Z" level=info msg="shim disconnected" id=89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2
Dec 13 02:19:53.929181 env[1647]: time="2024-12-13T02:19:53.929178697Z" level=warning msg="cleaning up after shim disconnected" id=89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2 namespace=k8s.io
Dec 13 02:19:53.929494 env[1647]: time="2024-12-13T02:19:53.929191661Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:53.938461 env[1647]: time="2024-12-13T02:19:53.938403114Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3547 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:53.944480 env[1647]: time="2024-12-13T02:19:53.944434278Z" level=info msg="StopContainer for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" returns successfully"
Dec 13 02:19:53.945474 env[1647]: time="2024-12-13T02:19:53.945434142Z" level=info msg="StopPodSandbox for \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\""
Dec 13 02:19:53.945613 env[1647]: time="2024-12-13T02:19:53.945510021Z" level=info msg="Container to stop \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:19:53.945613 env[1647]: time="2024-12-13T02:19:53.945532787Z" level=info msg="Container to stop \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:19:53.945613 env[1647]: time="2024-12-13T02:19:53.945548882Z" level=info msg="Container to stop \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:19:53.945613 env[1647]: time="2024-12-13T02:19:53.945564918Z" level=info msg="Container to stop \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:19:53.945613 env[1647]: time="2024-12-13T02:19:53.945580057Z" level=info msg="Container to stop \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:19:53.948151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada-shm.mount: Deactivated successfully.
Dec 13 02:19:53.962278 systemd[1]: cri-containerd-887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada.scope: Deactivated successfully.
Dec 13 02:19:53.993945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada-rootfs.mount: Deactivated successfully.
Dec 13 02:19:54.001972 env[1647]: time="2024-12-13T02:19:54.001913622Z" level=info msg="shim disconnected" id=887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada
Dec 13 02:19:54.003127 env[1647]: time="2024-12-13T02:19:54.001974876Z" level=warning msg="cleaning up after shim disconnected" id=887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada namespace=k8s.io
Dec 13 02:19:54.003127 env[1647]: time="2024-12-13T02:19:54.001998197Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:54.012836 env[1647]: time="2024-12-13T02:19:54.012774419Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3578 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:54.013647 env[1647]: time="2024-12-13T02:19:54.013608412Z" level=info msg="TearDown network for sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" successfully"
Dec 13 02:19:54.013771 env[1647]: time="2024-12-13T02:19:54.013641188Z" level=info msg="StopPodSandbox for \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" returns successfully"
Dec 13 02:19:54.169012 kubelet[2004]: I1213 02:19:54.168951    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-lib-modules\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169012 kubelet[2004]: I1213 02:19:54.169018    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-hubble-tls\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169383 kubelet[2004]: I1213 02:19:54.169043    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cni-path\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169383 kubelet[2004]: I1213 02:19:54.169065    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-kernel\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169383 kubelet[2004]: I1213 02:19:54.169092    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-bpf-maps\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169383 kubelet[2004]: I1213 02:19:54.169115    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-xtables-lock\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169383 kubelet[2004]: I1213 02:19:54.169145    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6898a2ee-0663-4225-875d-2b64cfe1295b-clustermesh-secrets\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169383 kubelet[2004]: I1213 02:19:54.169178    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-etc-cni-netd\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169667 kubelet[2004]: I1213 02:19:54.169199    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-run\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169667 kubelet[2004]: I1213 02:19:54.169241    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-hostproc\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169667 kubelet[2004]: I1213 02:19:54.169270    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsbmd\" (UniqueName: \"kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-kube-api-access-dsbmd\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169667 kubelet[2004]: I1213 02:19:54.169327    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-config-path\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169667 kubelet[2004]: I1213 02:19:54.169349    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-cgroup\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.169667 kubelet[2004]: I1213 02:19:54.169372    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-net\") pod \"6898a2ee-0663-4225-875d-2b64cfe1295b\" (UID: \"6898a2ee-0663-4225-875d-2b64cfe1295b\") "
Dec 13 02:19:54.176893 kubelet[2004]: I1213 02:19:54.176808    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.177084 kubelet[2004]: I1213 02:19:54.176961    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.177084 kubelet[2004]: I1213 02:19:54.176992    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.177084 kubelet[2004]: I1213 02:19:54.177016    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-hostproc" (OuterVolumeSpecName: "hostproc") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.185206 kubelet[2004]: I1213 02:19:54.185159    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-kube-api-access-dsbmd" (OuterVolumeSpecName: "kube-api-access-dsbmd") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "kube-api-access-dsbmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:19:54.185389 kubelet[2004]: I1213 02:19:54.185159    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6898a2ee-0663-4225-875d-2b64cfe1295b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:19:54.185504 kubelet[2004]: I1213 02:19:54.185479    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.191877 kubelet[2004]: I1213 02:19:54.191815    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:19:54.192053 kubelet[2004]: I1213 02:19:54.191930    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.192053 kubelet[2004]: I1213 02:19:54.191958    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.192053 kubelet[2004]: I1213 02:19:54.191977    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cni-path" (OuterVolumeSpecName: "cni-path") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.192053 kubelet[2004]: I1213 02:19:54.192003    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.192053 kubelet[2004]: I1213 02:19:54.192025    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:54.193704 kubelet[2004]: I1213 02:19:54.193654    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6898a2ee-0663-4225-875d-2b64cfe1295b" (UID: "6898a2ee-0663-4225-875d-2b64cfe1295b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270295    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-config-path\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270338    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-cgroup\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270353    2004 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-net\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270365    2004 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-host-proc-sys-kernel\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270379    2004 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-lib-modules\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270390    2004 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-hubble-tls\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270402    2004 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cni-path\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.270626 kubelet[2004]: I1213 02:19:54.270415    2004 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6898a2ee-0663-4225-875d-2b64cfe1295b-clustermesh-secrets\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: I1213 02:19:54.270427    2004 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-etc-cni-netd\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: I1213 02:19:54.270438    2004 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-bpf-maps\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: I1213 02:19:54.270449    2004 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-xtables-lock\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: I1213 02:19:54.270474    2004 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-hostproc\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: I1213 02:19:54.270487    2004 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dsbmd\" (UniqueName: \"kubernetes.io/projected/6898a2ee-0663-4225-875d-2b64cfe1295b-kube-api-access-dsbmd\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: I1213 02:19:54.270499    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6898a2ee-0663-4225-875d-2b64cfe1295b-cilium-run\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:54.271415 kubelet[2004]: E1213 02:19:54.270564    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:54.413041 kubelet[2004]: E1213 02:19:54.412982    2004 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:19:54.693660 systemd[1]: var-lib-kubelet-pods-6898a2ee\x2d0663\x2d4225\x2d875d\x2d2b64cfe1295b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddsbmd.mount: Deactivated successfully.
Dec 13 02:19:54.693885 systemd[1]: var-lib-kubelet-pods-6898a2ee\x2d0663\x2d4225\x2d875d\x2d2b64cfe1295b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:19:54.693987 systemd[1]: var-lib-kubelet-pods-6898a2ee\x2d0663\x2d4225\x2d875d\x2d2b64cfe1295b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:19:54.725314 kubelet[2004]: I1213 02:19:54.725285    2004 scope.go:117] "RemoveContainer" containerID="89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2"
Dec 13 02:19:54.734137 systemd[1]: Removed slice kubepods-burstable-pod6898a2ee_0663_4225_875d_2b64cfe1295b.slice.
Dec 13 02:19:54.734414 systemd[1]: kubepods-burstable-pod6898a2ee_0663_4225_875d_2b64cfe1295b.slice: Consumed 8.386s CPU time.
Dec 13 02:19:54.738041 env[1647]: time="2024-12-13T02:19:54.737281221Z" level=info msg="RemoveContainer for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\""
Dec 13 02:19:54.743461 env[1647]: time="2024-12-13T02:19:54.743406113Z" level=info msg="RemoveContainer for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" returns successfully"
Dec 13 02:19:54.747579 kubelet[2004]: I1213 02:19:54.747543    2004 scope.go:117] "RemoveContainer" containerID="4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830"
Dec 13 02:19:54.755346 env[1647]: time="2024-12-13T02:19:54.755296279Z" level=info msg="RemoveContainer for \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\""
Dec 13 02:19:54.765376 env[1647]: time="2024-12-13T02:19:54.765319961Z" level=info msg="RemoveContainer for \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\" returns successfully"
Dec 13 02:19:54.765782 kubelet[2004]: I1213 02:19:54.765754    2004 scope.go:117] "RemoveContainer" containerID="65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032"
Dec 13 02:19:54.769064 env[1647]: time="2024-12-13T02:19:54.769019785Z" level=info msg="RemoveContainer for \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\""
Dec 13 02:19:54.772540 env[1647]: time="2024-12-13T02:19:54.772497871Z" level=info msg="RemoveContainer for \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\" returns successfully"
Dec 13 02:19:54.772950 kubelet[2004]: I1213 02:19:54.772927    2004 scope.go:117] "RemoveContainer" containerID="8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b"
Dec 13 02:19:54.774205 env[1647]: time="2024-12-13T02:19:54.774171146Z" level=info msg="RemoveContainer for \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\""
Dec 13 02:19:54.780193 env[1647]: time="2024-12-13T02:19:54.780142729Z" level=info msg="RemoveContainer for \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\" returns successfully"
Dec 13 02:19:54.781556 kubelet[2004]: I1213 02:19:54.781479    2004 scope.go:117] "RemoveContainer" containerID="d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3"
Dec 13 02:19:54.787816 env[1647]: time="2024-12-13T02:19:54.787771054Z" level=info msg="RemoveContainer for \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\""
Dec 13 02:19:54.791969 env[1647]: time="2024-12-13T02:19:54.791917556Z" level=info msg="RemoveContainer for \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\" returns successfully"
Dec 13 02:19:54.792644 kubelet[2004]: I1213 02:19:54.792619    2004 scope.go:117] "RemoveContainer" containerID="89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2"
Dec 13 02:19:54.792954 env[1647]: time="2024-12-13T02:19:54.792880796Z" level=error msg="ContainerStatus for \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\": not found"
Dec 13 02:19:54.793120 kubelet[2004]: E1213 02:19:54.793092    2004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\": not found" containerID="89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2"
Dec 13 02:19:54.793259 kubelet[2004]: I1213 02:19:54.793128    2004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2"} err="failed to get container status \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\": rpc error: code = NotFound desc = an error occurred when try to find container \"89e41f1e5d5f7a37ba965a7ab89af9b06d877b49e078ce907506dc47dac55ed2\": not found"
Dec 13 02:19:54.793344 kubelet[2004]: I1213 02:19:54.793261    2004 scope.go:117] "RemoveContainer" containerID="4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830"
Dec 13 02:19:54.793532 env[1647]: time="2024-12-13T02:19:54.793473395Z" level=error msg="ContainerStatus for \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\": not found"
Dec 13 02:19:54.793692 kubelet[2004]: E1213 02:19:54.793668    2004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\": not found" containerID="4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830"
Dec 13 02:19:54.793776 kubelet[2004]: I1213 02:19:54.793697    2004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830"} err="failed to get container status \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cc36b5419a54c997db63eca91d7af4137cab962223d0d2aa35e82c420e58830\": not found"
Dec 13 02:19:54.793776 kubelet[2004]: I1213 02:19:54.793722    2004 scope.go:117] "RemoveContainer" containerID="65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032"
Dec 13 02:19:54.794022 env[1647]: time="2024-12-13T02:19:54.793974825Z" level=error msg="ContainerStatus for \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\": not found"
Dec 13 02:19:54.794275 kubelet[2004]: E1213 02:19:54.794251    2004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\": not found" containerID="65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032"
Dec 13 02:19:54.794345 kubelet[2004]: I1213 02:19:54.794278    2004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032"} err="failed to get container status \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\": rpc error: code = NotFound desc = an error occurred when try to find container \"65b8a45738278648cd896d30f66046d14cf7f263272a1babfa5b3ce014f80032\": not found"
Dec 13 02:19:54.794345 kubelet[2004]: I1213 02:19:54.794298    2004 scope.go:117] "RemoveContainer" containerID="8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b"
Dec 13 02:19:54.794648 env[1647]: time="2024-12-13T02:19:54.794593624Z" level=error msg="ContainerStatus for \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\": not found"
Dec 13 02:19:54.795097 kubelet[2004]: E1213 02:19:54.795076    2004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\": not found" containerID="8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b"
Dec 13 02:19:54.795328 kubelet[2004]: I1213 02:19:54.795195    2004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b"} err="failed to get container status \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c66b4aac3c84f6e5d20a52ff067edfd4fa152354c43b7449b25444bda50461b\": not found"
Dec 13 02:19:54.795328 kubelet[2004]: I1213 02:19:54.795238    2004 scope.go:117] "RemoveContainer" containerID="d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3"
Dec 13 02:19:54.795660 env[1647]: time="2024-12-13T02:19:54.795614140Z" level=error msg="ContainerStatus for \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\": not found"
Dec 13 02:19:54.796123 kubelet[2004]: E1213 02:19:54.796094    2004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\": not found" containerID="d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3"
Dec 13 02:19:54.796403 kubelet[2004]: I1213 02:19:54.796127    2004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3"} err="failed to get container status \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d225beb0d40ff02fedc9b5a6368803fd032f851a32214c0c5923903d3380caa3\": not found"
Dec 13 02:19:55.271763 kubelet[2004]: E1213 02:19:55.271708    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:55.463577 kubelet[2004]: I1213 02:19:55.463536    2004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" path="/var/lib/kubelet/pods/6898a2ee-0663-4225-875d-2b64cfe1295b/volumes"
Dec 13 02:19:56.272583 kubelet[2004]: E1213 02:19:56.272528    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:56.857562 kubelet[2004]: I1213 02:19:56.857470    2004 topology_manager.go:215] "Topology Admit Handler" podUID="f2780c99-1080-4245-9e18-fff153aa982c" podNamespace="kube-system" podName="cilium-8lc8p"
Dec 13 02:19:56.862064 kubelet[2004]: E1213 02:19:56.862024    2004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" containerName="apply-sysctl-overwrites"
Dec 13 02:19:56.862064 kubelet[2004]: E1213 02:19:56.862072    2004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" containerName="mount-cgroup"
Dec 13 02:19:56.862064 kubelet[2004]: E1213 02:19:56.862084    2004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" containerName="mount-bpf-fs"
Dec 13 02:19:56.862373 kubelet[2004]: E1213 02:19:56.862093    2004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" containerName="clean-cilium-state"
Dec 13 02:19:56.862373 kubelet[2004]: E1213 02:19:56.862104    2004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" containerName="cilium-agent"
Dec 13 02:19:56.862373 kubelet[2004]: I1213 02:19:56.862145    2004 memory_manager.go:354] "RemoveStaleState removing state" podUID="6898a2ee-0663-4225-875d-2b64cfe1295b" containerName="cilium-agent"
Dec 13 02:19:56.862488 kubelet[2004]: I1213 02:19:56.862459    2004 topology_manager.go:215] "Topology Admit Handler" podUID="904834f1-45a3-4090-a4d0-84cb46f7bbc9" podNamespace="kube-system" podName="cilium-operator-599987898-jhstx"
Dec 13 02:19:56.868874 systemd[1]: Created slice kubepods-besteffort-pod904834f1_45a3_4090_a4d0_84cb46f7bbc9.slice.
Dec 13 02:19:56.876440 systemd[1]: Created slice kubepods-burstable-podf2780c99_1080_4245_9e18_fff153aa982c.slice.
Dec 13 02:19:56.986248 kubelet[2004]: I1213 02:19:56.986151    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-kernel\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.986442 kubelet[2004]: I1213 02:19:56.986267    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-net\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.986442 kubelet[2004]: I1213 02:19:56.986294    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-run\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.986442 kubelet[2004]: I1213 02:19:56.986314    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-hostproc\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.986442 kubelet[2004]: I1213 02:19:56.986337    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-etc-cni-netd\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.986442 kubelet[2004]: I1213 02:19:56.986356    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-xtables-lock\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.986442 kubelet[2004]: I1213 02:19:56.986376    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2780c99-1080-4245-9e18-fff153aa982c-cilium-config-path\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.987881 kubelet[2004]: I1213 02:19:56.987847    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-bpf-maps\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.988024 kubelet[2004]: I1213 02:19:56.987919    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cni-path\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.988024 kubelet[2004]: I1213 02:19:56.987948    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-clustermesh-secrets\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.988024 kubelet[2004]: I1213 02:19:56.987975    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-cilium-ipsec-secrets\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.988024 kubelet[2004]: I1213 02:19:56.988011    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-hubble-tls\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.989630 kubelet[2004]: I1213 02:19:56.988036    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brcsh\" (UniqueName: \"kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-kube-api-access-brcsh\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.989630 kubelet[2004]: I1213 02:19:56.988066    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-lib-modules\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
Dec 13 02:19:56.989630 kubelet[2004]: I1213 02:19:56.988092    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/904834f1-45a3-4090-a4d0-84cb46f7bbc9-cilium-config-path\") pod \"cilium-operator-599987898-jhstx\" (UID: \"904834f1-45a3-4090-a4d0-84cb46f7bbc9\") " pod="kube-system/cilium-operator-599987898-jhstx"
Dec 13 02:19:56.989630 kubelet[2004]: I1213 02:19:56.989517    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsc2w\" (UniqueName: \"kubernetes.io/projected/904834f1-45a3-4090-a4d0-84cb46f7bbc9-kube-api-access-wsc2w\") pod \"cilium-operator-599987898-jhstx\" (UID: \"904834f1-45a3-4090-a4d0-84cb46f7bbc9\") " pod="kube-system/cilium-operator-599987898-jhstx"
Dec 13 02:19:56.989630 kubelet[2004]: I1213 02:19:56.989555    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-cgroup\") pod \"cilium-8lc8p\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") " pod="kube-system/cilium-8lc8p"
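
The reconciler lines above show the kubelet verifying every volume the two new pods need before their sandboxes are built: the agent's hostPath mounts (bpf-maps, cilium-run, cilium-cgroup, cni-path and so on), the clustermesh and IPsec secrets, the cilium-config-path ConfigMaps, and the projected kube-api-access service-account tokens. For working with a journal like this one, here is a hedged, stdlib-only Python sketch that pulls the volume name, plugin type and pod out of such lines; the regex is an assumption tied to the exact quoting shown above.

```python
import re
import sys

# Matches kubelet "VerifyControllerAttachedVolume started for volume ..." lines
# exactly as they are quoted in this journal (note the escaped \" sequences).
PATTERN = re.compile(
    r'volume \\"(?P<name>[^"\\]+)\\" '
    r'\(UniqueName: \\"kubernetes\.io/(?P<plugin>[^/]+)/[^"\\]+\\"\)'
    r'.*pod="(?P<pod>[^"]+)"'
)

def volumes(lines):
    """Yield (pod, volume name, plugin) for each reconciler line that matches."""
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield m.group("pod"), m.group("name"), m.group("plugin")

if __name__ == "__main__":
    for pod, name, plugin in volumes(sys.stdin):
        print(f"{pod}: {name} ({plugin})")
```

Fed the lines above, it prints entries such as kube-system/cilium-8lc8p: bpf-maps (host-path) and kube-system/cilium-operator-599987898-jhstx: cilium-config-path (configmap).
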
Dec 13 02:19:57.188180 env[1647]: time="2024-12-13T02:19:57.187084327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lc8p,Uid:f2780c99-1080-4245-9e18-fff153aa982c,Namespace:kube-system,Attempt:0,}"
Dec 13 02:19:57.205995 env[1647]: time="2024-12-13T02:19:57.205911856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:19:57.205995 env[1647]: time="2024-12-13T02:19:57.205954870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:19:57.206282 env[1647]: time="2024-12-13T02:19:57.205971134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:19:57.206400 env[1647]: time="2024-12-13T02:19:57.206291918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7 pid=3608 runtime=io.containerd.runc.v2
Dec 13 02:19:57.235507 systemd[1]: Started cri-containerd-7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7.scope.
Dec 13 02:19:57.273256 env[1647]: time="2024-12-13T02:19:57.272497185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lc8p,Uid:f2780c99-1080-4245-9e18-fff153aa982c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\""
Dec 13 02:19:57.273793 kubelet[2004]: E1213 02:19:57.273599    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:57.277365 env[1647]: time="2024-12-13T02:19:57.277329823Z" level=info msg="CreateContainer within sandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:19:57.290920 env[1647]: time="2024-12-13T02:19:57.290874193Z" level=info msg="CreateContainer within sandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\""
Dec 13 02:19:57.291995 env[1647]: time="2024-12-13T02:19:57.291967285Z" level=info msg="StartContainer for \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\""
Dec 13 02:19:57.314566 systemd[1]: Started cri-containerd-2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8.scope.
Dec 13 02:19:57.330472 systemd[1]: cri-containerd-2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8.scope: Deactivated successfully.
Dec 13 02:19:57.348205 env[1647]: time="2024-12-13T02:19:57.348111524Z" level=info msg="shim disconnected" id=2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8
Dec 13 02:19:57.348205 env[1647]: time="2024-12-13T02:19:57.348198370Z" level=warning msg="cleaning up after shim disconnected" id=2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8 namespace=k8s.io
Dec 13 02:19:57.348205 env[1647]: time="2024-12-13T02:19:57.348213997Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:57.366944 env[1647]: time="2024-12-13T02:19:57.366893380Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:19:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:19:57.367336 env[1647]: time="2024-12-13T02:19:57.367194882Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed"
Dec 13 02:19:57.368343 env[1647]: time="2024-12-13T02:19:57.368291298Z" level=error msg="Failed to pipe stdout of container \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\"" error="reading from a closed fifo"
Dec 13 02:19:57.368469 env[1647]: time="2024-12-13T02:19:57.368318213Z" level=error msg="Failed to pipe stderr of container \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\"" error="reading from a closed fifo"
Dec 13 02:19:57.370372 env[1647]: time="2024-12-13T02:19:57.370314390Z" level=error msg="StartContainer for \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:19:57.370697 kubelet[2004]: E1213 02:19:57.370654    2004 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8"
Dec 13 02:19:57.374640 kubelet[2004]: E1213 02:19:57.374546    2004 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:19:57.374640 kubelet[2004]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:19:57.374640 kubelet[2004]: rm /hostbin/cilium-mount
Dec 13 02:19:57.374835 kubelet[2004]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brcsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8lc8p_kube-system(f2780c99-1080-4245-9e18-fff153aa982c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:19:57.374835 kubelet[2004]: E1213 02:19:57.374677    2004 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8lc8p" podUID="f2780c99-1080-4245-9e18-fff153aa982c"
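
The failure above happens before the mount-cgroup process ever runs: runc aborts during container init because writing /proc/self/attr/keycreate returns EINVAL. That procfs file is the SELinux attribute a process writes to label the kernel keys it creates, and the init container spec dumped above asks for the SELinux type spc_t, so an EINVAL here usually means the host cannot apply the requested label (for instance when SELinux is not enabled in a state that accepts it). A minimal, stdlib-only sketch for looking at those attributes on a host; the attribute list is an assumption and not exhaustive.

```python
import os

# SELinux process attributes exposed under /proc/self/attr; runc writes
# "keycreate" (and "exec") when the container spec carries an SELinux label
# such as the spc_t type shown in the kubelet dump above.
ATTRS = ("current", "exec", "keycreate")  # assumed subset of interest

for name in ATTRS:
    path = f"/proc/self/attr/{name}"
    if not os.path.exists(path):
        print(f"{path}: not present on this kernel")
        continue
    try:
        with open(path, "rb") as fh:
            value = fh.read().rstrip(b"\x00\n").decode() or "<unset>"
        print(f"{path}: {value}")
    except OSError as exc:  # reads can fail, e.g. with EINVAL, when no LSM serves them
        print(f"{path}: {exc}")
```
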
Dec 13 02:19:57.475313 env[1647]: time="2024-12-13T02:19:57.475177749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jhstx,Uid:904834f1-45a3-4090-a4d0-84cb46f7bbc9,Namespace:kube-system,Attempt:0,}"
Dec 13 02:19:57.492814 env[1647]: time="2024-12-13T02:19:57.492722019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:19:57.492814 env[1647]: time="2024-12-13T02:19:57.492775277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:19:57.493275 env[1647]: time="2024-12-13T02:19:57.493198428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:19:57.493755 env[1647]: time="2024-12-13T02:19:57.493670013Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57a503eddaed4e9467a5291b246de4ce2a618cd4c65cd5bbb48d11d0088d4d0f pid=3693 runtime=io.containerd.runc.v2
Dec 13 02:19:57.512685 systemd[1]: Started cri-containerd-57a503eddaed4e9467a5291b246de4ce2a618cd4c65cd5bbb48d11d0088d4d0f.scope.
Dec 13 02:19:57.560133 env[1647]: time="2024-12-13T02:19:57.560085724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jhstx,Uid:904834f1-45a3-4090-a4d0-84cb46f7bbc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"57a503eddaed4e9467a5291b246de4ce2a618cd4c65cd5bbb48d11d0088d4d0f\""
Dec 13 02:19:57.561795 env[1647]: time="2024-12-13T02:19:57.561755017Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 02:19:57.735753 env[1647]: time="2024-12-13T02:19:57.735630373Z" level=info msg="StopPodSandbox for \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\""
Dec 13 02:19:57.735753 env[1647]: time="2024-12-13T02:19:57.735691821Z" level=info msg="Container to stop \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:19:57.751734 systemd[1]: cri-containerd-7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7.scope: Deactivated successfully.
Dec 13 02:19:57.804326 env[1647]: time="2024-12-13T02:19:57.804270093Z" level=info msg="shim disconnected" id=7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7
Dec 13 02:19:57.805906 env[1647]: time="2024-12-13T02:19:57.805862502Z" level=warning msg="cleaning up after shim disconnected" id=7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7 namespace=k8s.io
Dec 13 02:19:57.806590 env[1647]: time="2024-12-13T02:19:57.806381065Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:57.822106 env[1647]: time="2024-12-13T02:19:57.821998980Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3743 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:57.823908 env[1647]: time="2024-12-13T02:19:57.823739680Z" level=info msg="TearDown network for sandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" successfully"
Dec 13 02:19:57.823908 env[1647]: time="2024-12-13T02:19:57.823895212Z" level=info msg="StopPodSandbox for \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" returns successfully"
Dec 13 02:19:58.000589 kubelet[2004]: I1213 02:19:58.000477    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-lib-modules\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.000551    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.000827    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-kernel\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.000899    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-bpf-maps\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.000954    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.000978    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.000998    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-clustermesh-secrets\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001107 kubelet[2004]: I1213 02:19:58.001019    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cni-path\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001504 kubelet[2004]: I1213 02:19:58.001423    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brcsh\" (UniqueName: \"kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-kube-api-access-brcsh\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001504 kubelet[2004]: I1213 02:19:58.001460    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-etc-cni-netd\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001504 kubelet[2004]: I1213 02:19:58.001489    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2780c99-1080-4245-9e18-fff153aa982c-cilium-config-path\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001637 kubelet[2004]: I1213 02:19:58.001509    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-run\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001637 kubelet[2004]: I1213 02:19:58.001532    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-cgroup\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001637 kubelet[2004]: I1213 02:19:58.001558    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-hubble-tls\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001637 kubelet[2004]: I1213 02:19:58.001582    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-hostproc\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001637 kubelet[2004]: I1213 02:19:58.001606    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-net\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001637 kubelet[2004]: I1213 02:19:58.001631    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-xtables-lock\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001891 kubelet[2004]: I1213 02:19:58.001657    2004 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-cilium-ipsec-secrets\") pod \"f2780c99-1080-4245-9e18-fff153aa982c\" (UID: \"f2780c99-1080-4245-9e18-fff153aa982c\") "
Dec 13 02:19:58.001891 kubelet[2004]: I1213 02:19:58.001730    2004 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-lib-modules\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.001891 kubelet[2004]: I1213 02:19:58.001746    2004 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-kernel\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.001891 kubelet[2004]: I1213 02:19:58.001760    2004 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-bpf-maps\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.005717 kubelet[2004]: I1213 02:19:58.005648    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.005915 kubelet[2004]: I1213 02:19:58.005693    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.006187 kubelet[2004]: I1213 02:19:58.006167    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.017785 kubelet[2004]: I1213 02:19:58.017724    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.021194 kubelet[2004]: I1213 02:19:58.017800    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.021194 kubelet[2004]: I1213 02:19:58.017823    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.023504 kubelet[2004]: I1213 02:19:58.023458    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:19:58.028397 kubelet[2004]: I1213 02:19:58.028350    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:19:58.028688 kubelet[2004]: I1213 02:19:58.028662    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:19:58.029142 kubelet[2004]: I1213 02:19:58.029119    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2780c99-1080-4245-9e18-fff153aa982c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:19:58.029303 kubelet[2004]: I1213 02:19:58.029139    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:19:58.031194 kubelet[2004]: I1213 02:19:58.031160    2004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-kube-api-access-brcsh" (OuterVolumeSpecName: "kube-api-access-brcsh") pod "f2780c99-1080-4245-9e18-fff153aa982c" (UID: "f2780c99-1080-4245-9e18-fff153aa982c"). InnerVolumeSpecName "kube-api-access-brcsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:19:58.102961 kubelet[2004]: I1213 02:19:58.102914    2004 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-host-proc-sys-net\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.102961 kubelet[2004]: I1213 02:19:58.102956    2004 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-xtables-lock\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.102961 kubelet[2004]: I1213 02:19:58.102969    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-cilium-ipsec-secrets\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.102980    2004 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2780c99-1080-4245-9e18-fff153aa982c-clustermesh-secrets\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103021    2004 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-brcsh\" (UniqueName: \"kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-kube-api-access-brcsh\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103033    2004 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-etc-cni-netd\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103045    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2780c99-1080-4245-9e18-fff153aa982c-cilium-config-path\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103058    2004 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cni-path\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103069    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-cgroup\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103078    2004 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2780c99-1080-4245-9e18-fff153aa982c-hubble-tls\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103087    2004 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-hostproc\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.103288 kubelet[2004]: I1213 02:19:58.103096    2004 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2780c99-1080-4245-9e18-fff153aa982c-cilium-run\") on node \"172.31.16.8\" DevicePath \"\""
Dec 13 02:19:58.123971 systemd[1]: var-lib-kubelet-pods-f2780c99\x2d1080\x2d4245\x2d9e18\x2dfff153aa982c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbrcsh.mount: Deactivated successfully.
Dec 13 02:19:58.124156 systemd[1]: var-lib-kubelet-pods-f2780c99\x2d1080\x2d4245\x2d9e18\x2dfff153aa982c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:19:58.124256 systemd[1]: var-lib-kubelet-pods-f2780c99\x2d1080\x2d4245\x2d9e18\x2dfff153aa982c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:19:58.124336 systemd[1]: var-lib-kubelet-pods-f2780c99\x2d1080\x2d4245\x2d9e18\x2dfff153aa982c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
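
The four .mount units deactivated above are the per-volume mounts the kubelet had created under /var/lib/kubelet/pods/f2780c99-.../volumes/, shown with systemd's unit-name escaping applied: the leading "/" is dropped, every remaining "/" becomes "-", and other special bytes are hex-escaped, which is why "-" appears as \x2d and "~" as \x7e. A rough sketch of that escaping follows; it assumes the rules just listed cover every character that occurs in these paths.

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping as seen in the .mount unit names above:
    strip the leading '/', turn every remaining '/' into '-', keep [A-Za-z0-9_.]
    and hex-escape everything else (so '-' -> \\x2d and '~' -> \\x7e)."""
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

print(systemd_escape_path(
    "/var/lib/kubelet/pods/f2780c99-1080-4245-9e18-fff153aa982c"
    "/volumes/kubernetes.io~projected/kube-api-access-brcsh") + ".mount")
```

Running it reproduces the kube-api-access-brcsh unit name from the first of the four lines above.
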
Dec 13 02:19:58.274518 kubelet[2004]: E1213 02:19:58.274398    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:58.739491 kubelet[2004]: I1213 02:19:58.739460    2004 scope.go:117] "RemoveContainer" containerID="2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8"
Dec 13 02:19:58.744114 env[1647]: time="2024-12-13T02:19:58.744040552Z" level=info msg="RemoveContainer for \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\""
Dec 13 02:19:58.755372 systemd[1]: Removed slice kubepods-burstable-podf2780c99_1080_4245_9e18_fff153aa982c.slice.
Dec 13 02:19:58.755932 env[1647]: time="2024-12-13T02:19:58.752247537Z" level=info msg="RemoveContainer for \"2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8\" returns successfully"
Dec 13 02:19:58.834323 kubelet[2004]: I1213 02:19:58.834282    2004 topology_manager.go:215] "Topology Admit Handler" podUID="ec13a442-7fe8-4cb6-84c2-053373393127" podNamespace="kube-system" podName="cilium-l4p74"
Dec 13 02:19:58.834511 kubelet[2004]: E1213 02:19:58.834343    2004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2780c99-1080-4245-9e18-fff153aa982c" containerName="mount-cgroup"
Dec 13 02:19:58.834511 kubelet[2004]: I1213 02:19:58.834371    2004 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2780c99-1080-4245-9e18-fff153aa982c" containerName="mount-cgroup"
Dec 13 02:19:58.841088 systemd[1]: Created slice kubepods-burstable-podec13a442_7fe8_4cb6_84c2_053373393127.slice.
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008257    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-lib-modules\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008356    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec13a442-7fe8-4cb6-84c2-053373393127-clustermesh-secrets\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008387    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-cni-path\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008411    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec13a442-7fe8-4cb6-84c2-053373393127-cilium-ipsec-secrets\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008439    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-etc-cni-netd\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008459    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-host-proc-sys-net\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008480    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec13a442-7fe8-4cb6-84c2-053373393127-hubble-tls\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008502    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjzq\" (UniqueName: \"kubernetes.io/projected/ec13a442-7fe8-4cb6-84c2-053373393127-kube-api-access-ssjzq\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008525    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-hostproc\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008549    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-cilium-cgroup\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008572    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec13a442-7fe8-4cb6-84c2-053373393127-cilium-config-path\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008595    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-host-proc-sys-kernel\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008623    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-cilium-run\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008645    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-bpf-maps\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.008865 kubelet[2004]: I1213 02:19:59.008670    2004 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec13a442-7fe8-4cb6-84c2-053373393127-xtables-lock\") pod \"cilium-l4p74\" (UID: \"ec13a442-7fe8-4cb6-84c2-053373393127\") " pod="kube-system/cilium-l4p74"
Dec 13 02:19:59.275995 kubelet[2004]: E1213 02:19:59.275878    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:19:59.414170 kubelet[2004]: E1213 02:19:59.414099    2004 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:19:59.451606 env[1647]: time="2024-12-13T02:19:59.451406700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4p74,Uid:ec13a442-7fe8-4cb6-84c2-053373393127,Namespace:kube-system,Attempt:0,}"
Dec 13 02:19:59.467360 kubelet[2004]: I1213 02:19:59.467316    2004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2780c99-1080-4245-9e18-fff153aa982c" path="/var/lib/kubelet/pods/f2780c99-1080-4245-9e18-fff153aa982c/volumes"
Dec 13 02:19:59.521431 env[1647]: time="2024-12-13T02:19:59.521276107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:19:59.521431 env[1647]: time="2024-12-13T02:19:59.521388805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:19:59.521765 env[1647]: time="2024-12-13T02:19:59.521655641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:19:59.522217 env[1647]: time="2024-12-13T02:19:59.522169743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c pid=3772 runtime=io.containerd.runc.v2
Dec 13 02:19:59.541398 systemd[1]: Started cri-containerd-1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c.scope.
Dec 13 02:19:59.582621 env[1647]: time="2024-12-13T02:19:59.582569018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4p74,Uid:ec13a442-7fe8-4cb6-84c2-053373393127,Namespace:kube-system,Attempt:0,} returns sandbox id \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\""
Dec 13 02:19:59.586103 env[1647]: time="2024-12-13T02:19:59.586063751Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:19:59.599515 env[1647]: time="2024-12-13T02:19:59.599472781Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777\""
Dec 13 02:19:59.600674 env[1647]: time="2024-12-13T02:19:59.600649611Z" level=info msg="StartContainer for \"ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777\""
Dec 13 02:19:59.628098 systemd[1]: Started cri-containerd-ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777.scope.
Dec 13 02:19:59.679774 env[1647]: time="2024-12-13T02:19:59.679719178Z" level=info msg="StartContainer for \"ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777\" returns successfully"
Dec 13 02:19:59.707976 systemd[1]: cri-containerd-ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777.scope: Deactivated successfully.
Dec 13 02:19:59.765911 env[1647]: time="2024-12-13T02:19:59.765798916Z" level=info msg="shim disconnected" id=ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777
Dec 13 02:19:59.765911 env[1647]: time="2024-12-13T02:19:59.765900365Z" level=warning msg="cleaning up after shim disconnected" id=ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777 namespace=k8s.io
Dec 13 02:19:59.766771 env[1647]: time="2024-12-13T02:19:59.765958055Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:59.784035 env[1647]: time="2024-12-13T02:19:59.783980469Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3859 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:00.277422 kubelet[2004]: E1213 02:20:00.277283    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:00.456118 kubelet[2004]: W1213 02:20:00.455907    2004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2780c99_1080_4245_9e18_fff153aa982c.slice/cri-containerd-2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8.scope WatchSource:0}: container "2c62c64beb523d0e72ed55914f3572e5c583786a7cd81e99c0159e5ba84336b8" in namespace "k8s.io": not found
Dec 13 02:20:00.757323 env[1647]: time="2024-12-13T02:20:00.757269731Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:20:00.771794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497845480.mount: Deactivated successfully.
Dec 13 02:20:00.783099 env[1647]: time="2024-12-13T02:20:00.782987786Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d\""
Dec 13 02:20:00.784147 env[1647]: time="2024-12-13T02:20:00.784052392Z" level=info msg="StartContainer for \"a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d\""
Dec 13 02:20:00.831766 systemd[1]: Started cri-containerd-a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d.scope.
Dec 13 02:20:00.869404 env[1647]: time="2024-12-13T02:20:00.869348270Z" level=info msg="StartContainer for \"a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d\" returns successfully"
Dec 13 02:20:00.893382 systemd[1]: cri-containerd-a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d.scope: Deactivated successfully.
Dec 13 02:20:00.936143 env[1647]: time="2024-12-13T02:20:00.936088176Z" level=info msg="shim disconnected" id=a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d
Dec 13 02:20:00.936143 env[1647]: time="2024-12-13T02:20:00.936144799Z" level=warning msg="cleaning up after shim disconnected" id=a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d namespace=k8s.io
Dec 13 02:20:00.936509 env[1647]: time="2024-12-13T02:20:00.936157736Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:00.948579 env[1647]: time="2024-12-13T02:20:00.948527656Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3920 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:01.006081 kubelet[2004]: I1213 02:20:01.006022    2004 setters.go:580] "Node became not ready" node="172.31.16.8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:20:01Z","lastTransitionTime":"2024-12-13T02:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
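
The setters.go entry above is the kubelet flipping this node's Ready condition to False while the replacement cilium agent is still working through its init containers, so the CNI plugin is not initialized yet. The condition is embedded in the log line as plain JSON, so it can be pulled apart directly; a small sketch using the exact payload from that line:

```python
import json

# Node condition exactly as logged by setters.go above.
condition = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2024-12-13T02:20:01Z",'
    '"lastTransitionTime":"2024-12-13T02:20:01Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'cni plugin not initialized"}'
)

print(f'{condition["type"]}={condition["status"]} ({condition["reason"]})')
print(condition["message"])
```
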
Dec 13 02:20:01.122610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d-rootfs.mount: Deactivated successfully.
Dec 13 02:20:01.278675 kubelet[2004]: E1213 02:20:01.278529    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:01.776795 env[1647]: time="2024-12-13T02:20:01.776740115Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:20:01.845751 env[1647]: time="2024-12-13T02:20:01.845684637Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376\""
Dec 13 02:20:01.854958 env[1647]: time="2024-12-13T02:20:01.854909781Z" level=info msg="StartContainer for \"2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376\""
Dec 13 02:20:01.975482 systemd[1]: Started cri-containerd-2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376.scope.
Dec 13 02:20:02.025595 env[1647]: time="2024-12-13T02:20:02.025552281Z" level=info msg="StartContainer for \"2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376\" returns successfully"
Dec 13 02:20:02.034076 systemd[1]: cri-containerd-2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376.scope: Deactivated successfully.
Dec 13 02:20:02.080943 env[1647]: time="2024-12-13T02:20:02.080890719Z" level=info msg="shim disconnected" id=2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376
Dec 13 02:20:02.080943 env[1647]: time="2024-12-13T02:20:02.080941644Z" level=warning msg="cleaning up after shim disconnected" id=2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376 namespace=k8s.io
Dec 13 02:20:02.080943 env[1647]: time="2024-12-13T02:20:02.080953203Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:02.094509 env[1647]: time="2024-12-13T02:20:02.094412728Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3976 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:02.123527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376-rootfs.mount: Deactivated successfully.
Dec 13 02:20:02.279937 kubelet[2004]: E1213 02:20:02.279771    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:02.776507 env[1647]: time="2024-12-13T02:20:02.776463165Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:20:02.798431 env[1647]: time="2024-12-13T02:20:02.798379127Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6\""
Dec 13 02:20:02.799609 env[1647]: time="2024-12-13T02:20:02.799573198Z" level=info msg="StartContainer for \"e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6\""
Dec 13 02:20:02.850630 systemd[1]: Started cri-containerd-e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6.scope.
Dec 13 02:20:02.882806 systemd[1]: cri-containerd-e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6.scope: Deactivated successfully.
Dec 13 02:20:02.884731 env[1647]: time="2024-12-13T02:20:02.884687510Z" level=info msg="StartContainer for \"e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6\" returns successfully"
Dec 13 02:20:02.919083 env[1647]: time="2024-12-13T02:20:02.919023697Z" level=info msg="shim disconnected" id=e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6
Dec 13 02:20:02.919443 env[1647]: time="2024-12-13T02:20:02.919087986Z" level=warning msg="cleaning up after shim disconnected" id=e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6 namespace=k8s.io
Dec 13 02:20:02.919443 env[1647]: time="2024-12-13T02:20:02.919101438Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:02.935136 env[1647]: time="2024-12-13T02:20:02.935077301Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4031 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:03.122613 systemd[1]: run-containerd-runc-k8s.io-e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6-runc.ONXI1X.mount: Deactivated successfully.
Dec 13 02:20:03.123161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6-rootfs.mount: Deactivated successfully.
Dec 13 02:20:03.280478 kubelet[2004]: E1213 02:20:03.280435    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:03.624737 kubelet[2004]: W1213 02:20:03.624629    2004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec13a442_7fe8_4cb6_84c2_053373393127.slice/cri-containerd-ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777.scope WatchSource:0}: task ceb98207575a65831472f62ea7f1055d01b4a4c3e3771a27e62e754f52985777 not found: not found
Dec 13 02:20:03.800829 env[1647]: time="2024-12-13T02:20:03.800630840Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:20:03.840083 env[1647]: time="2024-12-13T02:20:03.839999944Z" level=info msg="CreateContainer within sandbox \"1229782ffa7fd0d9bc44e2da9bdc00ce5b84f1e15b73c1dc814d7fd87d15320c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38\""
Dec 13 02:20:03.840842 env[1647]: time="2024-12-13T02:20:03.840805138Z" level=info msg="StartContainer for \"b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38\""
Dec 13 02:20:03.865872 systemd[1]: Started cri-containerd-b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38.scope.
Dec 13 02:20:03.909191 env[1647]: time="2024-12-13T02:20:03.908616832Z" level=info msg="StartContainer for \"b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38\" returns successfully"
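
With that, the cilium-l4p74 pod has run its init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and the long-lived cilium-agent container has just started in sandbox 1229782ffa7f.... Since containerd logs each successful CreateContainer with both the container name and the returned 64-hex ID, a name-to-ID map can be rebuilt from the journal; a hedged parsing sketch whose regex mirrors the message format above:

```python
import re
import sys

# containerd logs: CreateContainer ... for &ContainerMetadata{Name:<name>,...}
# returns container id \"<64-hex id>\" -- build a name -> id map from that.
PATTERN = re.compile(
    r'CreateContainer within sandbox .* for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} '
    r'returns container id \\"(?P<cid>[0-9a-f]{64})\\"'
)

def container_ids(lines):
    mapping = {}
    for line in lines:
        m = PATTERN.search(line)
        if m:
            mapping[m.group("name")] = m.group("cid")
    return mapping

if __name__ == "__main__":
    for name, cid in container_ids(sys.stdin).items():
        print(f"{name}: {cid[:12]}")
```

Later matches overwrite earlier ones, so each name maps to its most recent container; that matters here because mount-cgroup appears both in the failed cilium-8lc8p pod and in the new cilium-l4p74 pod.
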
Dec 13 02:20:04.281584 kubelet[2004]: E1213 02:20:04.281538    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:04.777263 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
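
The lone kernel line above comes from the crypto self-test manager and simply notes that no self-test exists for the seqiv(rfc4106(gcm(aes))) template that was just instantiated; its appearance right after the agent starts is consistent with Cilium setting up IPsec ESP with AES-GCM, given the cilium-ipsec-secrets volume mounted into the pod. Once instantiated, the algorithm shows up in /proc/crypto; a small stdlib-only sketch to list the matching records:

```python
# Print the /proc/crypto records whose name mentions rfc4106 (AES-GCM as used
# by IPsec ESP); the records appear once the template above is instantiated.
record, matches = {}, []
with open("/proc/crypto") as fh:
    for raw in fh:
        line = raw.strip()
        if not line:                        # a blank line ends one record
            if "rfc4106" in record.get("name", ""):
                matches.append(record)
            record = {}
            continue
        key, _, value = line.partition(":")
        record[key.strip()] = value.strip()
if "rfc4106" in record.get("name", ""):     # handle a trailing record, if any
    matches.append(record)

for rec in matches:
    print(f'{rec.get("name")}  driver={rec.get("driver")}  selftest={rec.get("selftest")}')
```
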
Dec 13 02:20:05.282018 kubelet[2004]: E1213 02:20:05.281961    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:06.282577 kubelet[2004]: E1213 02:20:06.282517    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:06.735895 kubelet[2004]: W1213 02:20:06.735061    2004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec13a442_7fe8_4cb6_84c2_053373393127.slice/cri-containerd-a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d.scope WatchSource:0}: task a1fce7c983e6a493ce8a0d344727af41d82c02bbf904a6bab59b7374765cde5d not found: not found
Dec 13 02:20:07.270457 systemd[1]: run-containerd-runc-k8s.io-b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38-runc.FoKkFl.mount: Deactivated successfully.
Dec 13 02:20:07.290508 kubelet[2004]: E1213 02:20:07.290430    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:08.194543 env[1647]: time="2024-12-13T02:20:08.194505558Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:08.198803 env[1647]: time="2024-12-13T02:20:08.198759268Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:08.202006 env[1647]: time="2024-12-13T02:20:08.201971270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:08.202386 env[1647]: time="2024-12-13T02:20:08.202360423Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:20:08.218493 env[1647]: time="2024-12-13T02:20:08.218455139Z" level=info msg="CreateContainer within sandbox \"57a503eddaed4e9467a5291b246de4ce2a618cd4c65cd5bbb48d11d0088d4d0f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:20:08.240586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613013760.mount: Deactivated successfully.
Dec 13 02:20:08.249690 env[1647]: time="2024-12-13T02:20:08.249640939Z" level=info msg="CreateContainer within sandbox \"57a503eddaed4e9467a5291b246de4ce2a618cd4c65cd5bbb48d11d0088d4d0f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"87124a050e54e6b3f6ee3bcc56e9398b7324c997ef338dcbc788882cc228c796\""
Dec 13 02:20:08.250466 env[1647]: time="2024-12-13T02:20:08.250426247Z" level=info msg="StartContainer for \"87124a050e54e6b3f6ee3bcc56e9398b7324c997ef338dcbc788882cc228c796\""
Dec 13 02:20:08.306137 systemd[1]: Started cri-containerd-87124a050e54e6b3f6ee3bcc56e9398b7324c997ef338dcbc788882cc228c796.scope.
Dec 13 02:20:08.317904 kubelet[2004]: E1213 02:20:08.317853    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:08.357273 env[1647]: time="2024-12-13T02:20:08.357209493Z" level=info msg="StartContainer for \"87124a050e54e6b3f6ee3bcc56e9398b7324c997ef338dcbc788882cc228c796\" returns successfully"
Dec 13 02:20:08.460161 systemd-networkd[1377]: lxc_health: Link UP
Dec 13 02:20:08.466985 (udev-worker)[4637]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:20:08.467397 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:20:08.467371 systemd-networkd[1377]: lxc_health: Gained carrier
Dec 13 02:20:08.874888 kubelet[2004]: I1213 02:20:08.874384    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l4p74" podStartSLOduration=10.874356053 podStartE2EDuration="10.874356053s" podCreationTimestamp="2024-12-13 02:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:20:04.87421949 +0000 UTC m=+76.618277063" watchObservedRunningTime="2024-12-13 02:20:08.874356053 +0000 UTC m=+80.618413626"
Dec 13 02:20:09.218661 kubelet[2004]: E1213 02:20:09.218614    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:09.319722 kubelet[2004]: E1213 02:20:09.319679    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:09.488436 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Dec 13 02:20:09.584027 kubelet[2004]: I1213 02:20:09.533440    2004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-jhstx" podStartSLOduration=2.888217521 podStartE2EDuration="13.533422714s" podCreationTimestamp="2024-12-13 02:19:56 +0000 UTC" firstStartedPulling="2024-12-13 02:19:57.561338176 +0000 UTC m=+69.305395728" lastFinishedPulling="2024-12-13 02:20:08.206543358 +0000 UTC m=+79.950600921" observedRunningTime="2024-12-13 02:20:08.870406521 +0000 UTC m=+80.614464095" watchObservedRunningTime="2024-12-13 02:20:09.533422714 +0000 UTC m=+81.277480286"
Dec 13 02:20:09.756957 systemd[1]: run-containerd-runc-k8s.io-b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38-runc.rqFMO4.mount: Deactivated successfully.
Dec 13 02:20:09.848303 kubelet[2004]: W1213 02:20:09.848170    2004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec13a442_7fe8_4cb6_84c2_053373393127.slice/cri-containerd-2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376.scope WatchSource:0}: task 2ca8d8f8ad78982d3980e0205450f5a2f2d6e01be7c68dfe5fc713bdfe90e376 not found: not found
Dec 13 02:20:10.320717 kubelet[2004]: E1213 02:20:10.320667    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:11.322090 kubelet[2004]: E1213 02:20:11.322033    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:12.311047 systemd[1]: run-containerd-runc-k8s.io-b53c6f5ab0aa418ce0b4f961e90c5db7b4114cf5e95757b2fb5ce7f9a7c1cf38-runc.JSFoWJ.mount: Deactivated successfully.
Dec 13 02:20:12.324548 kubelet[2004]: E1213 02:20:12.324508    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:12.959231 kubelet[2004]: W1213 02:20:12.959171    2004 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec13a442_7fe8_4cb6_84c2_053373393127.slice/cri-containerd-e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6.scope WatchSource:0}: task e58cd47cd63360db3467dd9a9ec7fd675bf2a6f21a4b9b1222d6acea925bbab6 not found: not found
Dec 13 02:20:13.325994 kubelet[2004]: E1213 02:20:13.325604    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:14.327087 kubelet[2004]: E1213 02:20:14.327038    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:15.327957 kubelet[2004]: E1213 02:20:15.327908    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:16.328304 kubelet[2004]: E1213 02:20:16.328250    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:17.329300 kubelet[2004]: E1213 02:20:17.329261    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:18.330303 kubelet[2004]: E1213 02:20:18.330254    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:19.331115 kubelet[2004]: E1213 02:20:19.331071    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:20.331684 kubelet[2004]: E1213 02:20:20.331629    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:21.332449 kubelet[2004]: E1213 02:20:21.332404    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:22.333847 kubelet[2004]: E1213 02:20:22.333791    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:23.334354 kubelet[2004]: E1213 02:20:23.334285    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:24.334751 kubelet[2004]: E1213 02:20:24.334711    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:25.335205 kubelet[2004]: E1213 02:20:25.335149    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:26.335372 kubelet[2004]: E1213 02:20:26.335330    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:27.335531 kubelet[2004]: E1213 02:20:27.335478    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:28.336525 kubelet[2004]: E1213 02:20:28.336464    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:29.219495 kubelet[2004]: E1213 02:20:29.219442    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:29.337517 kubelet[2004]: E1213 02:20:29.337464    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:30.337933 kubelet[2004]: E1213 02:20:30.337876    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:31.299882 kubelet[2004]: E1213 02:20:31.299821    2004 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:20:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:20:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:20:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:20:21Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71035905},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\\\",\\\"registry.k8s.io/kube-proxy:v1.30.8\\\"],\\\"sizeBytes\\\":29056489},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.16.8\": Patch \"https://172.31.19.93:6443/api/v1/nodes/172.31.16.8/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:20:31.338559 kubelet[2004]: E1213 02:20:31.338500    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:31.469760 kubelet[2004]: E1213 02:20:31.469683    2004 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:20:32.339498 kubelet[2004]: E1213 02:20:32.339442    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:33.340208 kubelet[2004]: E1213 02:20:33.340152    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:34.340572 kubelet[2004]: E1213 02:20:34.340462    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:35.340722 kubelet[2004]: E1213 02:20:35.340663    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:36.340875 kubelet[2004]: E1213 02:20:36.340822    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:37.341582 kubelet[2004]: E1213 02:20:37.341528    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:38.342682 kubelet[2004]: E1213 02:20:38.342624    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:39.343423 kubelet[2004]: E1213 02:20:39.343295    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:40.343599 kubelet[2004]: E1213 02:20:40.343542    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:41.300421 kubelet[2004]: E1213 02:20:41.300370    2004 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.16.8\": Get \"https://172.31.19.93:6443/api/v1/nodes/172.31.16.8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:20:41.344616 kubelet[2004]: E1213 02:20:41.344558    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:41.470423 kubelet[2004]: E1213 02:20:41.470367    2004 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:20:42.345040 kubelet[2004]: E1213 02:20:42.344987    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:43.345257 kubelet[2004]: E1213 02:20:43.345188    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:44.346126 kubelet[2004]: E1213 02:20:44.346062    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:45.169647 kubelet[2004]: E1213 02:20:45.168870    2004 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": unexpected EOF"
Dec 13 02:20:45.184631 kubelet[2004]: E1213 02:20:45.184584    2004 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": read tcp 172.31.16.8:34822->172.31.19.93:6443: read: connection reset by peer"
Dec 13 02:20:45.185690 kubelet[2004]: E1213 02:20:45.185644    2004 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused"
Dec 13 02:20:45.185845 kubelet[2004]: I1213 02:20:45.185709    2004 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 02:20:45.186384 kubelet[2004]: E1213 02:20:45.186351    2004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="200ms"
Dec 13 02:20:45.346604 kubelet[2004]: E1213 02:20:45.346556    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:45.388303 kubelet[2004]: E1213 02:20:45.388257    2004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="400ms"
Dec 13 02:20:45.790457 kubelet[2004]: E1213 02:20:45.789659    2004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="800ms"
Dec 13 02:20:46.174576 kubelet[2004]: E1213 02:20:46.174484    2004 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.16.8\": Get \"https://172.31.19.93:6443/api/v1/nodes/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Dec 13 02:20:46.175399 kubelet[2004]: E1213 02:20:46.175364    2004 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.16.8\": Get \"https://172.31.19.93:6443/api/v1/nodes/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused"
Dec 13 02:20:46.176018 kubelet[2004]: E1213 02:20:46.175992    2004 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.16.8\": Get \"https://172.31.19.93:6443/api/v1/nodes/172.31.16.8?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused"
Dec 13 02:20:46.176018 kubelet[2004]: E1213 02:20:46.176016    2004 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Dec 13 02:20:46.347740 kubelet[2004]: E1213 02:20:46.347630    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:47.348053 kubelet[2004]: E1213 02:20:47.348013    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:48.348207 kubelet[2004]: E1213 02:20:48.348164    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:49.219618 kubelet[2004]: E1213 02:20:49.219561    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:49.285072 env[1647]: time="2024-12-13T02:20:49.285022916Z" level=info msg="StopPodSandbox for \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\""
Dec 13 02:20:49.285602 env[1647]: time="2024-12-13T02:20:49.285137498Z" level=info msg="TearDown network for sandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" successfully"
Dec 13 02:20:49.285602 env[1647]: time="2024-12-13T02:20:49.285183932Z" level=info msg="StopPodSandbox for \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" returns successfully"
Dec 13 02:20:49.289129 env[1647]: time="2024-12-13T02:20:49.288613017Z" level=info msg="RemovePodSandbox for \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\""
Dec 13 02:20:49.289350 env[1647]: time="2024-12-13T02:20:49.289137061Z" level=info msg="Forcibly stopping sandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\""
Dec 13 02:20:49.289350 env[1647]: time="2024-12-13T02:20:49.289268192Z" level=info msg="TearDown network for sandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" successfully"
Dec 13 02:20:49.300527 env[1647]: time="2024-12-13T02:20:49.300254286Z" level=info msg="RemovePodSandbox \"7b6f4d82731e56b5cc7e4ac9866e76a99bf4ced42cacf2a61c481cfa30a42ec7\" returns successfully"
Dec 13 02:20:49.303022 env[1647]: time="2024-12-13T02:20:49.302586634Z" level=info msg="StopPodSandbox for \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\""
Dec 13 02:20:49.303184 env[1647]: time="2024-12-13T02:20:49.303092948Z" level=info msg="TearDown network for sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" successfully"
Dec 13 02:20:49.303184 env[1647]: time="2024-12-13T02:20:49.303167261Z" level=info msg="StopPodSandbox for \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" returns successfully"
Dec 13 02:20:49.304880 env[1647]: time="2024-12-13T02:20:49.304783503Z" level=info msg="RemovePodSandbox for \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\""
Dec 13 02:20:49.305005 env[1647]: time="2024-12-13T02:20:49.304886395Z" level=info msg="Forcibly stopping sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\""
Dec 13 02:20:49.305005 env[1647]: time="2024-12-13T02:20:49.304986411Z" level=info msg="TearDown network for sandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" successfully"
Dec 13 02:20:49.312553 env[1647]: time="2024-12-13T02:20:49.312498432Z" level=info msg="RemovePodSandbox \"887d8a56601ce21f7f69b01316c5303f508be4c207d38de1d44a9b31bdb6cada\" returns successfully"
Dec 13 02:20:49.349475 kubelet[2004]: E1213 02:20:49.349423    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:50.350335 kubelet[2004]: E1213 02:20:50.350281    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:51.351158 kubelet[2004]: E1213 02:20:51.351096    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:52.361042 kubelet[2004]: E1213 02:20:52.351471    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:53.352067 kubelet[2004]: E1213 02:20:53.352013    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:54.352241 kubelet[2004]: E1213 02:20:54.352174    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:55.353363 kubelet[2004]: E1213 02:20:55.353309    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:56.354543 kubelet[2004]: E1213 02:20:56.354486    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:56.591053 kubelet[2004]: E1213 02:20:56.590991    2004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Dec 13 02:20:57.354764 kubelet[2004]: E1213 02:20:57.354715    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:58.355632 kubelet[2004]: E1213 02:20:58.355587    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:59.356486 kubelet[2004]: E1213 02:20:59.356434    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:00.357016 kubelet[2004]: E1213 02:21:00.356962    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:01.358097 kubelet[2004]: E1213 02:21:01.358038    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:02.359087 kubelet[2004]: E1213 02:21:02.359027    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:03.360869 kubelet[2004]: E1213 02:21:03.360811    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:04.361836 kubelet[2004]: E1213 02:21:04.361778    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:05.362931 kubelet[2004]: E1213 02:21:05.362877    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:06.363133 kubelet[2004]: E1213 02:21:06.363090    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:06.550934 kubelet[2004]: E1213 02:21:06.550880    2004 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.16.8\": Get \"https://172.31.19.93:6443/api/v1/nodes/172.31.16.8?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:21:07.364144 kubelet[2004]: E1213 02:21:07.364084    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:08.192867 kubelet[2004]: E1213 02:21:08.192666    2004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.8?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 13 02:21:08.365389 kubelet[2004]: E1213 02:21:08.365328    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:09.219521 kubelet[2004]: E1213 02:21:09.219479    2004 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:09.366281 kubelet[2004]: E1213 02:21:09.366231    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:10.367178 kubelet[2004]: E1213 02:21:10.367125    2004 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"